title | content | commands | url |
---|---|---|---|
Installing Satellite Server in a connected network environment
|
Installing Satellite Server in a connected network environment Red Hat Satellite 6.16 Install and configure Satellite Server in a network with Internet access Red Hat Satellite Documentation Team [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_satellite_server_in_a_connected_network_environment/index
|
4.3.8. Removing Volume Groups
|
4.3.8. Removing Volume Groups To remove a volume group that contains no logical volumes, use the vgremove command.
|
[
"vgremove officevg Volume group \"officevg\" successfully removed"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/VG_remove
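A short sketch of the surrounding workflow, assuming the officevg volume group from the example above; the lvs check and the commented lvremove step are illustrative additions for the case where logical volumes still exist, not part of the original example:

```
# Check whether any logical volumes remain in the volume group
lvs officevg

# If any remain, remove them first; vgremove succeeds only on an empty volume group
# lvremove officevg/<logical_volume_name>

# Remove the empty volume group
vgremove officevg
```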
|
Deploying OpenShift Data Foundation on any platform
|
Deploying OpenShift Data Foundation on any platform Red Hat OpenShift Data Foundation 4.15 Instructions on deploying OpenShift Data Foundation on any platform including virtualized and cloud environments. Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation to use local storage on any platform.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_on_any_platform/index
|
Appendix A. ASN.1 and Distinguished Names
|
Appendix A. ASN.1 and Distinguished Names Abstract The OSI Abstract Syntax Notation One (ASN.1) and X.500 Distinguished Names play an important role in the security standards that define X.509 certificates and LDAP directories. A.1. ASN.1 Overview The Abstract Syntax Notation One (ASN.1) was defined by the OSI standards body in the early 1980s to provide a way of defining data types and structures that are independent of any particular machine hardware or programming language. In many ways, ASN.1 can be considered a forerunner of modern interface definition languages, such as the OMG's IDL and WSDL, which are concerned with defining platform-independent data types. ASN.1 is important, because it is widely used in the definition of standards (for example, SNMP, X.509, and LDAP). In particular, ASN.1 is ubiquitous in the field of security standards. The formal definitions of X.509 certificates and distinguished names are described using ASN.1 syntax. You do not require detailed knowledge of ASN.1 syntax to use these security standards, but you need to be aware that ASN.1 is used for the basic definitions of most security-related data types. BER The OSI's Basic Encoding Rules (BER) define how to translate an ASN.1 data type into a sequence of octets (binary representation). The role played by BER with respect to ASN.1 is, therefore, similar to the role played by GIOP with respect to the OMG IDL. DER The OSI's Distinguished Encoding Rules (DER) are a specialization of the BER. The DER consists of the BER plus some additional rules to ensure that the encoding is unique (BER encodings are not). References You can read more about ASN.1 in the following standards documents: ASN.1 is defined in X.208. BER is defined in X.209. A.2. Distinguished Names Overview Historically, distinguished names (DN) are defined as the primary keys in an X.500 directory structure. However, DNs have come to be used in many other contexts as general purpose identifiers. In Apache CXF, DNs occur in the following contexts: X.509 certificates-for example, one of the DNs in a certificate identifies the owner of the certificate (the security principal). LDAP-DNs are used to locate objects in an LDAP directory tree. String representation of DN Although a DN is formally defined in ASN.1, there is also an LDAP standard that defines a UTF-8 string representation of a DN (see RFC 2253 ). The string representation provides a convenient basis for describing the structure of a DN. Note The string representation of a DN does not provide a unique representation of DER-encoded DN. Hence, a DN that is converted from string format back to DER format does not always recover the original DER encoding. DN string example The following string is a typical example of a DN: Structure of a DN string A DN string is built up from the following basic elements: OID . Attribute Types . AVA . RDN . OID An OBJECT IDENTIFIER (OID) is a sequence of bytes that uniquely identifies a grammatical construct in ASN.1. Attribute types The variety of attribute types that can appear in a DN is theoretically open-ended, but in practice only a small subset of attribute types are used. Table A.1, "Commonly Used Attribute Types" shows a selection of the attribute types that you are most likely to encounter: Table A.1. Commonly Used Attribute Types String Representation X.500 Attribute Type Size of Data Equivalent OID C countryName 2 2.5.4.6 O organizationName 1... 64 2.5.4.10 OU organizationalUnitName 1... 64 2.5.4.11 CN commonName 1... 
64 2.5.4.3 ST stateOrProvinceName 1... 64 2.5.4.8 L localityName 1... 64 2.5.4.7 STREET streetAddress DC domainComponent UID userid AVA An attribute value assertion (AVA) assigns an attribute value to an attribute type. In the string representation, it has the following syntax: For example: Alternatively, you can use the equivalent OID to identify the attribute type in the string representation (see Table A.1, "Commonly Used Attribute Types" ). For example: RDN A relative distinguished name (RDN) represents a single node of a DN (the bit that appears between the commas in the string representation). Technically, an RDN might contain more than one AVA (it is formally defined as a set of AVAs). However, this almost never occurs in practice. In the string representation, an RDN has the following syntax: Here is an example of a (very unlikely) multiple-value RDN: Here is an example of a single-value RDN:
|
[
"C=US,O=IONA Technologies,OU=Engineering,CN=A. N. Other",
"<attr-type> = <attr-value>",
"CN=A. N. Other",
"2.5.4.3=A. N. Other",
"<attr-type> = <attr-value>[ + <attr-type> =<attr-value> ...]",
"OU=Eng1+OU=Eng2+OU=Eng3",
"OU=Engineering"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_security_guide/dn
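To see the RFC 2253 string representation described above in practice, you can print the subject DN of an X.509 certificate with OpenSSL. A minimal sketch, assuming a certificate file named cert.pem (the file name is illustrative):

```
# Print the subject DN of a certificate using the RFC 2253 string representation
openssl x509 -in cert.pem -noout -subject -nameopt RFC2253

# The output has the general shape (attribute types as listed in Table A.1):
# subject=CN=A. N. Other,OU=Engineering,O=IONA Technologies,C=US
```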
|
Chapter 54. Braintree Component
|
Chapter 54. Braintree Component Available as of Camel version 2.17 The Braintree component provides access to Braintree Payments through their Java SDK. All client applications need API credentials in order to process payments. In order to use camel-braintree with your account, you'll need to create a new Sandbox or Production account. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-braintree</artifactId> <version>USD{camel-version}</version> </dependency> 54.1. Braintree Options The Braintree component supports 2 options, which are listed below. Name Description Default Type configuration (common) To use the shared configuration BraintreeConfiguration resolvePropertyPlaceholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Braintree endpoint is configured using URI syntax: with the following path and query parameters: 54.1.1. Path Parameters (2 parameters): Name Description Default Type apiName Required What kind of operation to perform BraintreeApiName methodName What sub operation to use for the selected operation String 54.1.2. Query Parameters (14 parameters): Name Description Default Type environment (common) The environment. Either SANDBOX or PRODUCTION String inBody (common) Sets the name of a parameter to be passed in the exchange In Body String merchantId (common) The merchant id provided by Braintree. String privateKey (common) The private key provided by Braintree. String publicKey (common) The public key provided by Braintree. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern accessToken (advanced) The access token granted by a merchant to another in order to process transactions on their behalf. Used in place of environment, merchant id, public key and private key fields. String httpReadTimeout (advanced) Set read timeout for http calls. Integer synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean httpLogLevel (logging) Set logging level for http calls, see java.util.logging.Level String proxyHost (proxy) The proxy host String proxyPort (proxy) The proxy port Integer 54.2. Spring Boot Auto-Configuration The component supports 14 options, which are listed below. Name Description Default Type camel.component.braintree.configuration.access-token The access token granted by a merchant to another in order to process transactions on their behalf. Used in place of environment, merchant id, public key and private key fields.
String camel.component.braintree.configuration.api-name What kind of operation to perform BraintreeApiName camel.component.braintree.configuration.environment The environment Either SANDBOX or PRODUCTION String camel.component.braintree.configuration.http-log-level Set logging level for http calls, see java.util.logging.Level Level camel.component.braintree.configuration.http-log-name Set log category to use to log http calls, default "Braintree" String camel.component.braintree.configuration.http-read-timeout Set read timeout for http calls. Integer camel.component.braintree.configuration.merchant-id The merchant id provided by Braintree. String camel.component.braintree.configuration.method-name What sub operation to use for the selected operation String camel.component.braintree.configuration.private-key The private key provided by Braintree. String camel.component.braintree.configuration.proxy-host The proxy host String camel.component.braintree.configuration.proxy-port The proxy port Integer camel.component.braintree.configuration.public-key The public key provided by Braintree. String camel.component.braintree.enabled Enable braintree component true Boolean camel.component.braintree.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 54.3. URI format braintree://endpoint-prefix/endpoint?[options] Endpoint prefix can be one of: addOn address clientToken creditCardverification customer discount dispute documentUpload merchantAccount paymentmethod paymentmethodNonce plan report settlementBatchSummary subscription transaction webhookNotification 54.4. BraintreeComponent The Braintree Component can be configured with the options below. These options can be provided using the component's bean property configuration of type org.apache.camel.component.braintree.BraintreeConfiguration . Option Type Description environment String Value that specifies where requests should be directed - sandbox or production merchantId String A unique identifier for your gateway account, which is different than your merchant account ID publicKey String User-specific public identifier privateKey String User-specific secure identifier that should not be shared - even with us! accessToken String Token granted to a merchant using Braintree Auth allowing them to process transactions on another's behalf. Used in place of the environment, merchantId, publicKey and privateKey options. All the options above are provided by Braintree Payments 54.5. Producer Endpoints: Producer endpoints can use endpoint prefixes followed by endpoint names and associated options described . A shorthand alias can be used for some endpoints. The endpoint URI MUST contain a prefix. Endpoint options that are not mandatory are denoted by []. When there are no mandatory options for an endpoint, one of the set of [] options MUST be provided. Producer endpoints can also use a special option inBody that in turn should contain the name of the endpoint option whose value will be contained in the Camel Exchange In message. Any of the endpoint options can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelBraintree.<option> . Note that the inBody option overrides message header, i.e. the endpoint option inBody=option would override a CamelBraintree.option header. 
For more information on the endpoints and options see Braintree references at https://developers.braintreepayments.com/reference/overview 54.5.1. Endpoint prefix addOn The following endpoints can be invoked with the prefix addOn as follows: braintree://addOn/endpoint Endpoint Shorthand Alias Options Result Body Type all List<com.braintreegateway.Addon> 54.5.2. Endpoint prefix address The following endpoints can be invoked with the prefix address as follows: braintree://address/endpoint?[options] Endpoint Shorthand Alias Options Result Body Type create customerId, request com.braintreegateway.Result<com.braintreegateway.Address> delete customerId, id com.braintreegateway.Result<com.braintreegateway.Address> find customerId, id com.braintreegateway.Address update customerId, id, request com.braintreegateway.Result<com.braintreegateway.Address> URI Options for address Name Type customerId String request com.braintreegateway.AddressRequest id String 54.5.3. Endpoint prefix clientToken The following endpoints can be invoked with the prefix clientToken as follows: braintree://clientToken/endpoint?[options] Endpoint Shorthand Alias Options Result Body Type generate request String URI Options for clientToken Name Type request com.braintreegateway.ClientTokenrequest 54.5.4. Endpoint prefix creditCardVerification The following endpoints can be invoked with the prefix creditCardverification as follows: braintree://creditCardVerification/endpoint?[options] Endpoint Shorthand Alias Options Result Body Type find id com.braintreegateway.CreditCardVerification search query com.braintreegateway.ResourceCollection<com.braintreegateway.CreditCardVerification> URI Options for creditCardVerification Name Type id String query com.braintreegateway.CreditCardVerificationSearchRequest 54.5.5. Endpoint prefix customer The following endpoints can be invoked with the prefix customer as follows: braintree://customer/endpoint?[options] Endpoint Shorthand Alias Options Result Body Type all create request com.braintreegateway.Result<com.braintreegateway.Customer> delete id com.braintreegateway.Result<com.braintreegateway.Customer> find id com.braintreegateway.Customer search query com.braintreegateway.ResourceCollection<com.braintreegateway.Customer> update id, request com.braintreegateway.Result<com.braintreegateway.Customer> URI Options for customer Name Type id String request com.braintreegateway.CustomerRequest query com.braintreegateway.CustomerSearchRequest 54.5.6. Endpoint prefix discount The following endpoints can be invoked with the prefix discount as follows: braintree://discount/endpoint Endpoint Shorthand Alias Options Result Body Type all List<com.braintreegateway.Discount> 54.5.7. 
Endpoint prefix dispute The following endpoints can be invoked with the prefix dispute as follows: braintree://dispute/endpoint?[options] Endpoint Shorthand Alias Options Result Body Type accept id com.braintreegateway.Result addFileEvidence disputeId, documentId com.braintreegateway.Result<DisputeEvidence> addFileEvidence disputeId, fileEvidenceRequest com.braintreegateway.Result<DisputeEvidence> addTextEvidence disputeId, content com.braintreegateway.Result<DisputeEvidence> addTextEvidence disputeId, textEvidenceRequest com.braintreegateway.Result<DisputeEvidence> finalize id com.braintreegateway.Result find id com.braintreegateway.Dispute removeEvidence id com.braintreegateway.Result search disputeSearchRequest com.braintreegateway.PaginatedCollection<com.braintreegateway.Dispute> URI Options for dispute Name Type id String disputeId String documentId String fileEvidenceRequest com.braintreegateway.FileEvidenceRequest content String textEvidenceRequest com.braintreegateway.TextEvidenceRequest disputeSearchRequest 54.5.8. Endpoint prefix documentUpload The following endpoints can be invoked with the prefix documentUpload as follows: braintree://documentUpload/endpoint?[options] Endpoint Shorthand Alias Options Result Body Type create request com.braintreegateway.Result<com.braintreegateway.DocumentUpload> URI Options for documentUpload Name Type request com.braintreegateway.DocumentUploadRequest 54.5.9. Endpoint prefix merchantAccount The following endpoints can be invoked with the prefix merchantAccount as follows: braintree://merchantAccount/endpoint?[options] Endpoint Shorthand Alias Options Result Body Type create request com.braintreegateway.Result<com.braintreegateway.MerchantAccount> createForCurrency currencyRequest com.braintreegateway.Result<com.braintreegateway.MerchantAccount> find id com.braintreegateway.MerchantAccount update id, request com.braintreegateway.Result<com.braintreegateway.MerchantAccount> URI Options for merchantAccount Name Type id String request com.braintreegateway.MerchantAccountRequest currencyRequest com.braintreegateway.MerchantAccountCreateForCurrencyRequest 54.5.10. Endpoint prefix paymentMethod The following endpoints can be invoked with the prefix paymentMethod as follows: braintree://paymentMethod/endpoint?[options] Endpoint Shorthand Alias Options Result Body Type create request com.braintreegateway.Result<com.braintreegateway.PaymentMethod> delete token, deleteRequest com.braintreegateway.Result<com.braintreegateway.PaymentMethod> find token com.braintreegateway.PaymentMethod update token, request com.braintreegateway.Result<com.braintreegateway.PaymentMethod> URI Options for paymentMethod Name Type token String request com.braintreegateway.PaymentMethodRequest deleteRequest com.braintreegateway.PaymentMethodDeleteRequest 54.5.11. Endpoint prefix paymentMethodNonce The following endpoints can be invoked with the prefix paymentMethodNonce as follows: braintree://paymentMethodNonce/endpoint?[options] Endpoint Shorthand Alias Options Result Body Type create paymentMethodToken com.braintreegateway.Result<com.braintreegateway.PaymentMethodNonce> find paymentMethodNonce com.braintreegateway.PaymentMethodNonce URI Options for paymentMethodNonce Name Type paymentMethodToken String paymentMethodNonce String 54.5.12. Endpoint prefix plan The following endpoints can be invoked with the prefix plan as follows: braintree://plan/endpoint Endpoint Shorthand Alias Options Result Body Type all List<com.braintreegateway.Plan> 54.5.13. 
Endpoint prefix report The following endpoints can be invoked with the prefix report as follows: braintree://plan/report?[options] Endpoint Shorthand Alias Options Result Body Type transactionLevelFees request com.braintreegateway.Result<com.braintreegateway.TransactionLevelFeeReport> URI Options for report Name Type request com.braintreegateway.TransactionLevelFeeReportRequest 54.5.14. Endpoint prefix settlementBatchSummary The following endpoints can be invoked with the prefix settlementBatchSummary as follows: braintree://settlementBatchSummary/endpoint?[options] Endpoint Shorthand Alias Options Result Body Type generate request com.braintreegateway.Result<com.braintreegateway.SettlementBatchSummary> URI Options for settlementBatchSummary Name Type settlementDate Calendar groupByCustomField String 54.5.15. Endpoint prefix subscription The following endpoints can be invoked with the prefix subscription as follows: braintree://subscription/endpoint?[options] Endpoint Shorthand Alias Options Result Body Type cancel id com.braintreegateway.Result<com.braintreegateway.Subscription> create request com.braintreegateway.Result<com.braintreegateway.Subscription> delete customerId, id com.braintreegateway.Result<com.braintreegateway.Subscription> find id com.braintreegateway.Subscription retryCharge subscriptionId, amount com.braintreegateway.Result<com.braintreegateway.Transaction> search searchRequest com.braintreegateway.ResourceCollection<com.braintreegateway.Subscription> update id, request com.braintreegateway.Result<com.braintreegateway.Subscription> URI Options for subscription Name Type id String request com.braintreegateway.SubscriptionRequest customerId String subscriptionId String amount BigDecimal searchRequest com.braintreegateway.SubscriptionSearchRequest. 54.5.16. Endpoint prefix transaction The following endpoints can be invoked with the prefix transaction as follows: braintree://transaction/endpoint?[options] Endpoint Shorthand Alias Options Result Body Type cancelRelease id com.braintreegateway.Result<com.braintreegateway.Transaction> cloneTransaction id, cloneRequest com.braintreegateway.Result<com.braintreegateway.Transaction> credit request com.braintreegateway.Result<com.braintreegateway.Transaction> find id com.braintreegateway.Transaction holdInEscrow id com.braintreegateway.Result<com.braintreegateway.Transaction> releaseFromEscrow id com.braintreegateway.Result<com.braintreegateway.Transaction> refund id, amount, refundRequest com.braintreegateway.Result<com.braintreegateway.Transaction> sale request com.braintreegateway.Result<com.braintreegateway.Transaction> search query com.braintreegateway.ResourceCollection<com.braintreegateway.Transaction> submitForPartialSettlement id, amount com.braintreegateway.Result<com.braintreegateway.Transaction> submitForSettlement id, amount, request com.braintreegateway.Result<com.braintreegateway.Transaction> voidTransaction id com.braintreegateway.Result<com.braintreegateway.Transaction> URI Options for transaction Name Type id String request com.braintreegateway.TransactionCloneRequest cloneRequest com.braintreegateway.TransactionCloneRequest refundRequest com.braintreegateway.TransactionRefundRequest amount BigDecimal query com.braintreegateway.TransactionSearchRequest 54.5.17. 
Endpoint prefix webhookNotification The following endpoints can be invoked with the prefix webhookNotification as follows: braintree://webhookNotification/endpoint?[options] Endpoint Shorthand Alias Options Result Body Type parse signature, payload com.braintreegateway.WebhookNotification verify challenge String URI Options for webhookNotification Name Type signature String payload String challenge String 54.6. Consumer Endpoints Any of the producer endpoints can be used as a consumer endpoint. Consumer endpoints can use Scheduled Poll Consumer Options with a consumer. prefix to schedule endpoint invocation. By default Consumer endpoints that return an array or collection will generate one exchange per element, and their routes will be executed once for each exchange. To change this behavior use the property consumer.splitResults=true to return a single exchange for the entire list or array. 54.7. Message Headers Any URI option can be provided in a message header for producer endpoints with a CamelBraintree. prefix. 54.8. Message body All result message bodies utilize objects provided by the Braintree Java SDK. Producer endpoints can specify the option name for incoming message body in the inBody endpoint parameter. 54.9. Examples Blueprint <?xml version="1.0"?> <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0" xsi:schemaLocation=" http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0 http://aries.apache.org/schemas/blueprint-cm/blueprint-cm-1.0.0.xsd http://www.osgi.org/xmlns/blueprint/v1.0.0 https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint.xsd"> <cm:property-placeholder id="placeholder" persistent-id="camel.braintree"> </cm:property-placeholder> <bean id="braintree" class="org.apache.camel.component.braintree.BraintreeComponent"> <property name="configuration"> <bean class="org.apache.camel.component.braintree.BraintreeConfiguration"> <property name="environment" value="USD{environment}"/> <property name="merchantId" value="USD{merchantId}"/> <property name="publicKey" value="USD{publicKey}"/> <property name="privateKey" value="USD{privateKey}"/> </bean> </property> </bean> <camelContext trace="true" xmlns="http://camel.apache.org/schema/blueprint" id="braintree-example-context"> <route id="braintree-example-route"> <from uri="direct:generateClientToken"/> <to uri="braintree://clientToken/generate"/> <to uri="stream:out"/> </route> </camelContext> </blueprint> 54.10. See Also * Configuring Camel * Component * Endpoint * Getting Started
|
[
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-braintree</artifactId> <version>USD{camel-version}</version> </dependency>",
"braintree:apiName/methodName",
"braintree://endpoint-prefix/endpoint?[options]",
"braintree://addOn/endpoint",
"braintree://address/endpoint?[options]",
"braintree://clientToken/endpoint?[options]",
"braintree://creditCardVerification/endpoint?[options]",
"braintree://customer/endpoint?[options]",
"braintree://discount/endpoint",
"+",
"+",
"braintree://dispute/endpoint?[options]",
"braintree://documentUpload/endpoint?[options]",
"braintree://merchantAccount/endpoint?[options]",
"braintree://paymentMethod/endpoint?[options]",
"braintree://paymentMethodNonce/endpoint?[options]",
"braintree://plan/endpoint",
"braintree://plan/report?[options]",
"braintree://settlementBatchSummary/endpoint?[options]",
"braintree://subscription/endpoint?[options]",
"braintree://transaction/endpoint?[options]",
"braintree://webhookNotification/endpoint?[options]",
"<?xml version=\"1.0\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:cm=\"http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0\" xsi:schemaLocation=\" http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0 http://aries.apache.org/schemas/blueprint-cm/blueprint-cm-1.0.0.xsd http://www.osgi.org/xmlns/blueprint/v1.0.0 https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint.xsd\"> <cm:property-placeholder id=\"placeholder\" persistent-id=\"camel.braintree\"> </cm:property-placeholder> <bean id=\"braintree\" class=\"org.apache.camel.component.braintree.BraintreeComponent\"> <property name=\"configuration\"> <bean class=\"org.apache.camel.component.braintree.BraintreeConfiguration\"> <property name=\"environment\" value=\"USD{environment}\"/> <property name=\"merchantId\" value=\"USD{merchantId}\"/> <property name=\"publicKey\" value=\"USD{publicKey}\"/> <property name=\"privateKey\" value=\"USD{privateKey}\"/> </bean> </property> </bean> <camelContext trace=\"true\" xmlns=\"http://camel.apache.org/schema/blueprint\" id=\"braintree-example-context\"> <route id=\"braintree-example-route\"> <from uri=\"direct:generateClientToken\"/> <to uri=\"braintree://clientToken/generate\"/> <to uri=\"stream:out\"/> </route> </camelContext> </blueprint>"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/braintree-component
|
Chapter 4. Updating Red Hat OpenShift Data Foundation 4.13.x to 4.13.y
|
Chapter 4. Updating Red Hat OpenShift Data Foundation 4.13.x to 4.13.y This chapter helps you to upgrade between z-stream releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what is not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Ceph Storage cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. Hence, we recommend upgrading RHCS along with OpenShift Data Foundation in order to get new feature support, security fixes, and other bug fixes. Because there is no strong dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by the RHCS upgrade, or vice versa. See the solution to learn more about Red Hat Ceph Storage releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy is set to Automatic . If the update strategy is set to Manual , use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.13.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to the Storage Data Foundation Storage Systems tab and then click the storage system name. Check for the green tick on the status card of the Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select the openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. If the Upgrade Status shows requires approval , click the requires approval link. On the InstallPlan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . After the operator is successfully upgraded, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect. Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and the status changes to Succeeded with a green tick.
Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to the Storage Data Foundation Storage Systems tab and then click the storage system name. Check for the green tick on the status card of the Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy. If verification steps fail, contact Red Hat Support.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/updating_openshift_data_foundation/updating-zstream-odf_rhodf
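The health and version checks in this procedure can also be performed from the command line rather than the web console. A hedged sketch using standard oc commands against the openshift-storage namespace used throughout the chapter:

```
# Confirm that all OpenShift Data Foundation pods, including the operator pods, are Running
oc get pods -n openshift-storage

# After approving the install plan, confirm the operator version and that its phase is Succeeded
oc get csv -n openshift-storage
```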
|
Chapter 3. Red Hat build of OpenJDK features
|
Chapter 3. Red Hat build of OpenJDK features The latest Red Hat build of OpenJDK 17 release might include new features. Additionally, the latest release might enhance, deprecate, or remove features that originated from Red Hat build of OpenJDK 17 releases. Note For all the other changes and security fixes, see OpenJDK 17.0.9 Released . 3.1. Red Hat build of OpenJDK enhancements Red Hat build of OpenJDK 17 provides enhancements to features originally created in releases of Red Hat build of OpenJDK. Increased default group size of TLS Diffie-Hellman In Red Hat build of OpenJDK 17.0.9, the JDK implementation of Transport Layer Security (TLS) 1.2 uses a default Diffie-Hellman key size of 2048 bits. This supersedes the behavior in releases where the default Diffie-Hellman key size was 1024 bits. This enhancement is relevant when a TLS_DHE cipher suite is negotiated and either the client or the server does not support Finite Field Diffie-Hellman Ephemeral (FFDHE) parameters. The JDK TLS implementation supports FFDHE, which is enabled by default and can negotiate a stronger key size. As a workaround, you can revert to the key size by setting the jdk.tls.ephemeralDHKeySize system property to 1024 . However, to mitigate risk, consider using the default key size of 2048 bits. Note This change does not affect TLS 1.3, which already uses a minimum Diffie-Hellman key size of 2048 bits. See JDK-8301700 (JDK Bug System) . -XshowSettings:locale output includes tzdata version In Red Hat build of OpenJDK 17.0.9, the -XshowSettings launcher option also prints the tzdata version that the JDK uses. The tzdata version is displayed as part of the output for the -XshowSettings:locale option. For example: See JDK-8305950 (JDK Bug System) . Certigna root CA certificate added In Red Hat build of OpenJDK 17.0.9, the cacerts truststore includes the Certigna root certificate: Name: Certigna (Dhimyotis) Alias name: certignarootca Distinguished name: CN=Certigna Root CA, OU=0002 48146308100036, O=Dhimyotis, C=FR See JDK-8314960 (JDK Bug System) . 3.2. Red Hat build of OpenJDK deprecated features Review the following release notes to understand pre-existing features that have been either deprecated or removed in Red Hat build of OpenJDK 17.0.9: SECOM Trust Systems root CA1 certificate removed Red Hat build of OpenJDK 17.0.9 removes the following root certificate from the cacerts truststore: Alias name: secomscrootca1 [jdk] Distinguished name: OU=Security Communication RootCA1, O=SECOM Trust.net, C=JP See JDK-8295894 (JDK Bug System) .
|
[
"Locale settings: default locale = English default display locale = English default format locale = English tzdata version = 2023c"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.9/rn_openjdk-1709-features_openjdk
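A brief sketch of how the enhancements described above can be exercised from the command line; the application jar name is hypothetical:

```
# Print locale settings, including the tzdata version, via the -XshowSettings:locale option
java -XshowSettings:locale -version

# Workaround only: revert to a 1024-bit ephemeral Diffie-Hellman key size
# (the 2048-bit default is recommended to mitigate risk)
java -Djdk.tls.ephemeralDHKeySize=1024 -jar myapp.jar
```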
|
Chapter 2. Developer metrics
|
Chapter 2. Developer metrics 2.1. Serverless developer metrics overview Metrics enable developers to monitor how Knative services are performing. You can use the OpenShift Container Platform monitoring stack to record and view health checks and metrics for your Knative services. You can view different metrics for OpenShift Serverless by navigating to Dashboards in the web console Developer perspective. Warning If Service Mesh is enabled with mTLS, metrics for Knative Serving are disabled by default because Service Mesh prevents Prometheus from scraping metrics. For information about resolving this issue, see Enabling Knative Serving metrics when using Service Mesh with mTLS . Scraping the metrics does not affect autoscaling of a Knative service, because scraping requests do not go through the activator. Consequently, no scraping takes place if no pods are running. 2.1.1. Additional resources for OpenShift Container Platform Monitoring overview Enabling monitoring for user-defined projects 2.2. Knative service metrics exposed by default Table 2.1. Metrics exposed by default for each Knative service on port 9091 Metric name, unit, and type Description Metric tags request_count Metric unit: dimensionless Metric type: counter The number of requests that are routed to queue-proxy . configuration_name="event-display", container_name="queue-proxy", namespace_name="apiserversource1", pod_name="event-display-00001-deployment-658fd4f9cf-qcnr5", response_code="200", response_code_class="2xx", revision_name="event-display-00001", service_name="event-display" request_latencies Metric unit: milliseconds Metric type: histogram The response time in milliseconds. configuration_name="event-display", container_name="queue-proxy", namespace_name="apiserversource1", pod_name="event-display-00001-deployment-658fd4f9cf-qcnr5", response_code="200", response_code_class="2xx", revision_name="event-display-00001", service_name="event-display" app_request_count Metric unit: dimensionless Metric type: counter The number of requests that are routed to user-container . configuration_name="event-display", container_name="queue-proxy", namespace_name="apiserversource1", pod_name="event-display-00001-deployment-658fd4f9cf-qcnr5", response_code="200", response_code_class="2xx", revision_name="event-display-00001", service_name="event-display" app_request_latencies Metric unit: milliseconds Metric type: histogram The response time in milliseconds. configuration_name="event-display", container_name="queue-proxy", namespace_name="apiserversource1", pod_name="event-display-00001-deployment-658fd4f9cf-qcnr5", response_code="200", response_code_class="2xx", revision_name="event-display-00001", service_name="event-display" queue_depth Metric unit: dimensionless Metric type: gauge The current number of items in the serving and waiting queue, or not reported if unlimited concurrency. breaker.inFlight is used. configuration_name="event-display", container_name="queue-proxy", namespace_name="apiserversource1", pod_name="event-display-00001-deployment-658fd4f9cf-qcnr5", response_code="200", response_code_class="2xx", revision_name="event-display-00001", service_name="event-display" 2.3. Knative service with custom application metrics You can extend the set of metrics exported by a Knative service. The exact implementation depends on your application and the language used. The following listing implements a sample Go application that exports the count of processed events custom metric. 
package main import ( "fmt" "log" "net/http" "os" "github.com/prometheus/client_golang/prometheus" 1 "github.com/prometheus/client_golang/prometheus/promauto" "github.com/prometheus/client_golang/prometheus/promhttp" ) var ( opsProcessed = promauto.NewCounter(prometheus.CounterOpts{ 2 Name: "myapp_processed_ops_total", Help: "The total number of processed events", }) ) func handler(w http.ResponseWriter, r *http.Request) { log.Print("helloworld: received a request") target := os.Getenv("TARGET") if target == "" { target = "World" } fmt.Fprintf(w, "Hello %s!\n", target) opsProcessed.Inc() 3 } func main() { log.Print("helloworld: starting server...") port := os.Getenv("PORT") if port == "" { port = "8080" } http.HandleFunc("/", handler) // Separate server for metrics requests go func() { 4 mux := http.NewServeMux() server := &http.Server{ Addr: fmt.Sprintf(":%s", "9095"), Handler: mux, } mux.Handle("/metrics", promhttp.Handler()) log.Printf("prometheus: listening on port %s", 9095) log.Fatal(server.ListenAndServe()) }() // Use same port as normal requests for metrics //http.Handle("/metrics", promhttp.Handler()) 5 log.Printf("helloworld: listening on port %s", port) log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil)) } 1 Including the Prometheus packages. 2 Defining the opsProcessed metric. 3 Incrementing the opsProcessed metric. 4 Configuring to use a separate server for metrics requests. 5 Configuring to use the same port as normal requests for metrics and the metrics subpath. 2.4. Configuration for scraping custom metrics Custom metrics scraping is performed by an instance of Prometheus purposed for user workload monitoring. After you enable user workload monitoring and create the application, you need a configuration that defines how the monitoring stack will scrape the metrics. The following sample configuration defines the ksvc for your application and configures the service monitor. The exact configuration depends on your application and how it exports the metrics. apiVersion: serving.knative.dev/v1 1 kind: Service metadata: name: helloworld-go spec: template: metadata: labels: app: helloworld-go annotations: spec: containers: - image: docker.io/skonto/helloworld-go:metrics resources: requests: cpu: "200m" env: - name: TARGET value: "Go Sample v1" --- apiVersion: monitoring.coreos.com/v1 2 kind: ServiceMonitor metadata: labels: name: helloworld-go-sm spec: endpoints: - port: queue-proxy-metrics scheme: http - port: app-metrics scheme: http namespaceSelector: {} selector: matchLabels: name: helloworld-go-sm --- apiVersion: v1 3 kind: Service metadata: labels: name: helloworld-go-sm name: helloworld-go-sm spec: ports: - name: queue-proxy-metrics port: 9091 protocol: TCP targetPort: 9091 - name: app-metrics port: 9095 protocol: TCP targetPort: 9095 selector: serving.knative.dev/service: helloworld-go type: ClusterIP 1 Application specification. 2 Configuration of which application's metrics are scraped. 3 Configuration of the way metrics are scraped. 2.5. Examining metrics of a service After you have configured the application to export the metrics and the monitoring stack to scrape them, you can examine the metrics in the web console. Prerequisites You have logged in to the OpenShift Container Platform web console. You have installed the OpenShift Serverless Operator and Knative Serving. 
Procedure Optional: Run requests against your application that you will be able to see in the metrics: USD hello_route=USD(oc get ksvc helloworld-go -n ns1 -o jsonpath='{.status.url}') && \ curl USDhello_route Example output Hello Go Sample v1! In the web console, navigate to the Observe Metrics interface. In the input field, enter the query for the metric you want to observe, for example: Another example: Observe the visualized metrics: 2.5.1. Queue proxy metrics Each Knative service has a proxy container that proxies the connections to the application container. A number of metrics are reported for the queue proxy performance. You can use the following metrics to measure if requests are queued at the proxy side and the actual delay in serving requests at the application side. Metric name Description Type Tags Unit revision_request_count The number of requests that are routed to queue-proxy pod. Counter configuration_name , container_name , namespace_name , pod_name , response_code , response_code_class , revision_name , service_name Integer (no units) revision_request_latencies The response time of revision requests. Histogram configuration_name , container_name , namespace_name , pod_name , response_code , response_code_class , revision_name , service_name Milliseconds revision_app_request_count The number of requests that are routed to the user-container pod. Counter configuration_name , container_name , namespace_name , pod_name , response_code , response_code_class , revision_name , service_name Integer (no units) revision_app_request_latencies The response time of revision app requests. Histogram configuration_name , namespace_name , pod_name , response_code , response_code_class , revision_name , service_name Milliseconds revision_queue_depth The current number of items in the serving and waiting queues. This metric is not reported if unlimited concurrency is configured. Gauge configuration_name , event-display , container_name , namespace_name , pod_name , response_code_class , revision_name , service_name Integer (no units) 2.6. Dashboard for service metrics You can examine the metrics using a dedicated dashboard that aggregates queue proxy metrics by namespace. 2.6.1. Examining metrics of a service in the dashboard Prerequisites You have logged in to the OpenShift Container Platform web console. You have installed the OpenShift Serverless Operator and Knative Serving. Procedure In the web console, navigate to the Observe Metrics interface. Select the Knative User Services (Queue Proxy metrics) dashboard. Select the Namespace , Configuration , and Revision that correspond to your application. Observe the visualized metrics:
|
[
"package main import ( \"fmt\" \"log\" \"net/http\" \"os\" \"github.com/prometheus/client_golang/prometheus\" 1 \"github.com/prometheus/client_golang/prometheus/promauto\" \"github.com/prometheus/client_golang/prometheus/promhttp\" ) var ( opsProcessed = promauto.NewCounter(prometheus.CounterOpts{ 2 Name: \"myapp_processed_ops_total\", Help: \"The total number of processed events\", }) ) func handler(w http.ResponseWriter, r *http.Request) { log.Print(\"helloworld: received a request\") target := os.Getenv(\"TARGET\") if target == \"\" { target = \"World\" } fmt.Fprintf(w, \"Hello %s!\\n\", target) opsProcessed.Inc() 3 } func main() { log.Print(\"helloworld: starting server...\") port := os.Getenv(\"PORT\") if port == \"\" { port = \"8080\" } http.HandleFunc(\"/\", handler) // Separate server for metrics requests go func() { 4 mux := http.NewServeMux() server := &http.Server{ Addr: fmt.Sprintf(\":%s\", \"9095\"), Handler: mux, } mux.Handle(\"/metrics\", promhttp.Handler()) log.Printf(\"prometheus: listening on port %s\", 9095) log.Fatal(server.ListenAndServe()) }() // Use same port as normal requests for metrics //http.Handle(\"/metrics\", promhttp.Handler()) 5 log.Printf(\"helloworld: listening on port %s\", port) log.Fatal(http.ListenAndServe(fmt.Sprintf(\":%s\", port), nil)) }",
"apiVersion: serving.knative.dev/v1 1 kind: Service metadata: name: helloworld-go spec: template: metadata: labels: app: helloworld-go annotations: spec: containers: - image: docker.io/skonto/helloworld-go:metrics resources: requests: cpu: \"200m\" env: - name: TARGET value: \"Go Sample v1\" --- apiVersion: monitoring.coreos.com/v1 2 kind: ServiceMonitor metadata: labels: name: helloworld-go-sm spec: endpoints: - port: queue-proxy-metrics scheme: http - port: app-metrics scheme: http namespaceSelector: {} selector: matchLabels: name: helloworld-go-sm --- apiVersion: v1 3 kind: Service metadata: labels: name: helloworld-go-sm name: helloworld-go-sm spec: ports: - name: queue-proxy-metrics port: 9091 protocol: TCP targetPort: 9091 - name: app-metrics port: 9095 protocol: TCP targetPort: 9095 selector: serving.knative.dev/service: helloworld-go type: ClusterIP",
"hello_route=USD(oc get ksvc helloworld-go -n ns1 -o jsonpath='{.status.url}') && curl USDhello_route",
"Hello Go Sample v1!",
"revision_app_request_count{namespace=\"ns1\", job=\"helloworld-go-sm\"}",
"myapp_processed_ops_total{namespace=\"ns1\", job=\"helloworld-go-sm\"}"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/observability/developer-metrics
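Before relying on the ServiceMonitor, you can check that the sample Go application actually serves the custom metric on its separate metrics port. A minimal verification sketch, assuming the helloworld-go-sm Service and port 9095 defined in the sample configuration above; it is not part of the original procedure and requires at least one running pod behind the service:

```
# Forward the application's metrics port to the local machine
oc port-forward service/helloworld-go-sm 9095:9095 &

# Fetch the Prometheus exposition output and look for the custom counter
curl -s http://localhost:9095/metrics | grep myapp_processed_ops_total
```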
|
Chapter 6. Installing a private cluster on IBM Power Virtual Server
|
Chapter 6. Installing a private cluster on IBM Power Virtual Server In OpenShift Container Platform version 4.14, you can install a private cluster into an existing VPC and IBM Power(R) Virtual Server Workspace. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Important IBM Power(R) Virtual Server using installer-provisioned infrastructure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 6.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Create a DNS zone using IBM Cloud(R) DNS Services and specify it as the base domain of the cluster. For more information, see "Using IBM Cloud(R) DNS Services to configure DNS resolution". Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 6.3. Private clusters in IBM Power Virtual Server To create a private cluster on IBM Power(R) Virtual Server, you must provide an existing private Virtual Private Cloud (VPC) and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. The cluster still requires access to internet to access the IBM Cloud(R) APIs. 
The following items are not required or created when you install a private cluster: Public subnets Public network load balancers, which support public Ingress A public DNS zone that matches the baseDomain for the cluster You will also need to create an IBM(R) DNS service containing a DNS zone that matches your baseDomain . Unlike standard deployments on Power VS which use IBM(R) CIS for DNS, you must use IBM(R) DNS for your DNS service. 6.3.1. Limitations Private clusters on IBM Power(R) Virtual Server are subject only to the limitations associated with the existing VPC that was used for cluster deployment. 6.4. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create a VPC or VPC subnet in this scenario. The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 6.4.1. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of VPC The name of the VPC subnet To ensure that the subnets that you provide are suitable, the installation program confirms that all of the subnets you specify exists. Note Subnet IDs are not supported. 6.4.2. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: ICMP Ingress is allowed to the entire network. TCP port 22 Ingress (SSH) is allowed to the entire network. Control plane TCP 6443 Ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 Ingress (MCS) is allowed to the entire network. 6.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. 
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.8. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 6.9. Manually creating the installation configuration file When installing a private OpenShift Container Platform cluster, you must manually generate the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 6.9.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs.
OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.9.2. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-private-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: "ibmcloud-resource-group" region: powervs-region vpcName: name-of-existing-vpc 9 cloudConnectionName: powervs-region-example-cloud-con-priv vpcSubnets: - powervs-region-example-subnet-1 vpcRegion : vpc-region zone: powervs-zone serviceInstanceID: "powervs-region-service-instance-id" publish: Internal 10 pullSecret: '{"auths": ...}' 11 sshKey: ssh-ed25519 AAAA... 12 1 4 If you do not provide these parameters and values, the installation program provides the default value. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Both sections currently define a single machine pool. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 7 The machine CIDR must contain the subnets for the compute machines and control plane machines. 8 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 9 Specify the name of an existing VPC. 10 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster. 11 Required. 
The installation program prompts you for this value. 12 Provide the sshKey value that you use to access the machines in your cluster. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.9.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . 
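Before you start the installation behind a proxy, it can help to confirm that the proxy itself reaches the registries the installer pulls from. This is only a sketch: the proxy URL is the placeholder from the sample above, and the endpoints are assumptions about a typical connected installation.
# Send test HTTPS requests through the proxy to the public registries (substitute your real proxy URL).
curl -I -x http://<username>:<pswd>@<ip>:<port> https://quay.io
curl -I -x http://<username>:<pswd>@<ip>:<port> https://registry.redhat.io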
Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.10. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object.
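As a quick sanity check, you can list the extracted manifests; a minimal sketch using the placeholder path passed to --to= above:
# Expect one YAML manifest per component CredentialsRequest object.
ls -l <path_to_directory_for_credentials_requests>/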
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 6.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
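While the deployment runs, a common way to watch progress is to follow the installer log from a second terminal; a minimal sketch using the log path mentioned above:
# Follow the installer log as the cluster comes up.
tail -f <installation_directory>/.openshift_install.log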
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 6.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 6.15. Next steps Customize your cluster Optional: Opt out of remote health reporting
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IBMCLOUD_API_KEY=<api_key>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-private-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcName: name-of-existing-vpc 9 cloudConnectionName: powervs-region-example-cloud-con-priv vpcSubnets: - powervs-region-example-subnet-1 vpcRegion : vpc-region zone: powervs-zone serviceInstanceID: \"powervs-region-service-instance-id\" publish: Internal 10 pullSecret: '{\"auths\": ...}' 11 sshKey: ssh-ed25519 AAAA... 12",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_ibm_power_virtual_server/installing-ibm-power-vs-private-cluster
|
Planning your deployment
|
Planning your deployment Red Hat OpenShift Data Foundation 4.13 Important considerations when deploying Red Hat OpenShift Data Foundation 4.13 Red Hat Storage Documentation Team Abstract Read this document for important considerations when planning your Red Hat OpenShift Data Foundation deployment.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/planning_your_deployment/index
|
Chapter 13. Additional resources
|
Chapter 13. Additional resources Red Hat OpenShift AI documentation Boto3 documentation Amazon Simple Storage Service documentation
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_data_in_an_s3-compatible_object_store/additional_resources
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/introduction_to_the_openstack_dashboard/proc_providing-feedback-on-red-hat-documentation
|
Chapter 1. Building applications overview
|
Chapter 1. Building applications overview Using OpenShift Container Platform, you can create, edit, delete, and manage applications using the web console or command line interface (CLI). 1.1. Working on a project Using projects, you can organize and manage applications in isolation. You can manage the entire project lifecycle, including creating, viewing, and deleting a project in OpenShift Container Platform. After you create the project, you can grant or revoke access to a project and manage cluster roles for the users using the Developer perspective. You can also edit the project configuration resource while creating a project template that is used for automatic provisioning of new projects. Using the CLI, you can create a project as a different user by impersonating a request to the OpenShift Container Platform API. When you make a request to create a new project, the OpenShift Container Platform uses an endpoint to provision the project according to a customizable template. As a cluster administrator, you can choose to prevent an authenticated user group from self-provisioning new projects . 1.2. Working on an application 1.2.1. Creating an application To create applications, you must have created a project or have access to a project with the appropriate roles and permissions. You can create an application by using either the Developer perspective in the web console , installed Operators , or the OpenShift Container Platform CLI . You can source the applications to be added to the project from Git, JAR files, devfiles, or the developer catalog. You can also use components that include source or binary code, images, and templates to create an application by using the OpenShift Container Platform CLI. With the OpenShift Container Platform web console, you can create an application from an Operator installed by a cluster administrator. 1.2.2. Maintaining an application After you create the application you can use the web console to monitor your project or application metrics . You can also edit or delete the application using the web console. When the application is running, not all applications resources are used. As a cluster administrator, you can choose to idle these scalable resources to reduce resource consumption. 1.2.3. Connecting an application to services An application uses backing services to build and connect workloads, which vary according to the service provider. Using the Service Binding Operator , as a developer, you can bind workloads together with Operator-managed backing services, without any manual procedures to configure the binding connection. You can apply service binding also on IBM Power Systems, IBM Z, and LinuxONE environments . 1.2.4. Deploying an application You can deploy your application using Deployment or DeploymentConfig objects and manage them from the web console. You can create deployment strategies that help reduce downtime during a change or an upgrade to the application. You can also use Helm , a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters. 1.3. Using the Red Hat Marketplace The Red Hat Marketplace is an open cloud marketplace where you can discover and access certified software for container-based environments that run on public clouds and on-premises.
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/building_applications/building-applications-overview
|
Chapter 13. Updating and upgrading the Load-balancing service
|
Chapter 13. Updating and upgrading the Load-balancing service Perform regular updates and upgrades so that you can use the latest Red Hat OpenStack Platform Load-balancing service (octavia) features, and avoid possible lengthy and problematic issues caused by infrequent updates and upgrades. Section 13.1, "Updating and upgrading the Load-balancing service" Section 13.2, "Updating running Load-balancing service instances" 13.1. Updating and upgrading the Load-balancing service The Load-balancing service (octavia) is a part of a Red Hat OpenStack Platform (RHOSP) update or upgrade. Prerequisites Schedule a maintenance window to perform the upgrade, because during the upgrade the Load-balancing service control plane is not fully functional. Procedure Perform the RHOSP update as described in the Keeping Red Hat OpenStack Platform Updated guide. After the maintenance release is applied, if you need to use new features, then rotate running amphorae to update them to the latest amphora image. Additional resources Keeping Red Hat OpenStack Platform Updated guide Section 13.2, "Updating running Load-balancing service instances" Section 2.2, "Load-balancing service (octavia) feature support matrix" 13.2. Updating running Load-balancing service instances Periodically, you can update a running Load-balancing service instance (amphora) with a newer image. For example, you might want to update an amphora instance during the following events: An update or upgrade of Red Hat OpenStack Platform (RHOSP). A security update to your system. A change to a different flavor for the underlying virtual machine. During an RHOSP update or upgrade, director automatically downloads the default amphora image, uploads it to the overcloud Image service (glance), and then configures the Load-balancing service (octavia) to use the new image. When you fail over a load balancer, you force the Load-balancing service to start an instance (amphora) that uses the new amphora image. Prerequisites New images for amphora. These are available during an RHOSP update or upgrade. Procedure Source your credentials file. Example List the IDs for all of the load balancers that you want to update: Fail over each load balancer: Note When you start failing over the load balancers, monitor system utilization, and as needed, adjust the rate at which you perform failovers. A load balancer failover creates new virtual machines and ports, which might temporarily increase the load on OpenStack Networking. Monitor the state of the failed over load balancer: The update is complete when the load balancer status is ACTIVE . Additional resources loadbalancer in the Command Line Interface Reference
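Putting the failover steps together, the rotation can be scripted roughly as follows; this is only a sketch that assumes the provisioning_status column is what you poll for ACTIVE, and the pacing should be adjusted to your environment:
# Fail over every load balancer one at a time, waiting for each to return to ACTIVE.
for lb in $(openstack loadbalancer list -c id -f value); do
  openstack loadbalancer failover "$lb"
  until [ "$(openstack loadbalancer show "$lb" -c provisioning_status -f value)" = "ACTIVE" ]; do
    sleep 30
  done
done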
|
[
"source ~/overcloudrc",
"openstack loadbalancer list -c id -f value",
"openstack loadbalancer failover <loadbalancer_id>",
"openstack loadbalancer show <loadbalancer_id>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/using_octavia_for_load_balancing-as-a-service/update-upgrade-lb-service_rhosp-lbaas
|
10.6. Starting Geo-replication on a Newly Added Brick, Node, or Volume
|
10.6. Starting Geo-replication on a Newly Added Brick, Node, or Volume 10.6.1. Starting Geo-replication for a New Brick or New Node If a geo-replication session is running, and a new node is added to the trusted storage pool or a brick is added to the volume from a newly added node in the trusted storage pool, then you must perform the following steps to start the geo-replication daemon on the new node: Run the following command on the master node where key-based SSH authentication connection is configured, in order to create a common pem pub file. Create the geo-replication session using the following command. The push-pem and force options are required to perform the necessary pem-file setup on the slave nodes. For example: Note There must be key-based SSH authentication access between the node from which this command is run, and the slave host specified in the above command. This command performs the slave verification, which includes checking for a valid slave URL, valid slave volume, and available space on the slave. After successfully setting up the shared storage volume, when a new node is added to the cluster, the shared storage is not mounted automatically on this node. Neither is the /etc/fstab entry added for the shared storage on this node. To make use of shared storage on this node, execute the following commands: Note With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ . For more information on setting up shared storage volume, see Section 11.12, "Setting up Shared Storage Volume" . Configure the meta-volume for geo-replication: For example: For more information on configuring meta-volume, see Section 10.3.5, "Configuring a Meta-Volume" . If a node is added at slave, stop the geo-replication session using the following command: Start the geo-replication session between the slave and master forcefully, using the following command: Verify the status of the created session, using the following command: Warning The following scenarios can lead to a checksum mismatch: Adding bricks to expand a geo-replicated volume. Expanding the volume while the geo-replication synchronization is in progress. Newly added brick becomes `ACTIVE` to sync the data. Self healing on the new brick is not completed. 10.6.2. Starting Geo-replication for a New Brick on an Existing Node When adding a brick to the volume on an existing node in the trusted storage pool with a geo-replication session running, the geo-replication daemon on that particular node will automatically be restarted. The new brick will then be recognized by the geo-replication daemon. This is an automated process and no configuration changes are required. 10.6.3. Starting Geo-replication for a New Volume To create and start a geo-replication session between a new volume added to the master cluster and a new volume added to the slave cluster, you must perform the following steps: Prerequisites There must be key-based SSH authentication without a password access between the master volume node and the slave volume node. Create the geo-replication session using the following command: For example: Note This command performs the slave verification, which includes checking for a valid slave URL, valid slave volume, and available space on the slave. Configure the meta-volume for geo-replication: For example: For more information on configuring meta-volume, see Section 10.3.5, "Configuring a Meta-Volume" . 
Start the geo-replication session between the slave and master, using the following command: Verify the status of the created session, using the following command:
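Taken together, the steps for a new volume in this section look roughly like the following sequence, reusing the example names Volume1 and storage.backup.com::slave-vol from above (a sketch only, not a replacement for the individual steps):
# Create, configure the meta-volume, start, and then check the new session.
gluster volume geo-replication Volume1 storage.backup.com::slave-vol create
gluster volume geo-replication Volume1 storage.backup.com::slave-vol config use_meta_volume true
gluster volume geo-replication Volume1 storage.backup.com::slave-vol start
gluster volume geo-replication Volume1 storage.backup.com::slave-vol status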
|
[
"gluster system:: execute gsec_create",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL create push-pem force",
"gluster volume geo-replication Volume1 storage.backup.com::slave-vol create push-pem force",
"mount -t glusterfs <local node's ip>:gluster_shared_storage /var/run/gluster/shared_storage cp /etc/fstab /var/run/gluster/fstab.tmp echo \"<local node's ip>:/gluster_shared_storage /var/run/gluster/shared_storage/ glusterfs defaults 0 0\" >> /etc/fstab",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config use_meta_volume true",
"gluster volume geo-replication Volume1 storage.backup.com::slave-vol config use_meta_volume true",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL stop",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL start force",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL status",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL create",
"gluster volume geo-replication Volume1 storage.backup.com::slave-vol create",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL config use_meta_volume true",
"gluster volume geo-replication Volume1 storage.backup.com::slave-vol config use_meta_volume true",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL start",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST :: SLAVE_VOL status"
] |
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-managing_geo-replication-starting_geo-replication_on_a_newly_added_brick
|
Appendix C. Journaler configuration reference
|
Appendix C. Journaler configuration reference Reference of the list commands that can be used for journaler configuration. journaler_write_head_interval Description How frequently to update the journal head object. Type Integer Required No Default 15 journaler_prefetch_periods Description How many stripe periods to read ahead on journal replay. Type Integer Required No Default 10 journaler_prezero_periods Description How many stripe periods to zero ahead of write position. Type Integer Required No Default 10 journaler_batch_interval Description Maximum additional latency in seconds to incur artificially. Type Double Required No Default .001 journaler_batch_max Description Maximum bytes that will be delayed flushing. Type 64-bit Unsigned Integer Required No Default 0
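These options are normally applied through the Ceph configuration database rather than edited directly; the target section and values below are assumptions shown only for illustration:
# Sketch: raise the journal prefetch window for the MDS daemons, then confirm the change.
ceph config set mds journaler_prefetch_periods 20
ceph config get mds journaler_prefetch_periods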
| null |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/file_system_guide/journaler-configuration-reference_fs
|
Chapter 2. Understanding process management for Ceph
|
Chapter 2. Understanding process management for Ceph As a storage administrator, you can manipulate the various Ceph daemons by type or instance in a Red Hat Ceph Storage cluster. Manipulating these daemons allows you to start, stop, and restart all of the Ceph services as needed. 2.1. Prerequisites Installation of the Red Hat Ceph Storage software. 2.2. Ceph process management In Red Hat Ceph Storage, all process management is done through the Systemd service. Each time you want to start , restart , or stop the Ceph daemons, you must specify the daemon type or the daemon instance. Additional Resources For more information on using systemd, see Introduction to systemd chapter, and the Managing system services with systemctl chapter in the Configuring basic system settings guide for Red Hat Enterprise Linux 8. 2.3. Starting, stopping, and restarting all Ceph daemons You can start, stop, and restart all Ceph daemons as the root user from the host where you want to stop the Ceph daemons. Prerequisites A running Red Hat Ceph Storage cluster. Having root access to the node. Procedure On the host where you want to start, stop, and restart the daemons, run the systemctl service to get the SERVICE_ID of the service. Example Starting all Ceph daemons: Syntax Example Stopping all Ceph daemons: Syntax Example Restarting all Ceph daemons: Syntax Example 2.4. Starting, stopping, and restarting all Ceph services Ceph services are logical groups of Ceph daemons of the same type, configured to run in the same Red Hat Ceph Storage cluster. The orchestration layer in Ceph allows the user to manage these services in a centralized way, making it easy to execute operations that affect all the Ceph daemons that belong to the same logical service. The Ceph daemons running in each host are managed through the Systemd service. You can start, stop, and restart all Ceph services from the host where you want to manage the Ceph services. Important If you want to start, stop, or restart a specific Ceph daemon in a specific host, you need to use the SystemD service. To obtain a list of the SystemD services running in a specific host, connect to the host, and run the following command: Example The output will give you a list of the service names that you can use to manage each Ceph daemon. Prerequisites A running Red Hat Ceph Storage cluster. Having root access to the node. Procedure Log into the Cephadm shell: Example Run the ceph orch ls command to get a list of Ceph services configured in the Red Hat Ceph Storage cluster and to get the specific service ID. Example To start a specific service, run the following command: Syntax Example To stop a specific service, run the following command: Important The ceph orch stop SERVICE_ID command results in the Red Hat Ceph Storage cluster being inaccessible, only for the MON and MGR service. It is recommended to use the systemctl stop SERVICE_ID command to stop a specific daemon in the host. Syntax Example In the example, the ceph orch stop node-exporter command removes all the daemons of the node exporter service. To restart a specific service, run the following command: Syntax Example 2.5. Viewing log files of Ceph daemons that run in containers Use the journald daemon from the container host to view a log file of a Ceph daemon from a container. Prerequisites Installation of the Red Hat Ceph Storage software. Root-level access to the node.
Procedure To view the entire Ceph log file, run a journalctl command as root composed in the following format: Syntax In the above example, you can view the entire log for the OSD with ID osd.8 . To show only the recent journal entries, use the -f option. Syntax Example Note You can also use the sosreport utility to view the journald logs. For more details about SOS reports, see the What is an sosreport and how to create one in Red Hat Enterprise Linux? solution on the Red Hat Customer Portal. Additional Resources The journalctl manual page. 2.6. Powering down and rebooting Red Hat Ceph Storage cluster You can power down and reboot the Red Hat Ceph Storage cluster using two different approaches: systemctl commands and the Ceph Orchestrator. You can choose either approach to power down and reboot the cluster. 2.6.1. Powering down and rebooting the cluster using the systemctl commands You can use the systemctl commands approach to power down and reboot the Red Hat Ceph Storage cluster. This approach follows the Linux way of stopping the services. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access. Procedure Powering down the Red Hat Ceph Storage cluster Stop the clients from using the Block Device images and the RADOS Gateway (Ceph Object Gateway) on this cluster, and any other clients. Log into the Cephadm shell: Example The cluster must be in a healthy state ( Health_OK and all PGs active+clean ) before proceeding. Run ceph status on the host with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure the cluster is healthy. Example If you use the Ceph File System ( CephFS ), bring down the CephFS cluster: Syntax Example Set the noout , norecover , norebalance , nobackfill , nodown , and pause flags. Run the following on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller node: Example Important The above example is only for stopping the service and each OSD in the OSD node, and it needs to be repeated on each OSD node. If the MDS and Ceph Object Gateway nodes are on their own dedicated nodes, power them off. Get the systemd target of the daemons: Example Disable the target that includes the cluster FSID: Example Stop the target: Example This stops all the daemons on the host that need to be stopped. Shut down the node: Example Repeat the above steps for all the nodes of the cluster. Rebooting the Red Hat Ceph Storage cluster If network equipment was involved, ensure it is powered ON and stable prior to powering ON any Ceph hosts or nodes. Power ON the administration node. Enable the systemd target to get all the daemons running: Example Start the systemd target: Example Wait for all the nodes to come up. Verify all the services are up and there are no connectivity issues between the nodes. Unset the noout , norecover , norebalance , nobackfill , nodown and pause flags. Run the following on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller node: Example If you use the Ceph File System ( CephFS ), bring the CephFS cluster back up by setting the joinable flag to true : Syntax Example Verification Verify the cluster is in a healthy state ( Health_OK and all PGs active+clean ). Run ceph status on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure the cluster is healthy. Example Additional Resources For more information on installing Ceph, see the Red Hat Ceph Storage Installation Guide . 2.6.2.
Powering down and rebooting the cluster using the Ceph Orchestrator You can also use the capabilities of the Ceph Orchestrator to power down and reboot the Red Hat Ceph Storage cluster. In most cases, it is a single system login that can help in powering off the cluster. The Ceph Orchestrator supports several operations, such as start , stop , and restart . You can use these commands with systemctl , for some cases, in powering down or rebooting the cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Powering down the Red Hat Ceph Storage cluster Stop the clients from using the Block Device images and the Ceph Object Gateway on this cluster and any other clients. Log into the Cephadm shell: Example The cluster must be in a healthy state ( Health_OK and all PGs active+clean ) before proceeding. Run ceph status on the host with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure the cluster is healthy. Example If you use the Ceph File System ( CephFS ), bring down the CephFS cluster: Syntax Example Set the noout , norecover , norebalance , nobackfill , nodown , and pause flags. Run the following on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller node: Example Stop the MDS service. Fetch the MDS service name: Example Stop the MDS service using the name fetched in the previous step: Syntax Stop the Ceph Object Gateway services. Repeat for each deployed service. Fetch the Ceph Object Gateway service names: Example Stop the Ceph Object Gateway service using the fetched name: Syntax Stop the Alertmanager service: Example Stop the node-exporter service, which is a part of the monitoring stack: Example Stop the Prometheus service: Example Stop the Grafana dashboard service: Example Stop the crash service: Example Shut down the OSD nodes from the cephadm node, one by one. Repeat this step for all the OSDs in the cluster. Fetch the OSD ID: Example Shut down the OSD node using the OSD ID you fetched: Example Stop the monitors one by one. Identify the hosts hosting the monitors: Example On each host, stop the monitor. Identify the systemctl unit name: Example Stop the service: Syntax Shut down all the hosts. Rebooting the Red Hat Ceph Storage cluster If network equipment was involved, ensure it is powered ON and stable prior to powering ON any Ceph hosts or nodes. Power ON all the Ceph hosts. Log into the administration node from the Cephadm shell: Example Verify all the services are in a running state: Example Ensure the cluster health is `Health_OK` status: Example Unset the noout , norecover , norebalance , nobackfill , nodown and pause flags. Run the following on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller node: Example If you use the Ceph File System ( CephFS ), bring the CephFS cluster back up by setting the joinable flag to true : Syntax Example Verification Verify the cluster is in a healthy state ( Health_OK and all PGs active+clean ). Run ceph status on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure the cluster is healthy. Example Additional Resources For more information on installing Ceph, see the Red Hat Ceph Storage Installation Guide
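Because both procedures set and later unset the same six OSD flags, a short loop can reduce repetition; this is only a sketch of the commands already shown above:
# Set the maintenance flags before powering the cluster down...
for flag in noout norecover norebalance nobackfill nodown pause; do ceph osd set "$flag"; done
# ...and unset them once the cluster is back up and healthy.
for flag in noout norecover norebalance nobackfill nodown pause; do ceph osd unset "$flag"; done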
|
[
"systemctl --type=service [email protected]",
"systemctl start SERVICE_ID",
"systemctl start [email protected]",
"systemctl stop SERVICE_ID",
"systemctl stop [email protected]",
"systemctl restart SERVICE_ID",
"systemctl restart [email protected]",
"systemctl list-units \"ceph*\"",
"cephadm shell",
"ceph orch ls NAME RUNNING REFRESHED AGE PLACEMENT IMAGE NAME IMAGE ID alertmanager 1/1 4m ago 4M count:1 registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.5 b7bae610cd46 crash 3/3 4m ago 4M * registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest c88a5d60f510 grafana 1/1 4m ago 4M count:1 registry.redhat.io/rhceph-alpha/rhceph-5-dashboard-rhel8:latest bd3d7748747b mgr 2/2 4m ago 4M count:2 registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest c88a5d60f510 mon 2/2 4m ago 10w count:2 registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest c88a5d60f510 nfs.foo 0/1 - - count:1 <unknown> <unknown> node-exporter 1/3 4m ago 4M * registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5 mix osd.all-available-devices 5/5 4m ago 3M * registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest c88a5d60f510 prometheus 1/1 4m ago 4M count:1 registry.redhat.io/openshift4/ose-prometheus:v4.6 bebb0ddef7f0 rgw.test_realm.test_zone 2/2 4m ago 3M count:2 registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest c88a5d60f510",
"ceph orch start SERVICE_ID",
"ceph orch start node-exporter",
"ceph orch stop SERVICE_ID",
"ceph orch stop node-exporter",
"ceph orch restart SERVICE_ID",
"ceph orch restart node-exporter",
"journalctl -u ceph SERVICE_ID",
"journalctl -u [email protected]",
"journalctl -fu SERVICE_ID",
"journalctl -fu [email protected]",
"cephadm shell",
"ceph -s",
"ceph fs set FS_NAME max_mds 1 ceph fs fail FS_NAME ceph status ceph fs set FS_NAME joinable false",
"ceph fs set cephfs max_mds 1 ceph fs fail cephfs ceph status ceph fs set cephfs joinable false",
"ceph osd set noout ceph osd set norecover ceph osd set norebalance ceph osd set nobackfill ceph osd set nodown ceph osd set pause",
"systemctl list-units --type target | grep ceph ceph-0b007564-ec48-11ee-b736-525400fd02f8.target loaded active active Ceph cluster 0b007564-ec48-11ee-b736-525400fd02f8 ceph.target loaded active active All Ceph clusters and services",
"systemctl disable ceph-0b007564-ec48-11ee-b736-525400fd02f8.target Removed \"/etc/systemd/system/multi-user.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target\". Removed \"/etc/systemd/system/ceph.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target\".",
"systemctl stop ceph-0b007564-ec48-11ee-b736-525400fd02f8.target",
"shutdown Shutdown scheduled for Wed 2024-03-27 11:47:19 EDT, use 'shutdown -c' to cancel.",
"systemctl enable ceph-0b007564-ec48-11ee-b736-525400fd02f8.target Created symlink /etc/systemd/system/multi-user.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target /etc/systemd/system/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target. Created symlink /etc/systemd/system/ceph.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target /etc/systemd/system/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target.",
"systemctl start ceph-0b007564-ec48-11ee-b736-525400fd02f8.target",
"ceph osd unset noout ceph osd unset norecover ceph osd unset norebalance ceph osd unset nobackfill ceph osd unset nodown ceph osd unset pause",
"ceph fs set FS_NAME joinable true",
"ceph fs set cephfs joinable true",
"ceph -s",
"cephadm shell",
"ceph -s",
"ceph fs set FS_NAME max_mds 1 ceph fs fail FS_NAME ceph status ceph fs set FS_NAME joinable false ceph mds fail FS_NAME : N",
"ceph fs set cephfs max_mds 1 ceph fs fail cephfs ceph status ceph fs set cephfs joinable false ceph mds fail cephfs:1",
"ceph osd set noout ceph osd set norecover ceph osd set norebalance ceph osd set nobackfill ceph osd set nodown ceph osd set pause",
"ceph orch ls --service-type mds",
"ceph orch stop SERVICE-NAME",
"ceph orch ls --service-type rgw",
"ceph orch stop SERVICE-NAME",
"ceph orch stop alertmanager",
"ceph orch stop node-exporter",
"ceph orch stop prometheus",
"ceph orch stop grafana",
"ceph orch stop crash",
"ceph orch ps --daemon-type=osd",
"ceph orch daemon stop osd.1 Scheduled to stop osd.1 on host 'host02'",
"ceph orch ps --daemon-type mon",
"systemctl list-units ceph-* | grep mon",
"systemct stop SERVICE-NAME",
"cephadm shell",
"ceph orch ls",
"ceph -s",
"ceph osd unset noout ceph osd unset norecover ceph osd unset norebalance ceph osd unset nobackfill ceph osd unset nodown ceph osd unset pause",
"ceph fs set FS_NAME joinable true",
"ceph fs set cephfs joinable true",
"ceph -s"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/administration_guide/understanding-process-management-for-ceph
|
21.3.10. Modifying Existing Printers
|
21.3.10. Modifying Existing Printers To delete an existing printer, in the Printer Configuration window, select the printer and go to Printer Delete . Confirm the printer deletion. Alternatively, press the Delete key. To set the default printer, right-click the printer in the printer list and click the Set as Default button in the context menu. 21.3.10.1. The Settings Page To change printer driver configuration, double-click the corresponding name in the Printer list and click the Settings label on the left to display the Settings page. You can modify printer settings such as make and model, print a test page, change the device location (URI), and more. Figure 21.12. Settings page 21.3.10.2. The Policies Page Click the Policies button on the left to change settings in printer state and print output. You can select the printer states, configure the Error Policy of the printer (you can decide to abort the print job, retry, or stop it if an error occurs). You can also create a banner page (a page that describes aspects of the print job such as the originating printer, the user name from which the job originated, and the security status of the document being printed): click the Starting Banner or Ending Banner drop-down menu and choose the option that best describes the nature of the print jobs (for example, confidential ). 21.3.10.2.1. Sharing Printers On the Policies page, you can mark a printer as shared: if a printer is shared, users published on the network can use it. To allow the sharing function for printers, go to Server Settings and select Publish shared printers connected to this system . Finally, make sure that the firewall allows incoming TCP connections to port 631 (that is Network Printing Server (IPP) in system-config-firewall). Figure 21.13. Policies page 21.3.10.2.2. The Access Control Page You can change user-level access to the configured printer on the Access Control page. Click the Access Control label on the left to display the page. Select either Allow printing for everyone except these users or Deny printing for everyone except these users and define the user set below: enter the user name in the text box and click the Add button to add the user to the user set. Figure 21.14. Access Control page 21.3.10.2.3. The Printer Options Page The Printer Options page contains various configuration options for the printer media and output, and its content may vary from printer to printer. It contains general printing, paper, quality, and printing size settings. Figure 21.15. Printer Options page 21.3.10.2.4. Job Options Page On the Job Options page, you can detail the printer job options. Click the Job Options label on the left to display the page. Edit the default settings to apply custom job options, such as number of copies, orientation, pages per side, scaling (increase or decrease the size of the printable area, which can be used to fit an oversize print area onto a smaller physical sheet of print medium), detailed text options, and custom job options. Figure 21.16. Job Options page 21.3.10.2.5. Ink/Toner Levels Page The Ink/Toner Levels page contains details on toner status if available and printer status messages. Click the Ink/Toner Levels label on the left to display the page. Figure 21.17. Ink/Toner Levels page 21.3.10.3. Managing Print Jobs When you send a print job to the printer daemon, such as printing a text file from Emacs or printing an image from GIMP , the print job is added to the print spool queue.
The print spool queue is a list of print jobs that have been sent to the printer and information about each print request, such as the status of the request, the job number, and more. During the printing process, the Printer Status icon appears in the Notification Area on the panel. To check the status of a print job, click the Printer Status , which displays a window similar to Figure 21.18, "GNOME Print Status" . Figure 21.18. GNOME Print Status To cancel, hold, release, reprint, or authenticate a print job, select the job in the GNOME Print Status and on the Job menu, click the respective command. To view the list of print jobs in the print spool from a shell prompt, type the command lpstat -o . The last few lines look similar to the following: Example 21.11. Example of lpstat -o output If you want to cancel a print job, find the job number of the request with the command lpstat -o and then use the command cancel job_number . For example, cancel 60 would cancel the print job in Example 21.11, "Example of lpstat -o output" . You cannot cancel print jobs that were started by other users with the cancel command. However, you can enforce deletion of such a job by issuing the cancel -U root job_number command. To prevent such canceling, change the printer operation policy to Authenticated to force root authentication. You can also print a file directly from a shell prompt. For example, the command lp sample.txt prints the text file sample.txt . The print filter determines what type of file it is and converts it into a format the printer can understand.
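For illustration only, a minimal shell session that ties the commands above together; the job numbers and the file name are placeholders and will differ on your system:
lpstat -o          # list the jobs currently waiting in the print spool queue
cancel 60          # cancel one of your own jobs by its job number
cancel -U root 61  # as root, force-cancel a job submitted by another user
lp sample.txt      # print a text file directly; the print filter converts it for the printer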
|
[
"lpstat -o Charlie-60 twaugh 1024 Tue 08 Feb 2011 16:42:11 GMT Aaron-61 twaugh 1024 Tue 08 Feb 2011 16:42:44 GMT Ben-62 root 1024 Tue 08 Feb 2011 16:45:42 GMT"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-printing-edit
|
Red Hat OpenStack Services on OpenShift Certification Workflow Guide
|
Red Hat OpenStack Services on OpenShift Certification Workflow Guide Red Hat Software Certification 2025 For Use with Red Hat OpenStack 18 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_services_on_openshift_certification_workflow_guide/index
|
Chapter 36. Virtualization
|
Chapter 36. Virtualization SeaBIOS recognizes SCSI devices with a non-zero LUN Previously, SeaBIOS only recognized SCSI devices when the logical unit number (LUN) was set to zero. Consequently, if a SCSI device was defined with a LUN other than zero, SeaBIOS failed to boot. With this update, SeaBIOS recognizes SCSI devices with LUNs other than zero. As a result, SeaBIOS boots successfully. (BZ# 1020622 ) The libguestfs tools now correctly handle guests where /usr/ is not on the same partition as root Previously, the libguestfs library did not recognize the guest operating system when the /usr/ directory was not located on the same partition as the root directory. As a consequence, multiple libguestfs tools, such as the virt-v2v utility, did not perform as expected when used on such guests. This update ensures that libguestfs recognizes guest operating systems when /usr/ is not on the same partition as root. As a result, the affected libguestfs tools perform as expected. (BZ# 1401474 ) virt-v2v can convert Windows guests with corrupted or damaged Windows registries Previously, the hivex library used by libguestfs to manipulate the Windows registry could not handle corrupted registries. Consequently, the virt-v2v utility was not able to convert Windows guests with corrupted or damaged Windows registries. With this update, libguestfs configures hivex to be less strict when reading the Windows registry. As a result, virt-v2v can now convert most Windows guests with corrupted or damaged Windows registries. (BZ# 1311890 , BZ# 1423436 ) Converting Windows guests with non-system dynamic disks using virt-v2v now works correctly Previously, using the virt-v2v utility to convert a Windows guest virtual machine with non-system dynamic disks did not work correctly, and the guest was not usable after the conversion. This update fixes the underlying code and thus prevents the described problem. Note that the conversion of Windows guests using dynamic disks on the system disk (C: drive) is still not supported. (BZ# 1265588 ) Guests can be converted to Glance images, regardless of the Glance client version Previously, if the Glance command-line client version 1.0.0 or greater was installed on the virt-v2v conversion server, using the virt-v2v utility to convert a guest virtual machine to a Glance image failed. With this release, when exporting images, virt-v2v directly sets all the properties of images. As a result, the conversion to Glance works regardless of the version of the Glance client installed on the virt-v2v conversion server. (BZ# 1374405 ) Red Hat Enterprise Linux 6.2 - 6.5 guest virtual machines can now be converted using virt-v2v Previously, an error in the SELinux file_contexts file in Red Hat Enterprise Linux versions 6.2 - 6.5 prevented conversion of these guests using the virt-v2v utility. With this update, virt-v2v automatically fixes the error in the SELinux file_contexts file. As a result, Red Hat Enterprise Linux 6.2-6.5 guest virtual machines can now be converted using virt-v2v . (BZ# 1374232 ) Btrfs entries in /etc/fstab are now parsed correctly by libguestfs Previously, Btrfs sub-volume entries with more than one comma-separated option in /etc/fstab were not parsed properly by libguestfs . Consequently, Linux guest virtual machines with these configurations could not be inspected, and the virt-v2v utility could not convert them. With this update, libguestfs parses Btrfs sub-volume entries with more than one comma-separated option in /etc/fstab correctly.
As a result, these entries can be inspected and converted by virt-v2v . (BZ# 1383517 ) libguestfs can now correctly open libvirt domain disks that require authentication Previously, when adding disks from a libvirt domain, libguestfs did not read any disk secrets. Consequently, libguestfs could not open disks that required authentication. With this update, libguestfs reads secrets of disks in libvirt domains, if present. As a result, libguestfs can now correctly open disks of libvirt domains that require authentication. (BZ# 1392798 ) Converted Windows UEFI guests boot properly Previously, when converting Windows 8 UEFI guests, virtio drivers were not installed correctly. Consequently, the converted guests did not boot. With this update, virtio drivers are installed correctly in Windows UEFI guests. As a result, converted Windows UEFI guests boot properly. (BZ# 1431579 ) The virt-v2v utility now ignores proxy environment variables consistently Prior to this update, when using the virt-v2v utility to convert a VMware guest virtual machine, virt-v2v used the proxy environment variables for some connections to VMware, but not for others. This in some cases caused conversions to fail. Now, virt-v2v ignores all proxy environment settings during the conversion, which prevents the described problem. (BZ# 1354507 ) virt-v2v only copies rhev-apt.exe and rhsrvany.exe when needed Previously, virt-v2v always copied the rhev-apt.exe and rhsrvany.exe files when converting Windows guests. Consequently, they were present in the converted Windows guests, even when they were not needed. With this update, virt-v2v only copies these files when they are needed in the Windows guest. (BZ# 1161019 ) Guests with VLAN over a bonded interface no longer stop passing traffic after a failover Previously, on guest virtual machines with VLAN configured over a bonded interface that used ixgbe virtual functions (VFs), the bonded network interface stopped passing traffic when a failover occurred. The hypervisor console also logged this error as a "requested MACVLAN filter but is administratively denied" message. This update ensures that failovers are handled correctly and thus prevents the described problem. (BZ#1379787) virt-v2v imports OVAs that do not have the <ovf:Name> attribute Previously, the virt-v2v utility rejected the import of Open Virtual Appliances (OVAs) without the <ovf:Name> attribute. As a consequence, the virt-v2v utility did not import OVAs exported by Amazon Web Services (AWS). In this release, if the <ovf:Name> attribute is missing, virt-v2v uses the base name of the disk image file as the name of the virtual machine. As a result, the virt-v2v utility now imports OVAs exported by AWS. (BZ# 1402301 )
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/bug_fixes_virtualization
|
Chapter 34. Jira Transition Issue Sink
|
Chapter 34. Jira Transition Issue Sink Sets a new status (transition to) of an existing issue in Jira. The Kamelet expects the following headers to be set: issueKey / ce-issueKey : as the issue unique code. issueTransitionId / ce-issueTransitionId : as the new status (transition) code. You should carefully check the project workflow as each transition may have conditions to check before the transition is made. The comment of the transition is set in the body of the message. 34.1. Configuration Options The following table summarizes the configuration options available for the jira-transition-issue-sink Kamelet: Property Name Description Type Default Example jiraUrl * Jira URL The URL of your instance of Jira string "http://my_jira.com:8081" password * Password The password or the API Token to access Jira string username * Username The username to access Jira string Note Fields marked with an asterisk (*) are mandatory. 34.2. Dependencies At runtime, the jira-transition-issue-sink Kamelet relies upon the presence of the following dependencies: camel:core camel:jackson camel:jira camel:kamelet mvn:com.fasterxml.jackson.datatype:jackson-datatype-joda:2.12.4.redhat-00001 34.3. Usage This section describes how you can use the jira-transition-issue-sink . 34.3.1. Knative Sink You can use the jira-transition-issue-sink Kamelet as a Knative sink by binding it to a Knative object. jira-transition-issue-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-transition-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueTransitionId" value: 701 - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueKey" value: "MYP-162" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: "jira server url" username: "username" password: "password" 34.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 34.3.1.2. Procedure for using the cluster CLI Save the jira-transition-issue-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-transition-issue-sink-binding.yaml 34.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-transition-issue-sink-binding timer-source?message="The new comment 123"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTransitionId -p step-1.value=5 jira-transition-issue-sink?jiraUrl="jira url"\&username="username"\&password="password" This command creates the KameletBinding in the current namespace on the cluster. 34.3.2. Kafka Sink You can use the jira-transition-issue-sink Kamelet as a Kafka sink by binding it to a Kafka topic. 
jira-transition-issue-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-transition-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueTransitionId" value: 701 - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueKey" value: "MYP-162" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-transition-issue-sink properties: jiraUrl: "jira server url" username: "username" password: "password" 34.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 34.3.2.2. Procedure for using the cluster CLI Save the jira-transition-issue-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-transition-issue-sink-binding.yaml 34.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-transition-issue-sink-binding timer-source?message="The new comment 123"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTransitionId -p step-1.value=5 jira-transition-issue-sink?jiraUrl="jira url"\&username="username"\&password="password" This command creates the KameletBinding in the current namespace on the cluster. 34.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jira-transition-issue-sink.kamelet.yaml
|
[
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-transition-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueTransitionId\" value: 701 - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueKey\" value: \"MYP-162\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"",
"apply -f jira-transition-issue-sink-binding.yaml",
"kamel bind --name jira-transition-issue-sink-binding timer-source?message=\"The new comment 123\"\\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTransitionId -p step-1.value=5 jira-transition-issue-sink?jiraUrl=\"jira url\"\\&username=\"username\"\\&password=\"password\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-transition-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueTransitionId\" value: 701 - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueKey\" value: \"MYP-162\" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-transition-issue-sink properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"",
"apply -f jira-transition-issue-sink-binding.yaml",
"kamel bind --name jira-transition-issue-sink-binding timer-source?message=\"The new comment 123\"\\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTransitionId -p step-1.value=5 jira-transition-issue-sink?jiraUrl=\"jira url\"\\&username=\"username\"\\&password=\"password\""
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/jira-transition-issue-sink
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_go_1.20.10_toolset/making-open-source-more-inclusive
|
5.363. xorg-x11-drv-intel
|
5.363. xorg-x11-drv-intel 5.363.1. RHBA-2012:0995 - xorg-x11-drv-intel bug fix and enhancement update Updated xorg-x11-drv-intel packages that fix two bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The xorg-x11-drv-intel packages contain an Intel integrated graphics video driver for the X.Org implementation of the X Window System. Bug Fixes BZ# 692776 On Lenovo ThinkPad T500 laptops, the display could have stayed blank after opening the lid when it was used with an external display in mirror mode. Consequently, the following message appeared: With this update, the underlying source code has been modified so that the display turns on as expected when the lid is open. BZ# 711452 On Lenovo ThinkPad series laptops, the system did not always resume from the suspend state. This was dependent on monitor configuration and could occur under various circumstances, for example if the laptop was suspended docked with only external display enabled, and later resumed undocked with no external display. With this update, the system now resumes correctly regardless of the monitor configuration. Enhancement BZ# 821521 In addition, this update adds accelerated rendering support for the Intel Core i5 and i7 processors. All users of xorg-x11-drv-intel are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
|
[
"Could not switch the monitor configuration Could not set the configuration for CRT63"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/xorg-x11-drv-intel
|
7.278. X.Org Legacy Input Drivers
|
7.278. X.Org Legacy Input Drivers 7.278.1. RHEA-2013:0295 - X.Org X11 legacy input drivers enhancement update Updated xorg-x11-drv-acecad , xorg-x11-drv-aiptek , xorg-x11-drv-hyperpen , xorg-x11-drv-elographics , xorg-x11-drv-fpit , xorg-x11-drv-mutouch , xorg-x11-drv-penmount , and xorg-x11-drv-void packages that add various enhancements are now available for Red Hat Enterprise Linux 6. The xorg-x11-drv-keyboard and xorg-x11-drv-mouse packages contain the legacy X.Org X11 input drivers for keyboards and mice. The xorg-x11-drv-acecad , xorg-x11-drv-aiptek , xorg-x11-drv-hyperpen , xorg-x11-drv-elographics , xorg-x11-drv-fpit , xorg-x11-drv-mutouch , xorg-x11-drv-penmount , and xorg-x11-drv-void packages contain the X.Org X11 input drivers for legacy devices. The following packages have been upgraded to their respective upstream versions, which provide a number of enhancements over the previous versions:
Table 7.3. Upgraded packages
PACKAGE NAME               UPSTREAM VERSION   BZ NUMBER
xorg-x11-drv-acecad        1.5.0              835212
xorg-x11-drv-aiptek        1.4.1              835215
xorg-x11-drv-elographics   1.4.1              835222
xorg-x11-drv-fpit          1.4.0              835229
xorg-x11-drv-hyperpen      1.4.1              835233
xorg-x11-drv-keyboard      1.6.2              835237
xorg-x11-drv-mouse         1.8.1              835242
xorg-x11-drv-mutouch       1.3.0              835243
xorg-x11-drv-penmount      1.5.0              835248
xorg-x11-drv-void          1.4.0              835264
Users of X.Org X11 legacy input drivers are advised to upgrade to these updated packages, which add these enhancements.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/xorg-x11-legacy
|
Chapter 8. Known Issues
|
Chapter 8. Known Issues This chapter documents known problems in Red Hat Enterprise Linux 7.9. 8.1. Authentication and Interoperability Trusts with Active Directory do not work properly after upgrading ipa-server using the latest container image After upgrading an IdM server with the latest version of the container image, existing trusts with Active Directory domains no longer work. To work around this problem, delete the existing trust and re-establish it after the upgrade. ( BZ#1819745 ) Potential risk when using the default value for ldap_id_use_start_tls option When using ldap:// without TLS for identity lookups, it can pose a risk for an attack vector, particularly a man-in-the-middle (MITM) attack, which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL. (JIRA:RHELPLAN-155168) 8.2. Compiler and Tools GCC thread sanitizer included in RHEL no longer works Due to incompatible changes in kernel memory mapping, the thread sanitizer included with the GNU C Compiler (GCC) version in RHEL no longer works. Additionally, the thread sanitizer cannot be adapted to the incompatible memory layout. As a result, it is no longer possible to use the GCC thread sanitizer included with RHEL. As a workaround, use the version of GCC included in Red Hat Developer Toolset to build code which uses the thread sanitizer. (BZ#1569484) 8.3. Installation and Booting Systems installed as Server with GUI with the DISA STIG profile or with the CIS profile do not start properly The DISA STIG profile and the CIS profile require the removal of the xorg-x11-server-common (X Windows) package but do not require changing the default target. As a consequence, the system is configured to run the GUI but the X Windows package is missing. As a result, the system does not start properly. To work around this problem, do not use the DISA STIG profile and the CIS profile with the Server with GUI software selection or customize the profile by removing the package_xorg-x11-server-common_removed rule. ( BZ#1648162 ) 8.4. Kernel The radeon driver fails to reset hardware correctly when performing kdump When booting the kernel from the currently running kernel, such as when performing the kdump process, the radeon kernel driver currently does not properly reset hardware. Instead, the kdump kernel terminates unexpectedly, which causes the rest of the kdump service to fail. To work around this problem, disable radeon in kdump by adding the following line to the /etc/kdump.conf file: Afterwards, restart the machine and kdump . Note that in this scenario, no graphics will be available during kdump , but kdump will complete successfully.
(BZ#1168430) Slow connection to RHEL 7 guest console on a Windows Server 2019 host When using RHEL 7 as a guest operating system in multi-user mode on a Windows Server 2019 host, connecting to the console output of the guest currently takes significantly longer than expected. To work around this problem, connect to the guest using SSH or use Windows Server 2016 as the host. (BZ#1706522) Kernel deadlocks can occur when dm_crypt is used with intel_qat The intel_qat kernel module uses GFP_ATOMIC memory allocations, which can fail under memory stress. Consequently, kernel deadlocks and possible data corruption can occur when the dm_crypt kernel module uses intel_qat for encryption offload. To work around this problem, you can choose either of the following: Update to RHEL 8 Avoid using intel_qat for encryption offload (potential performance impact) Ensure the system does not come under excessive memory pressure (BZ#1813394) The vmcore file generation fails on Amazon c5a machines on RHEL 7 On Amazon c5a machines, the Advanced Programmable Interrupt Controller (APIC) fails to route the interrupts of the Local APIC (LAPIC), when configured in the flat mode inside the kdump kernel. As a consequence, the kdump kernel fails to boot and prevents the kdump kernel from saving the vmcore file for further analysis. To work around the problem: Increase the crash kernel size by setting the crashkernel argument to 256M : Set the nr_cpus=9 option by editing the /etc/sysconfig/kdump file: As a result, the kdump kernel boots with 9 CPUs and the vmcore file is captured upon kernel crash. Note that the kdump service can use a significant amount of crash kernel memory to dump the vmcore file since it enables 9 CPUs in the kdump kernel. Therefore, ensure that the crash kernel has a size reserve of 256MB available for booting the kdump kernel. (BZ#1844522) Enabling some kretprobes can trigger a kernel panic Using kretprobes of the following functions can cause a CPU hard-lock: _raw_spin_lock _raw_spin_lock_irqsave _raw_spin_unlock_irqrestore queued_spin_lock_slowpath As a consequence, enabling these kprobe events can cause a system response failure. This situation triggers a kernel panic. To work around this problem, avoid configuring kretprobes for the mentioned functions to prevent a system response failure. (BZ#1838903) The kdump service fails on UEFI Secure Boot enabled systems If a UEFI Secure Boot enabled system boots with an outdated RHEL kernel version, the kdump service fails to start. In the described scenario, kdump reports the following error message: This behavior occurs due to either of the following: Booting the crash kernel with an outdated kernel version. Configuring the KDUMP_KERNELVER variable in the /etc/sysconfig/kdump file to an outdated kernel version. As a consequence, kdump fails to start and hence no core dump is saved during the crash event. To work around this problem, use either of the following: Boot the crash kernel with the latest RHEL 7 fixes. Configure KDUMP_KERNELVER in /etc/sysconfig/kdump to use the latest kernel version. As a result, kdump starts successfully in the described scenario. (BZ#1862840) The RHEL installer might not detect iSCSI storage The RHEL installer might not automatically set kernel command-line options related to iSCSI for some offloading iSCSI host bus adapters (HBAs). As a consequence, the RHEL installer might not detect iSCSI storage.
To work around the problem, add the following options to the kernel command line when booting to the installer: These options enable network configuration and iSCSI target discovery from the pre-OS firmware configuration. The firmware configures the iSCSI storage, and as a result, the installer can discover and use the iSCSI storage. (BZ#1871027) Race condition in the mlx5e_rep_neigh_update work queue sometimes triggers the kernel panic When offloading encapsulation actions over the mlx5 device using the switchdev in-kernel driver model in the Single Root I/O Virtualization (SR-IOV) capability, a race condition can happen in the mlx5e_rep_neigh_update work queue. Consequently, the system terminates unexpectedly with the kernel panic and the following message appears: Currently, a workaround or partial mitigation to this problem is not known. (BZ#1874101) The ice driver does not load for Intel(R) network adapters The ice kernel driver does not load for all Intel(R) Ethernet network adapters E810-XXV except the following: v00008086d00001593sv*sd*bc*sc*i* v00008086d00001592sv*sd*bc*sc*i* v00008086d00001591sv*sd*bc*sc*i* Consequently, the network adapter remains undetected by the operating system. To work around this problem, you can use external drivers for RHEL 7 provided by Intel(R) or Dell. (BZ#1933998) kdump does not support setting nr_cpus to 2 or higher in Hyper-V virtual machines When using RHEL 7.9 as a guest operating system on a Microsoft Hyper-V hypervisor, the kdump kernel in some cases becomes unresponsive when the nr_cpus parameter is set to 2 or higher. To avoid this problem from occurring, do not change the default nr_cpus=1 parameter in the /etc/sysconfig/kdump file of the guest. ( BZ#1773478 ) 8.5. Networking Verification of signatures using the MD5 hash algorithm is disabled in Red Hat Enterprise Linux 7 It is impossible to connect to any Wi-Fi Protected Access (WPA) Enterprise Access Point (AP) that requires MD5 signed certificates. To work around this problem, copy the wpa_supplicant.service file from the /usr/lib/systemd/system/ directory to the /etc/systemd/system/ directory and add the following line to the Service section of the file: Then run the systemctl daemon-reload command as root to reload the service file. Important Note that MD5 certificates are highly insecure and Red Hat does not recommend using them. (BZ#1062656) bind-utils DNS lookup utilities support fewer search domains than glibc The dig , host , and nslookup DNS lookup utilities from the bind-utils package support only up to 8 search domains, while the glibc resolver in the system supports any number of search domains. As a consequence, the DNS lookup utilities may get different results than applications when a search in the /etc/resolv.conf file contains more than 8 domains. To work around this problem, use one of the following: Full names ending with a dot, or Fewer than nine domains in the resolv.conf search clause. Note that it is not recommended to use more than three domains. ( BZ#1758317 ) BIND 9.11 changes log severity of query errors when query logging is enabled With the BIND 9.11 update, the log severity for the query-errors changes from debug 1 to info when query logging is enabled. Consequently, additional log entries describing errors now appear in the query log. To work around this problem, add the following statement into the logging section of the /etc/named.conf file: This will move query errors back into the debug log. 
Alternatively, use the following statement to discard all query error messages: As a result, only name queries are logged in a similar way to the BIND 9.9.4 release. (BZ#1853191) named-chroot service fails to start when check-names option is not allowed in forward zone Previously, the usage of the check-names option was allowed in the forward zone definitions. With the rebase to bind 9.11, only the following zone types: master slave stub hint use the check-names statement. Consequently, the check-names option, previously allowed in the forward zone definitions, is no longer accepted and causes a failure on start of the named-chroot service. To work around this problem, remove the check-names option from all the zone types except for master , slave , stub or hint . As a result, the named-chroot service starts again without errors. Note that the ignored statements will not change the provided service. (BZ#1851836) The NFQUEUE target overrides queue-cpu-fanout flag iptables NFQUEUE target using --queue-bypass and --queue-cpu-fanout options accidentally overrides the --queue-cpu-fanout option if ordered after the --queue-bypass option. Consequently, the --queue-cpu-fanout option is ignored. To work around this problem, rearrange the --queue-bypass option before --queue-cpu-fanout option. ( BZ#1851944 ) 8.6. Security Audit executable watches on symlinks do not work File monitoring provided by the -w option cannot directly track a path. It has to resolve the path to a device and an inode to make a comparison with the executed program. A watch monitoring an executable symlink monitors the device and an inode of the symlink itself instead of the program executed in memory, which is found from the resolution of the symlink. Even if the watch resolves the symlink to get the resulting executable program, the rule triggers on any multi-call binary called from a different symlink. This results in flooding logs with false positives. Consequently, Audit executable watches on symlinks do not work. To work around the problem, set up a watch for the resolved path of the program executable, and filter the resulting log messages using the last component listed in the comm= or proctitle= fields. (BZ#1421794) Executing a file while transitioning to another SELinux context requires additional permissions Due to the backport of the fix for CVE-2019-11190 in RHEL 7.8, executing a file while transitioning to another SELinux context requires more permissions than in releases. In most cases, the domain_entry_file() interface grants the newly required permission to the SELinux domain. However, in case the executed file is a script, then the target domain may lack the permission to execute the interpreter's binary. This lack of the newly required permission leads to AVC denials. If SELinux is running in enforcing mode, the kernel might kill the process with the SIGSEGV or SIGKILL signal in such a case. If the problem occurs on the file from the domain which is a part of the selinux-policy package, file a bug against this component. 
If it is part of a custom policy module, Red Hat recommends granting the missing permissions using standard SELinux interfaces: corecmd_exec_shell() for shell scripts corecmd_exec_all_executables() for interpreters labeled as bin_t such as Perl or Python For more details, see the /usr/share/selinux/devel/include/kernel/corecommands.if file provided by the selinux-policy-doc package and the An exception that breaks the stability of the RHEL SELinux policy API article on the Customer Portal. (BZ#1832194) Scanning large numbers of files with OpenSCAP causes systems to run out of memory The OpenSCAP scanner stores all collected results in memory until the scan finishes. As a consequence, the system might run out of memory on systems with low RAM when scanning large numbers of files, for example, from the large package groups Server with GUI and Workstation . To work around this problem, use smaller package groups, for example, Server and Minimal Install on systems with limited RAM. If your scenario requires large package groups, you can test whether your system has sufficient memory in a virtual or staging environment. Alternatively, you can tailor the scanning profile to deselect rules that involve recursion over the entire / filesystem: rpm_verify_hashes rpm_verify_permissions rpm_verify_ownership file_permissions_unauthorized_world_writable no_files_unowned_by_user dir_perms_world_writable_system_owned file_permissions_unauthorized_suid file_permissions_unauthorized_sgid file_permissions_ungroupowned dir_perms_world_writable_sticky_bits This prevents the OpenSCAP scanner from causing the system to run out of memory. ( BZ#1829782 ) RSA signatures with SHA-1 cannot be completely disabled in RHEL 7 Because the ssh-rsa signature algorithm must be allowed in OpenSSH to use the new SHA-2 ( rsa-sha2-512 , rsa-sha2-256 ) signatures, you cannot completely disable SHA-1 algorithms in RHEL 7. To work around this limitation, you can update to RHEL 8 or use ECDSA/Ed25519 keys, which use only SHA-2. ( BZ#1828598 ) rpm_verify_permissions fails in the CIS profile The rpm_verify_permissions rule compares file permissions to package default permissions. However, the Center for Internet Security (CIS) profile, which is provided by the scap-security-guide packages, changes some file permissions to be more strict than the default. As a consequence, verification of certain files using rpm_verify_permissions fails. To work around this problem, manually verify that these files have the following permissions: /etc/cron.d (0700) /etc/cron.hourly (0700) /etc/cron.monthly (0700) /etc/crontab (0600) /etc/cron.weekly (0700) /etc/cron.daily (0700) For more information about the related feature, see SCAP Security Guide now provides a profile aligned with the CIS RHEL 7 Benchmark v2.2.0 . ( BZ#1838622 ) OpenSCAP file ownership-related rules do not work with remote user and group back ends The OVAL language used by the OpenSCAP suite to perform configuration checks has a limited set of capabilities. It cannot obtain a complete list of system users, groups, and their IDs if some of them are remote, for example, if they are stored in an external database such as LDAP. As a consequence, rules that work with user IDs or group IDs do not have access to IDs of remote users. Therefore, such IDs are identified as foreign to the system. This might cause scans to fail on compliant systems.
In the scap-security-guide packages, the following rules are affected: xccdf_org.ssgproject.content_rule_file_permissions_ungroupowned xccdf_org.ssgproject.content_rule_no_files_unowned_by_user To work around this problem, if a rule that deals with user or group IDs fails on a system that defines remote users, check the failed parts manually. The OpenSCAP scanner enables you to specify the --oval-results option together with the --report option. This option displays offending files and UIDs in the HTML report and makes the manual revision process straightforward. Additionally, in RHEL 8.3, the rules in the scap-security-guide packages contain a warning that only local-user back ends have been evaluated. ( BZ#1721439 ) rpm_verify_permissions and rpm_verify_ownership fail in the Essential Eight profile The rpm_verify_permissions rule compares file permissions to package default permissions and the rpm_verify_ownership rule compares file owner to package default owner. However, the Australian Cyber Security Centre (ACSC) Essential Eight profile, which is provided by the scap-security-guide packages, changes some file permissions and ownerships to be more strict than default. As a consequence, verification of certain files using rpm_verify_permissions and rpm_verify_ownership fails. To work around this problem, manually verify that the /usr/libexec/abrt-action-install-debuginfo-to-abrt-cache file is owned by root and that it has suid and sgid bits set. ( BZ#1778661 ) 8.7. Servers and Services The compat-unixODBC234 package for SAP requires a symlink to load the unixODBC library The unixODBC package version 2.3.1 is available in RHEL 7. In addition, the compat-unixODBC234 package version 2.3.4 is available in the RHEL 7 for SAP Solutions sap-hana repository; see New package: compat-unixODBC234 for SAP for details. Due to minor ABI differences between unixODBC version 2.3.1 and 2.3.4, an application built with version 2.3.1 might not work with version 2.3.4 in certain rare cases. To prevent problems caused by this incompatibility, the compat-unixODBC234 package uses a different SONAME for shared libraries available in this package, and the library file is available under /usr/lib64/libodbc.so.1002.0.0 instead of /usr/lib64/libodbc.so.2.0.0 . As a consequence, third party applications built with unixODBC version 2.3.4 that load the unixODBC library in runtime using the dlopen() function fail to load the library with the following error message: To work around this problem, create the following symbolic link: and similar symlinks for other libraries from the compat-unixODBC234 package if necessary. Note that the compat-unixODBC234 package conflicts with the base RHEL 7 unixODBC package. Therefore, uninstall unixODBC prior to installing compat-unixODBC234 . (BZ#1844443) Symbol conflicts between OpenLDAP libraries might cause crashes in httpd When both the libldap and libldap_r libraries provided by OpenLDAP are loaded and used within a single process, symbol conflicts between these libraries might occur. Consequently, Apache httpd child processes using the PHP ldap extension might terminate unexpectedly if the mod_security or mod_auth_openidc modules are also loaded by the httpd configuration. With this update to the Apache Portable Runtime (APR) library, you can work around the problem by setting the APR_DEEPBIND environment variable, which enables the use of the RTLD_DEEPBIND dynamic linker option when loading httpd modules. 
When the APR_DEEPBIND environment variable is enabled, crashes no longer occur in httpd configurations that load conflicting libraries. (BZ#1739287) 8.8. Storage RHEL 7 does not support VMD 2.0 storage The 10th generation Intel Core and 3rd generation Intel Xeon Scalable platforms (also known as Intel Ice Lake) include hardware that utilizes version 2.0 of the Volume Management Device (VMD) technology. RHEL 7 no longer receives updates to support new hardware. As a consequence, RHEL 7 cannot recognize Non-Volatile Memory Express (NVMe) devices that are managed by VMD 2.0. To work around the problem, Red Hat recommends that you upgrade to a recent major RHEL release. (BZ#1942865) SCSI devices cannot be deleted after removing the iSCSI target If a SCSI device is BLOCKED due to a transport issue, including an iSCSI session being disrupted due to a network or target side configuration change, the attached devices cannot be deleted while blocked on transport error recovery. If you attempt to remove the SCSI device using the delete sysfs command ( /sys/block/sd*/device/delete ) it can be blocked indefinitely. To work around this issue, terminate the transport session with the iscsiadm logout commands in either session mode (specifying a session ID) or in node mode (specifying a matching target name and portal for the blocked session). Issuing an iSCSI session logout on a recovering session terminates the session and removes the SCSI devices. (BZ#1439055) 8.9. System and Subscription Management The needs-restarting command from yum-utils might fail to display the container boot time In certain RHEL 7 container environments, the needs-restarting command from the yum-utils package might incorrectly display the host boot time instead of the container boot time. As a consequence, this command might still report a false reboot warning message after you restart the container environment. You can safely ignore this harmless warning message in such a case. ( BZ#2042313 ) 8.10. Virtualization RHEL 7.9 virtual machines on IBM POWER sometimes do not detect hot-plugged devices RHEL7.9 virtual machines (VMs) started on an IBM POWER system on a RHEL 8.3 or later hypervisor do not detect hot-plugged PCI devices if the hot plug is performed when the VM is not fully booted yet. To work around the problem, reboot the VM. (BZ#1854917) 8.11. RHEL in cloud environments Core dumping RHEL 7 virtual machines that use NICs with enabled accelerated networking to a remote machine on Azure fails Currently, using the kdump utility to save the core dump file of a RHEL 7 virtual machine (VM) on a Microsoft Azure hypervisor to a remote machine does not work correctly when the VM is using a NIC with enabled accelerated networking. As a consequence, the kdump operation fails. To prevent this problem from occurring, add the following line to the /etc/kdump.conf file and restart the kdump service. (BZ#1846667) SSH with password login now impossible by default on RHEL 8 virtual machines configured using cloud-init For security reasons, the ssh_pwauth option in the configuration of the cloud-init utility is now set to 0 by default. As a consequence, it is not possible to use a password login when connecting via SSH to RHEL 8 virtual machines (VMs) configured using cloud-init . If you require using a password login for SSH connections to your RHEL 8 VMs configured using cloud-init , set ssh_pwauth: 1 in the /etc/cloud/cloud.cfg file before deploying the VM. (BZ#1685580)
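As a rough sketch of the iSCSI logout workaround for blocked SCSI devices described earlier in this chapter, the blocked session can be terminated in either mode; the session ID, target name, and portal below are placeholders that you would take from the iscsiadm -m session output, not values from this document:
iscsiadm -m session -r 1 -u                                                  # session mode: log out of the blocked session by its session ID
iscsiadm -m node -T iqn.2003-01.com.example:target1 -p 192.0.2.10:3260 -u    # node mode: log out by matching target name and portal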
|
[
"dracut_args --omit-drivers \"radeon\"",
"grubby-args=\"crashkernel=256M\" --update-kernel /boot/vmlinuz-`uname -r`",
"KDUMP_COMMANDLINE_APPEND=\"irqpoll\" *nr_cpus=9* reset_devices cgroup_disable=memory mce=off numa=off udev.children- max=2 panic=10 acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable",
"kexec_file_load failed: Required key not available",
"rd.iscsi.ibft=1 rd.iscsi.firmware=1",
"Workqueue: mlx5e mlx5e_rep_neigh_update [mlx5_core]",
"Environment=OPENSSL_ENABLE_MD5_VERIFY=1",
"category query-errors { default_debug; };",
"category querry-errors { null; };",
"/usr/lib64/libodbc.so.2.0.0: cannot open shared object file: No such file or directory",
"ln -s /usr/lib64/libodbc.so.1002.0.0 /usr/lib64/libodbc.so.2.0.0",
"extra_modules pci_hyperv"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.9_release_notes/known_issues
|
Chapter 19. Setting Ceph OSD full thresholds
|
Chapter 19. Setting Ceph OSD full thresholds You can set Ceph OSD full thresholds using the ODF CLI tool or by updating the StorageCluster CR. 19.1. Setting Ceph OSD full thresholds using the ODF CLI tool You can set Ceph OSD full thresholds temporarily by using the ODF CLI tool. This is necessary in cases when the cluster gets into a full state and the thresholds need to be immediately increased. Prerequisites Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal . Procedure Use the set command to adjust Ceph full thresholds. The set command supports the subcommands full , backfillfull , and nearfull . See the following examples for how to use each subcommand. full This subcommand allows updating the Ceph OSD full ratio in case Ceph prevents the IO operation on OSDs that reached the specified capacity. The default is 0.85 . Note If the value is set too close to 1.0 , the cluster becomes unrecoverable if the OSDs are full and there is nowhere to grow. For example, set the Ceph OSD full ratio to 0.9 and then add capacity: For instructions to add capacity for your specific use case, see the Scaling storage guide . If OSDs continue to be stuck or pending , or do not come up at all: Stop all IOs. Increase the full ratio to 0.92 : Wait for the cluster rebalance to happen. Once the cluster rebalance is complete, change the full ratio back to its original value of 0.85: backfillfull This subcommand allows updating the Ceph OSD backfillfull ratio in case Ceph denies backfilling to the OSD that reached the capacity specified. The default value is 0.80 . Note If the value is set too close to 1.0 , the OSDs become full and the cluster is not able to backfill. For example, to set backfillfull to 0.85 : nearfull This subcommand allows updating the Ceph OSD nearfull ratio in case Ceph returns the nearfull OSDs message when the cluster reaches the capacity specified. The default value is 0.75 . For example, to set nearfull to 0.8 : 19.2. Setting Ceph OSD full thresholds by updating the StorageCluster CR You can set Ceph OSD full thresholds by updating the StorageCluster CR. Use this procedure if you want to override the default settings. Procedure You can update the StorageCluster CR to change the settings for full , backfillfull , and nearfull . full Use the following command to update the Ceph OSD full ratio in case Ceph prevents the IO operation on OSDs that reached the specified capacity. The default is 0.85 . Note If the value is set too close to 1.0 , the cluster becomes unrecoverable if the OSDs are full and there is nowhere to grow. For example, to set the Ceph OSD full ratio to 0.9 : backfillfull Use the following command to set the Ceph OSD backfillfull ratio in case Ceph denies backfilling to the OSD that reached the capacity specified. The default value is 0.80 . Note If the value is set too close to 1.0 , the OSDs become full and the cluster is not able to backfill. For example, to set backfillfull to 0.85 : nearfull Use the following command to set the Ceph OSD nearfull ratio in case Ceph returns the nearfull OSDs message when the cluster reaches the capacity specified. The default value is 0.75 . For example, to set nearfull to 0.8 :
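Separately from the commands referenced above, you can optionally confirm which ratios are currently active on the cluster. The following is a hedged sketch that assumes the rook-ceph-tools toolbox deployment is enabled in the openshift-storage namespace, which this chapter does not cover:
oc -n openshift-storage rsh deploy/rook-ceph-tools ceph osd dump | grep -i ratio    # shows the active full_ratio, backfillfull_ratio, and nearfull_ratio values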
|
[
"odf set full 0.9",
"odf set full 0.92",
"odf set full 0.85",
"odf set backfillfull 0.85",
"odf set nearfull 0.8",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/fullRatio\", \"value\": 0.90 }]'",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/backfillFullRatio\", \"value\": 0.85 }]'",
"oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/nearFullRatio\", \"value\": 0.8 }]'"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/managing_and_allocating_storage_resources/setting-ceph-osd-full-thresholds__rhodf
|
Chapter 3. Debugging Applications
|
Chapter 3. Debugging Applications Debugging applications is a very wide topic. This part provides a developer with the most common techniques for debugging in multiple situations. 3.1. Enabling Debugging with Debugging Information To debug applications and libraries, debugging information is required. The following sections describe how to obtain this information. 3.1.1. Debugging information While debugging any executable code, two types of information allow the tools, and by extension the programmer, to comprehend the binary code: the source code text a description of how the source code text relates to the binary code Such information is called debugging information. Red Hat Enterprise Linux uses the ELF format for executable binaries, shared libraries, or debuginfo files. Within these ELF files, the DWARF format is used to hold the debug information. To display DWARF information stored within an ELF file, run the readelf -w file command. Important STABS is an older, less capable format, occasionally used with UNIX. Its use is discouraged by Red Hat. GCC and GDB provide STABS production and consumption on a best effort basis only. Some other tools such as Valgrind and elfutils do not work with STABS. Additional resources The DWARF Debugging Standard 3.1.2. Enabling debugging of C and C++ applications with GCC Because debugging information is large, it is not included in executable files by default. To enable debugging of your C and C++ applications with it, you must explicitly instruct the compiler to create it. To enable creation of debugging information with GCC when compiling and linking code, use the -g option: Optimizations performed by the compiler and linker can result in executable code which is hard to relate to the original source code: variables may be optimized out, loops unrolled, operations merged into the surrounding ones, and so on. This affects debugging negatively. For improved debugging experience, consider setting the optimization with the -Og option. However, changing the optimization level changes the executable code and may change the actual behaviour including removing some bugs. To also include macro definitions in the debug information, use the -g3 option instead of -g . The -fcompare-debug GCC option tests code compiled by GCC with debug information and without debug information. The test passes if the resulting two binary files are identical. This test ensures that executable code is not affected by any debugging options, which further ensures that there are no hidden bugs in the debug code. Note that using the -fcompare-debug option significantly increases compilation time. See the GCC manual page for details about this option. Additional resources Using the GNU Compiler Collection (GCC) - Options for Debugging Your Program Debugging with GDB - Debugging Information in Separate Files The GCC manual page: 3.1.3. Debuginfo and debugsource packages The debuginfo and debugsource packages contain debugging information and debug source code for programs and libraries. For applications and libraries installed in packages from the Red Hat Enterprise Linux repositories, you can obtain separate debuginfo and debugsource packages from an additional channel. Debugging information package types There are two types of packages available for debugging: Debuginfo packages The debuginfo packages provide debugging information needed to provide human-readable names for binary code features. These packages contain .debug files, which contain DWARF debugging information. 
These files are installed to the /usr/lib/debug directory. Debugsource packages The debugsource packages contain the source files used for compiling the binary code. With both respective debuginfo and debugsource package installed, debuggers such as GDB or LLDB can relate the execution of binary code to the source code. The source code files are installed to the /usr/src/debug directory. 3.1.4. Getting debuginfo packages for an application or library using GDB Debugging information is required to debug code. For code that is installed from a package, the GNU Debugger (GDB) automatically recognizes missing debug information, resolves the package name and provides concrete advice on how to get the package. Prerequisites The application or library you want to debug must be installed on the system. GDB and the debuginfo-install tool must be installed on the system. For details, see Setting up to debug applications . Repositories providing debuginfo and debugsource packages must be configured and enabled on the system. For details, see Enabling debug and source repositories . Procedure Start GDB attached to the application or library you want to debug. GDB automatically recognizes missing debugging information and suggests a command to run. Exit GDB: type q and confirm with Enter . Run the command suggested by GDB to install the required debuginfo packages: The dnf package management tool provides a summary of the changes, asks for confirmation and once you confirm, downloads and installs all the necessary files. In case GDB is not able to suggest the debuginfo package, follow the procedure described in Getting debuginfo packages for an application or library manually . Additional resources How can I download or install debuginfo packages for RHEL systems? (Red Hat Knowledgebase) 3.1.5. Getting debuginfo packages for an application or library manually You can determine manually which debuginfo packages you need to install by locating the executable file and then finding the package that installs it. Note Red Hat recommends that you use GDB to determine the packages for installation. Use this manual procedure only if GDB is not able to suggest the package to install. Prerequisites The application or library must be installed on the system. The application or library was installed from a package. The debuginfo-install tool must be available on the system. Channels providing the debuginfo packages must be configured and enabled on the system. Procedure Find the executable file of the application or library. Use the which command to find the application file. Use the locate command to find the library file. If the original reasons for debugging include error messages, pick the result where the library has the same additional numbers in its file name as those mentioned in the error messages. If in doubt, try following the rest of the procedure with the result where the library file name includes no additional numbers. Note The locate command is provided by the mlocate package. To install it and enable its use: Search for a name and version of the package that provided the file: The output provides details for the installed package in the name : epoch - version . release . architecture format. Important If this step does not produce any results, it is not possible to determine which package provided the binary file. There are several possible cases: The file is installed from a package which is not known to package management tools in their current configuration. 
The file is installed from a locally downloaded and manually installed package. Determining a suitable debuginfo package automatically is impossible in that case. Your package management tools are misconfigured. The file is not installed from any package. In such a case, no respective debuginfo package exists. Because further steps depend on this one, you must resolve this situation or abort this procedure. Describing the exact troubleshooting steps is beyond the scope of this procedure. Install the debuginfo packages using the debuginfo-install utility. In the command, use the package name and other details you determined during the step: Additional resources How can I download or install debuginfo packages for RHEL systems? (Red Hat Knowledgebase) 3.2. Inspecting Application Internal State with GDB To find why an application does not work properly, control its execution and examine its internal state with a debugger. This section describes how to use the GNU Debugger (GDB) for this task. 3.2.1. GNU debugger (GDB) Red Hat Enterprise Linux contains the GNU debugger (GDB) which lets you investigate what is happening inside a program through a command-line user interface. GDB capabilities A single GDB session can debug the following types of programs: Multithreaded and forking programs Multiple programs at once Programs on remote machines or in containers with the gdbserver utility connected over a TCP/IP network connection Debugging requirements To debug any executable code, GDB requires debugging information for that particular code: For programs developed by you, you can create the debugging information while building the code. For system programs installed from packages, you must install their debuginfo packages. 3.2.2. Attaching GDB to a process In order to examine a process, GDB must be attached to the process. Prerequisites GDB must be installed on the system Starting a program with GDB When the program is not running as a process, start it with GDB: Replace program with a file name or path to the program. GDB sets up to start execution of the program. You can set up breakpoints and the gdb environment before beginning the execution of the process with the run command. Attaching GDB to an already running process To attach GDB to a program already running as a process: Find the process ID ( pid ) with the ps command: Replace program with a file name or path to the program. Attach GDB to this process: Replace pid with an actual process ID number from the ps output. Attaching an already running GDB to an already running process To attach an already running GDB to an already running program: Use the shell GDB command to run the ps command and find the program's process ID ( pid ): Replace program with a file name or path to the program. Use the attach command to attach GDB to the program: Replace pid by an actual process ID number from the ps output. Note In some cases, GDB might not be able to find the respective executable file. Use the file command to specify the path: Additional resources Debugging with GDB - 2.1 Invoking GDB Debugging with GDB - 4.7 Debugging an Already-running Process 3.2.3. Stepping through program code with GDB Once the GDB debugger is attached to a program, you can use a number of commands to control the execution of the program. 
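To tie the attach workflow above and the stepping commands described below together, here is a short, entirely hypothetical GDB transcript; the PID, file names, line number, and variable name are placeholders, not values taken from this document:
$ gdb -p 2907                   # attach GDB to a running process by its PID
(gdb) file /usr/bin/myprogram   # point GDB at the executable if it is not found automatically
(gdb) br main.c:42              # set a breakpoint at a source line
(gdb) c                         # continue execution until the breakpoint is reached
(gdb) n                         # execute the next source line
(gdb) p status                  # print the value of a variable named status
(gdb) bt                        # display the chain of function calls
(gdb) q                         # detach from the process and exit GDB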
Prerequisites You must have the required debugging information available: The program is compiled and built with debugging information, or The relevant debuginfo packages are installed GDB must be attached to the program to be debugged GDB commands to step through the code r (run) Start the execution of the program. If run is executed with any arguments, those arguments are passed on to the executable as if the program has been started normally. Users normally issue this command after setting breakpoints. start Start the execution of the program but stop at the beginning of the program's main function. If start is executed with any arguments, those arguments are passed on to the executable as if the program has been started normally. c (continue) Continue the execution of the program from the current state. The execution of the program will continue until one of the following becomes true: A breakpoint is reached. A specified condition is satisfied. A signal is received by the program. An error occurs. The program terminates. n (next) Continue the execution of the program from the current state, until the next line of code in the current source file is reached. The execution of the program will continue until one of the following becomes true: A breakpoint is reached. A specified condition is satisfied. A signal is received by the program. An error occurs. The program terminates. s (step) The step command also halts execution at each sequential line of code in the current source file. However, if the execution is currently stopped at a source line containing a function call , GDB stops the execution after entering the function call (rather than executing it). until location Continue the execution until the code location specified by the location option is reached. fini (finish) Resume the execution of the program and halt when execution returns from a function. The execution of the program will continue until one of the following becomes true: A breakpoint is reached. A specified condition is satisfied. A signal is received by the program. An error occurs. The program terminates. q (quit) Terminate the execution and exit GDB. Additional resources Debugging with GDB - Starting your Program Debugging with GDB - Continuing and Stepping 3.2.4. Showing program internal values with GDB Displaying the values of a program's internal variables is important for understanding what the program is doing. GDB offers multiple commands that you can use to inspect the internal variables. The following are the most useful of these commands: p (print) Display the value of the argument given. Usually, the argument is the name of a variable of any complexity, from a simple single value to a structure. An argument can also be an expression valid in the current language, including the use of program variables and library functions, or functions defined in the program being tested. It is possible to extend GDB with pretty-printer Python or Guile scripts for customized display of data structures (such as classes, structs) using the print command. bt (backtrace) Display the chain of function calls used to reach the current execution point, or the chain of functions used up until execution was terminated. This is useful for investigating serious bugs (such as segmentation faults) with elusive causes. Adding the full option to the backtrace command displays local variables, too. It is possible to extend GDB with frame filter Python scripts for customized display of data displayed using the bt and info frame commands.
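A minimal inspection sequence using these two commands might look like the following; the variable name is a placeholder:
# print the value of a variable visible in the current frame
(gdb) p status
# show the full call chain, including local variables in every frame
(gdb) bt full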
The term frame refers to the data associated with a single function call. info The info command is a generic command to provide information about various items. It takes an option specifying the item to describe. The info args command displays options of the function call that is the currently selected frame. The info locals command displays local variables in the currently selected frame. For a list of the possible items, run the command help info in a GDB session: l (list) Show the line in the source code where the program stopped. This command is available only when the program execution is stopped. While not strictly a command to show internal state, list helps the user understand what changes to the internal state will happen in the step of the program's execution. Additional resources The GDB Python API - Red Hat Developers Blog entry Debugging with GDB - Pretty Printing 3.2.5. Using GDB breakpoints to stop execution at defined code locations Often, only small portions of code are investigated. Breakpoints are markers that tell GDB to stop the execution of a program at a certain place in the code. Breakpoints are most commonly associated with source code lines. In that case, placing a breakpoint requires specifying the source file and line number. To place a breakpoint : Specify the name of the source code file and the line in that file: When file is not present, name of the source file at the current point of execution is used: Alternatively, use a function name to put the breakpoint on its start: A program might encounter an error after a certain number of iterations of a task. To specify an additional condition to halt execution: Replace condition with a condition in the C or C++ language. The meaning of file and line is the same as above. To inspect the status of all breakpoints and watchpoints: To remove a breakpoint by using its number as displayed in the output of info br : To remove a breakpoint at a given location: Additional resources Debugging with GDB - Breakpoints, Watchpoints, and Catchpoints 3.2.6. Using GDB watchpoints to stop execution on data access and changes In many cases, it is advantageous to let the program execute until certain data changes or is accessed. The following examples are the most common use cases. Prerequisites Understanding GDB Using watchpoints in GDB Watchpoints are markers which tell GDB to stop the execution of a program. Watchpoints are associated with data: placing a watchpoint requires specifying an expression that describes a variable, multiple variables, or a memory address. To place a watchpoint for data change (write): Replace expression with an expression that describes what you want to watch. For variables, expression is equal to the name of the variable. To place a watchpoint for data access (read): To place a watchpoint for any data access (both read and write): To inspect the status of all watchpoints and breakpoints: To remove a watchpoint: Replace the num option with the number reported by the info br command. Additional resources Debugging with GDB - Setting Watchpoints 3.2.7. Debugging forking or threaded programs with GDB Some programs use forking or threads to achieve parallel code execution. Debugging multiple simultaneous execution paths requires special considerations. Prerequisites You must understand the concepts of process forking and threads. Debugging forked programs with GDB Forking is a situation when a program ( parent ) creates an independent copy of itself ( child ). 
Use the following settings and commands to affect what GDB does when a fork occurs: The follow-fork-mode setting controls whether GDB follows the parent or the child after the fork. set follow-fork-mode parent After a fork, debug the parent process. This is the default. set follow-fork-mode child After a fork, debug the child process. show follow-fork-mode Display the current setting of follow-fork-mode . The set detach-on-fork setting controls whether the GDB keeps control of the other (not followed) process or leaves it to run. set detach-on-fork on The process which is not followed (depending on the value of follow-fork-mode ) is detached and runs independently. This is the default. set detach-on-fork off GDB keeps control of both processes. The process which is followed (depending on the value of follow-fork-mode ) is debugged as usual, while the other is suspended. show detach-on-fork Display the current setting of detach-on-fork . Debugging Threaded Programs with GDB GDB has the ability to debug individual threads, and to manipulate and examine them independently. To make GDB stop only the thread that is examined, use the commands set non-stop on and set target-async on . You can add these commands to the .gdbinit file. After that functionality is turned on, GDB is ready to conduct thread debugging. GDB uses a concept of current thread . By default, commands apply to the current thread only. info threads Display a list of threads with their id and gid numbers, indicating the current thread. thread id Set the thread with the specified id as the current thread. thread apply ids command Apply the command command to all threads listed by ids . The ids option is a space-separated list of thread ids. A special value all applies the command to all threads. break location thread id if condition Set a breakpoint at a certain location with a certain condition only for the thread number id . watch expression thread id Set a watchpoint defined by expression only for the thread number id . command& Execute command command and return immediately to the gdb prompt (gdb) , continuing any code execution in the background. interrupt Halt execution in the background. Additional resources Debugging with GDB - 4.10 Debugging Programs with Multiple Threads Debugging with GDB - 4.11 Debugging Forks 3.3. Recording Application Interactions The executable code of applications interacts with the code of the operating system and shared libraries. Recording an activity log of these interactions can provide enough insight into the application's behavior without debugging the actual application code. Alternatively, analyzing an application's interactions can help pinpoint the conditions in which a bug manifests. 3.3.1. Tools useful for recording application interactions Red Hat Enterprise Linux offers multiple tools for analyzing an application's interactions. strace The strace tool primarily enables logging of system calls (kernel functions) used by an application. The strace tool can provide a detailed output about calls, because strace interprets parameters and results with knowledge of the underlying kernel code. Numbers are turned into the respective constant names, bitwise combined flags expanded to flag list, pointers to character arrays dereferenced to provide the actual string, and more. Support for more recent kernel features may be lacking. You can filter the traced calls to reduce the amount of captured data. The use of strace does not require any particular setup except for setting up the log filter. 
Tracing the application code with strace results in significant slowdown of the application's execution. As a result, strace is not suitable for many production deployments. As an alternative, consider using ltrace or SystemTap. The version of strace available in Red Hat Developer Toolset can also perform system call tampering. This capability is useful for debugging. ltrace The ltrace tool enables logging of an application's user space calls into shared objects (dynamic libraries). The ltrace tool enables tracing calls to any library. You can filter the traced calls to reduce the amount of captured data. The use of ltrace does not require any particular setup except for setting up the log filter. The ltrace tool is lightweight and fast, offering an alternative to strace : it is possible to trace the respective interfaces in libraries such as glibc with ltrace instead of tracing kernel functions with strace . Because ltrace does not handle a known set of calls like strace , it does not attempt to explain the values passed to library functions. The ltrace output contains only raw numbers and pointers. The interpretation of ltrace output requires consulting the actual interface declarations of the libraries present in the output. Note In Red Hat Enterprise Linux 9, a known issue prevents ltrace from tracing system executable files. This limitation does not apply to executable files built by users. SystemTap SystemTap is an instrumentation platform for probing running processes and kernel activity on the Linux system. SystemTap uses its own scripting language for programming custom event handlers. Compared to using strace and ltrace , scripting the logging means more work in the initial setup phase. However, the scripting capabilities extend SystemTap's usefulness beyond just producing logs. SystemTap works by creating and inserting a kernel module. The use of SystemTap is efficient and does not create a significant slowdown of the system or application execution on its own. SystemTap comes with a set of usage examples. GDB The GNU Debugger (GDB) is primarily meant for debugging, not logging. However, some of its features make it useful even in the scenario where an application's interaction is the primary activity of interest. With GDB, it is possible to conveniently combine the capture of an interaction event with immediate debugging of the subsequent execution path. GDB is best suited for analyzing response to infrequent or singular events, after the initial identification of problematic situation by other tools. Using GDB in any scenario with frequent events becomes inefficient or even impossible. Additional resources Getting started with SystemTap Red Hat Developer Toolset User Guide 3.3.2. Monitoring an application's system calls with strace The strace tool enables monitoring the system (kernel) calls performed by an application. Prerequisites You must have strace installed on the system. Procedure Identify the system calls to monitor. Start strace and attach it to the program. If the program you want to monitor is not running, start strace and specify the program : If the program is already running, find its process id ( pid ) and attach strace to it: Replace call with the system calls to be displayed. You can use the -e trace= call option multiple times. If left out, strace will display all system call types. See the strace(1) manual page for more information. If you do not want to trace any forked processes or threads, leave out the -f option. 
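For example, a filtered invocation that attaches to a hypothetical running process, traces only a few file-related system calls, and keeps a copy of the log might look like this; the PID and the list of calls are placeholders:
# attach to PID 1234, trace selected system calls, and log the output to a file as well as the terminal
strace -fvttTyy -s 256 -e trace=openat,read,write,close -p 1234 |& tee strace.log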
The strace tool displays the system calls made by the application and their details. In most cases, an application and its libraries make a large number of calls and strace output appears immediately, if no filter for system calls is set. The strace tool exits when the program exits. To terminate the monitoring before the traced program exits, press Ctrl+C . If strace started the program, the program terminates together with strace . If you attached strace to an already running program, the program terminates together with strace . Analyze the list of system calls done by the application. Problems with resource access or availability are present in the log as calls returning errors. Values passed to the system calls and patterns of call sequences provide insight into the causes of the application's behaviour. If the application crashes, the important information is probably at the end of the log. The output contains a lot of unnecessary information. However, you can construct a more precise filter for the system calls of interest and repeat the procedure. Note It is advantageous to both see the output and save it to a file. Use the tee command to achieve this: Additional resources The strace(1) manual page: How do I use strace to trace system calls made by a command? - Knowledgebase article Red Hat Developer Toolset User Guide - Chapter strace 3.3.3. Monitoring application's library function calls with ltrace The ltrace tool enables monitoring an application's calls to functions available in libraries (shared objects). Note In Red Hat Enterprise Linux 9, a known issue prevents ltrace from tracing system executable files. This limitation does not apply to executable files built by users. Prerequisites You must have ltrace installed on the system. Procedure Identify the libraries and functions of interest, if possible. Start ltrace and attach it to the program. If the program you want to monitor is not running, start ltrace and specify program : If the program is already running, find its process id ( pid ) and attach ltrace to it: Use the -e , -f and -l options to filter the output: Supply the function names to be displayed as function . The -e function option can be used multiple times. If left out, ltrace displays calls to all functions. Instead of specifying functions, you can specify whole libraries with the -l library option. This option behaves similarly to the -e function option. If you do not want to trace any forked processes or threads, leave out the -f option. See the ltrace(1) manual page for more information. ltrace displays the library calls made by the application. In most cases, an application makes a large number of calls and ltrace output displays immediately, if no filter is set. ltrace exits when the program exits. To terminate the monitoring before the traced program exits, press Ctrl+C . If ltrace started the program, the program terminates together with ltrace . If you attached ltrace to an already running program, the program terminates together with ltrace . Analyze the list of library calls done by the application. If the application crashes, the important information is probably at the end of the log. The output contains a lot of unnecessary information. However, you can construct a more precise filter and repeat the procedure. Note It is advantageous to both see the output and save it to a file. Use the tee command to achieve this: Additional resources The ltrace(1) manual page: Red Hat Developer Toolset User Guide - Chapter ltrace 3.3.4.
Monitoring application's system calls with SystemTap The SystemTap tool enables registering custom event handlers for kernel events. In comparison with the strace tool, it is harder to use but more efficient and enables more complicated processing logic. A SystemTap script called strace.stp is installed together with SystemTap and provides an approximation of strace functionality using SystemTap. Prerequisites SystemTap and the respective kernel packages must be installed on the system. Procedure Find the process ID ( pid ) of the process you want to monitor: Run SystemTap with the strace.stp script: The value of pid is the process id. The script is compiled to a kernel module, which is then loaded. This introduces a slight delay between entering the command and getting the output. When the process performs a system call, the call name and its parameters are printed to the terminal. The script exits when the process terminates, or when you press Ctrl+C . 3.3.5. Using GDB to intercept application system calls GNU Debugger (GDB) lets you stop an execution in various situations that arise during program execution. To stop the execution when the program performs a system call, use a GDB catchpoint . Prerequisites You must understand the usage of GDB breakpoints. GDB must be attached to the program. Procedure Set the catchpoint: The command catch syscall sets a special type of breakpoint that halts execution when the program performs a system call. The syscall-name option specifies the name of the call. You can specify multiple catchpoints for various system calls. Leaving out the syscall-name option causes GDB to stop on any system call. Start execution of the program. If the program has not started execution, start it: If the program execution is halted, resume it: GDB halts execution after the program performs any specified system call. Additional resources Debugging with GDB - Setting Watchpoints 3.3.6. Using GDB to intercept handling of signals by applications GNU Debugger (GDB) lets you stop the execution in various situations that arise during program execution. To stop the execution when the program receives a signal from the operating system, use a GDB catchpoint . Prerequisites You must understand the usage of GDB breakpoints. GDB must be attached to the program. Procedure Set the catchpoint: The command catch signal sets a special type of a breakpoint that halts execution when a signal is received by the program. The signal-type option specifies the type of the signal. Use the special value 'all' to catch all signals. Let the program run. If the program has not started execution, start it: If the program execution is halted, resume it: GDB halts execution after the program receives any specified signal. Additional resources Debugging With GDB - 5.1.3 Setting Catchpoints 3.4. Debugging a Crashed Application Sometimes, it is not possible to debug an application directly. In these situations, you can collect information about the application at the moment of its termination and analyze it afterwards. 3.4.1. Core dumps: what they are and how to use them A core dump is a copy of a part of the application's memory at the moment the application stopped working, stored in the ELF format. It contains all the application's internal variables and stack, which enables inspection of the application's final state. When augmented with the respective executable file and debugging information, it is possible to analyze a core dump file with a debugger in a way similar to analyzing a running program. 
The Linux operating system kernel can record core dumps automatically, if this functionality is enabled. Alternatively, you can send a signal to any running application to generate a core dump regardless of its actual state. Warning Some limits might affect the ability to generate a core dump. To see the current limits: 3.4.2. Recording application crashes with core dumps To record application crashes, set up core dump saving and add information about the system. Procedure To enable core dumps, ensure that the /etc/systemd/system.conf file contains the following lines: You can also add comments describing if these settings were previously present, and what the values were. This will enable you to reverse these changes later, if needed. Comments are lines starting with the # character. Changing the file requires administrator level access. Apply the new configuration: Remove the limits for core dump sizes: To reverse this change, run the command with value 0 instead of unlimited . Install the sos package which provides the sosreport utility for collecting system information: When an application crashes, a core dump is generated and handled by systemd-coredump . Create an SOS report to provide additional information about the system: This creates a .tar archive containing information about your system, such as copies of configuration files. Locate and export the core dump: If the application crashed multiple times, output of the first command lists more captured core dumps. In that case, construct for the second command a more precise query using the other information. See the coredumpctl(1) manual page for details. Transfer the core dump and the SOS report to the computer where the debugging will take place. Transfer the executable file, too, if it is known. Important When the executable file is not known, subsequent analysis of the core file identifies it. Optional: Remove the core dump and SOS report after transferring them, to free up disk space. Additional resources Managing systemd in the document Configuring basic system settings How to enable core file dumps when an application crashes or segmentation faults (Red Hat Knowledgebase) What is a sosreport and how to create one in Red Hat Enterprise Linux 4.6 and later? (Red Hat Knowledgebase) 3.4.3. Inspecting application crash states with core dumps Prerequisites You must have a core dump file and sosreport from the system where the crash occurred. GDB and elfutils must be installed on your system. Procedure To identify the executable file where the crash occurred, run the eu-unstrip command with the core dump file: The output contains details for each module on a line, separated by spaces. The information is listed in this order: The memory address where the module was mapped The build-id of the module and where in the memory it was found The module's executable file name, displayed as - when unknown, or as . when the module has not been loaded from a file The source of debugging information, displayed as a file name when available, as . when contained in the executable file itself, or as - when not present at all The shared library name ( soname ) or [exe] for the main module In this example, the important details are the file name /usr/bin/sleep and the build-id 2818b2009547f780a5639c904cded443e564973e on the line containing the text [exe] . With this information, you can identify the executable file required for analyzing the core dump. Get the executable file that crashed. 
If possible, copy it from the system where the crash occurred. Use the file name extracted from the core file. You can also use an identical executable file on your system. Each executable file built on Red Hat Enterprise Linux contains a note with a unique build-id value. Determine the build-id of the relevant locally available executable files: Use this information to match the executable file on the remote system with your local copy. The build-id of the local file and build-id listed in the core dump must match. Finally, if the application is installed from an RPM package, you can get the executable file from the package. Use the sosreport output to find the exact version of the package required. Get the shared libraries used by the executable file. Use the same steps as for the executable file. If the application is distributed as a package, load the executable file in GDB, to display hints for missing debuginfo packages. For more details, see Section 3.1.4, "Getting debuginfo packages for an application or library using GDB" . To examine the core file in detail, load the executable file and core dump file with GDB: Further messages about missing files and debugging information help you identify what is missing for the debugging session. Return to the step if needed. If the application's debugging information is available as a file instead of as a package, load this file in GDB with the symbol-file command: Replace program.debug with the actual file name. Note It might not be necessary to install the debugging information for all executable files contained in the core dump. Most of these executable files are libraries used by the application code. These libraries might not directly contribute to the problem you are analyzing, and you do not need to include debugging information for them. Use the GDB commands to inspect the state of the application at the moment it crashed. See Inspecting Application Internal State with GDB . Note When analyzing a core file, GDB is not attached to a running process. Commands for controlling execution have no effect. Additional resources Debugging with GDB - 2.1.1 Choosing Files Debugging with GDB - 18.1 Commands to Specify Files Debugging with GDB - 18.3 Debugging Information in Separate Files 3.4.4. Creating and accessing a core dump with coredumpctl The coredumpctl tool of systemd can significantly streamline working with core dumps on the machine where the crash happened. This procedure outlines how to capture a core dump of unresponsive process. Prerequisites The system must be configured to use systemd-coredump for core dump handling. To verify this is true: The configuration is correct if the output starts with the following: Procedure Find the PID of the hung process, based on a known part of the executable file name: This command will output a line in the form Use the command-line value to verify that the PID belongs to the intended process. For example: Send an abort signal to the process: Verify that the core has been captured by coredumpctl : For example: Further examine or use the core file as needed. You can specify the core dump by PID and other values. See the coredumpctl(1) manual page for further details. To show details of the core file: To load the core file in the GDB debugger: Depending on availability of debugging information, GDB will suggest commands to run, such as: For more details on this process, see Getting debuginfo packages for an application or library using GDB . 
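For example, assuming a hypothetical crashed process with PID 5459, the inspection steps above might look like this:
# confirm that a core dump was captured for the process
coredumpctl list 5459
# show details of the captured core file
coredumpctl info 5459
# open the core file in GDB
coredumpctl debug 5459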
To export the core file for further processing elsewhere: Replace /path/to/file_for_export with the file where you want to put the core dump. 3.4.5. Dumping process memory with gcore The workflow of core dump debugging enables the analysis of the program's state offline. In some cases, you can use this workflow with a program that is still running, such as when it is hard to access the environment with the process. You can use the gcore command to dump the memory of any process while it is still running. Prerequisites You must understand what core dumps are and how they are created. GDB must be installed on the system. Procedure Find out the process id ( pid ). Use tools such as ps , pgrep , and top : Dump the memory of this process: This creates a file filename and dumps the process memory in it. While the memory is being dumped, the execution of the process is halted. After the core dump is finished, the process resumes normal execution. Create an SOS report to provide additional information about the system: This creates a tar archive containing information about your system, such as copies of configuration files. Transfer the program's executable file, core dump, and the SOS report to the computer where the debugging will take place. Optional: Remove the core dump and SOS report after transferring them, to free up disk space. Additional resources How to obtain a core file without restarting an application? (Red Hat Knowledgebase) 3.4.6. Dumping protected process memory with GDB You can mark the memory of processes as not to be dumped. This can save resources and ensure additional security when the process memory contains sensitive data: for example, in banking or accounting applications or on whole virtual machines. Both kernel core dumps ( kdump ) and manual core dumps ( gcore , GDB) do not dump memory marked this way. In some cases, you must dump the whole contents of the process memory regardless of these protections. This procedure shows how to do this using the GDB debugger. Prerequisites You must understand what core dumps are. GDB must be installed on the system. GDB must be already attached to the process with protected memory. Procedure Set GDB to ignore the settings in the /proc/PID/coredump_filter file: Set GDB to ignore the memory page flag VM_DONTDUMP : Dump the memory: Replace core-file with the name of the file where you want to dump the memory. Additional resources Debugging with GDB - How to Produce a Core File from Your Program 3.5. Compatibility breaking changes in GDB The version of GDB provided in Red Hat Enterprise Linux 9 contains a number of changes that break compatibility. The following sections provide more details about these changes. Commands The gdb -P python-script.py command is no longer supported. Use the gdb -ex 'source python-script.py' command instead. The gdb COREFILE command is no longer supported. Use the gdb EXECUTABLE --core COREFILE command instead to load the executable specified in the core file. GDB now styles output by default. This new change might break scripts that try to parse the output of GDB. Use the gdb -ex 'set style enabled off' command to disable styling in scripts. Commands now define syntax for symbols according to the language. The info functions , info types , info variables and rbreak commands now define the syntax for entities according to the language chosen by the set language command. Setting it to auto means that GDB automatically chooses the language of the shown entities.
The set print raw frame-arguments and show print raw frame-arguments commands have been deprecated. These commands are replaced with the set print raw-frame-arguments and show print raw-frame-arguments commands. The old commands may be removed in future versions. The following TUI commands are now case-sensitive: focus winheight + - > < The help and apropos commands now display command information only once. These commands now show the documentation of a command only once, even if that command has one or more aliases. These commands now show the command name, then all of its aliases, and finally the description of the command. The MI interpreter The default version of the MI interpreter is now 3. The output of information about multi-location breakpoints (which is syntactically incorrect in MI 2) has changed in MI 3. This affects the following commands and events: -break-insert -break-info =breakpoint-created =breakpoint-modified Use the -fix-multi-location-breakpoint-output command to enable this behavior with MI versions. Python API The following symbols are now deprecated: gdb.SYMBOL_VARIABLES_DOMAIN gdb.SYMBOL_FUNCTIONS_DOMAIN gdb.SYMBOL_TYPES_DOMAIN The gdb.Value type has a new constructor, which is used to construct a gdb.Value from a Python buffer object and a gdb.Type . The frame information printed by the Python frame filtering code is now consistent with what the backtrace command prints when there are no filters, or when using the backtrace command's -no-filters option. 3.6. Debugging applications in containers You can use various command-line tools tailored to different aspects of troubleshooting. The following provides categories along with common command-line tools. Note This is not a complete list of command-line tools. The choice of tool for debugging a container application is heavily based on the container image and your use case. For instance, the systemctl , journalctl , ip , netstat , ping , traceroute , perf , iostat tools may need root access because they interact with system-level resources such as networking, systemd services, or hardware performance counters, which are restricted in rootless containers for security reasons. Rootless containers operate without requiring elevated privileges, running as non-root users within user namespaces to provide improved security and isolation from the host system. They offer limited interaction with the host, reduced attack surface, and enhanced security by mitigating the risk of privilege escalation vulnerabilities. Rootful containers run with elevated privileges, typically as the root user, granting full access to system resources and capabilities. While rootful containers offer greater flexibility and control, they pose security risks due to their potential for privilege escalation and exposure of the host system to vulnerabilities. For more information about rootful and rootless containers, see Setting up rootless containers , Upgrading to rootless containers , and Special considerations for rootless containers . Systemd and Process Management Tools systemctl Controls systemd services within containers, allowing start, stop, enable, and disable operations. journalctl Views logs generated by systemd services, aiding in troubleshooting container issues. Networking Tools ip Manages network interfaces, routing, and addresses within containers. netstat Displays network connections, routing tables, and interface statistics. ping Verifies network connectivity between containers or hosts. 
traceroute Identifies the path packets take to reach a destination, useful for diagnosing network issues. Process and Performance Tools ps Lists currently running processes within containers. top Provides real-time insights into resource usage by processes within containers. htop Interactive process viewer for monitoring resource utilization. perf CPU performance profiling, tracing, and monitoring, aiding in pinpointing performance bottlenecks within the system or applications. vmstat Reports virtual memory statistics within containers, aiding in performance analysis. iostat Monitors input/output statistics for block devices within containers. gdb (GNU Debugger) A command-line debugger that helps in examining and debugging programs by allowing users to track and control their execution, inspect variables, and analyze memory and registers during runtime. For more information, see the Debugging applications within Red Hat OpenShift containers article. strace Intercepts and records system calls made by a program, aiding in troubleshooting by revealing interactions between the program and the operating system. Security and Access Control Tools sudo Enables executing commands with elevated privileges. chroot Changes the root directory for a command, helpful in testing or troubleshooting within a different root directory. Podman-Specific Tools podman logs Batch-retrieves whatever logs are present for one or more containers at the time of execution. podman inspect Displays the low-level information on containers and images as identified by name or ID. podman events Monitor and print events that occur in Podman. Each event includes a timestamp, a type, a status, a name (if applicable), and an image (if applicable). The default logging mechanism is journald . podman run --health-cmd Use the health check to determine the health or readiness of the process running inside the container. podman top Display the running processes of the container. podman exec Running commands in or attaching to a running container is extremely useful to get a better understanding of what is happening in the container. podman export When the container fails, it is basically impossible to know what happened. Exporting the filesystem structure from the container will allow for checking other log files that may not be in the mounted volumes. A short example sequence using several of these commands follows at the end of this section. Additional resources Debugging applications within Red Hat OpenShift containers gdb Debugging a Crashed Application Core dump, sosreport , gdb , ps , core . Troubleshooting Kubernetes Docker exec + env, netstat , kubectl , etcdctl , journalctl , docker logs Tips and Tricks for containerizing services Watch, podman logs , systemctl , podman exec / kill / restart , podman inspect , podman top , podman exec , podman export , paunch External links Ten tips for debugging Docker containers
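As a minimal example, a first-response sequence for a misbehaving container might combine several of the Podman commands above; the container name is a placeholder:
# follow the container's logs and inspect its low-level configuration
podman logs --follow mycontainer
podman inspect mycontainer
# look at the processes inside the container and open an interactive shell in it
podman top mycontainer
podman exec -it mycontainer /bin/bash
# after a failure, export the container's file system for offline inspection
podman export -o mycontainer.tar mycontainer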
|
[
"gcc ... -g",
"man gcc",
"gdb -q /bin/ls Reading symbols from /bin/ls...Reading symbols from .gnu_debugdata for /usr/bin/ls...(no debugging symbols found)...done. (no debugging symbols found)...done. Missing separate debuginfos, use: dnf debuginfo-install coreutils-8.30-6.el8.x86_64 (gdb)",
"(gdb) q",
"dnf debuginfo-install coreutils-8.30-6.el8.x86_64",
"which less /usr/bin/less",
"locate libz | grep so /usr/lib64/libz.so.1 /usr/lib64/libz.so.1.2.11",
"dnf install mlocate updatedb",
"rpm -qf /usr/lib64/libz.so.1.2.7 zlib-1.2.11-10.el8.x86_64",
"debuginfo-install zlib-1.2.11-10.el8.x86_64",
"gdb program",
"ps -C program -o pid h pid",
"gdb -p pid",
"(gdb) shell ps -C program -o pid h pid",
"(gdb) attach pid",
"(gdb) file path/to/program",
"(gdb) help info",
"(gdb) br file:line",
"(gdb) br line",
"(gdb) br function_name",
"(gdb) br file:line if condition",
"(gdb) info br",
"(gdb) delete number",
"(gdb) clear file:line",
"(gdb) watch expression",
"(gdb) rwatch expression",
"(gdb) awatch expression",
"(gdb) info br",
"(gdb) delete num",
"strace -fvttTyy -s 256 -e trace= call program",
"ps -C program (...) strace -fvttTyy -s 256 -e trace= call -p pid",
"strace ... |& tee your_log_file.log",
"man strace",
"ltrace -f -l library -e function program",
"ps -C program (...) ltrace -f -l library -e function -p pid program",
"ltrace ... |& tee your_log_file.log",
"man ltrace",
"ps -aux",
"stap /usr/share/systemtap/examples/process/strace.stp -x pid",
"(gdb) catch syscall syscall-name",
"(gdb) r",
"(gdb) c",
"(gdb) catch signal signal-type",
"(gdb) r",
"(gdb) c",
"ulimit -a",
"DumpCore=yes DefaultLimitCORE=infinity",
"systemctl daemon-reexec",
"ulimit -c unlimited",
"dnf install sos",
"sosreport",
"coredumpctl list executable-name coredumpctl dump executable-name > /path/to/file-for-export",
"eu-unstrip -n --core= ./core.9814 0x400000+0x207000 2818b2009547f780a5639c904cded443e564973e@0x400284 /usr/bin/sleep /usr/lib/debug/bin/sleep.debug [exe] 0x7fff26fff000+0x1000 1e2a683b7d877576970e4275d41a6aaec280795e@0x7fff26fff340 . - linux-vdso.so.1 0x35e7e00000+0x3b6000 374add1ead31ccb449779bc7ee7877de3377e5ad@0x35e7e00280 /usr/lib64/libc-2.14.90.so /usr/lib/debug/lib64/libc-2.14.90.so.debug libc.so.6 0x35e7a00000+0x224000 3ed9e61c2b7e707ce244816335776afa2ad0307d@0x35e7a001d8 /usr/lib64/ld-2.14.90.so /usr/lib/debug/lib64/ld-2.14.90.so.debug ld-linux-x86-64.so.2",
"eu-readelf -n executable_file",
"gdb -e executable_file -c core_file",
"(gdb) symbol-file program.debug",
"sysctl kernel.core_pattern",
"kernel.core_pattern = |/usr/lib/systemd/systemd-coredump",
"pgrep -a executable-name-fragment",
"PID command-line",
"pgrep -a bc 5459 bc",
"kill -ABRT PID",
"coredumpctl list PID",
"coredumpctl list 5459 TIME PID UID GID SIG COREFILE EXE Thu 2019-11-07 15:14:46 CET 5459 1000 1000 6 present /usr/bin/bc",
"coredumpctl info PID",
"coredumpctl debug PID",
"Missing separate debuginfos, use: dnf debuginfo-install bc-1.07.1-5.el8.x86_64",
"coredumpctl dump PID > /path/to/file_for_export",
"ps -C some-program",
"gcore -o filename pid",
"sosreport",
"(gdb) set use-coredump-filter off",
"(gdb) set dump-excluded-mappings on",
"(gdb) gcore core-file"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/developing_c_and_cpp_applications_in_rhel_9/debugging-applications_developing-applications
|
Chapter 7. Installing a cluster on IBM Cloud into an existing VPC
|
Chapter 7. Installing a cluster on IBM Cloud into an existing VPC In OpenShift Container Platform version 4.17, you can install a cluster into an existing Virtual Private Cloud (VPC) on IBM Cloud(R). The installation program provisions the rest of the required infrastructure, which you can then further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud(R) . 7.2. About using a custom VPC In OpenShift Container Platform 4.17, you can deploy a cluster into the subnets of an existing IBM(R) Virtual Private Cloud (VPC). Deploying OpenShift Container Platform into an existing VPC can help you avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 7.2.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 7.2.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to the existing VPC. As part of the installation, specify the following in the install-config.yaml file: The name of the existing resource group that contains the VPC and subnets ( networkResourceGroupName ) The name of the existing VPC ( vpcName ) The subnets that were created for control plane machines and compute machines ( controlPlaneSubnets and computeSubnets ) Note Additional installer-provisioned cluster resources are deployed to a separate resource group ( resourceGroupName ). You can specify this resource group before installing the cluster. If undefined, a new resource group is created for the cluster. To ensure that the subnets that you provide are suitable, the installation program confirms the following: All of the subnets that you specify exist. For each availability zone in the region, you specify: One subnet for control plane machines. One subnet for compute machines. The machine CIDR that you specified contains the subnets for the compute machines and control plane machines. Note Subnet IDs are not supported. 7.2.3. 
Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP port 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.17, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
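For example, on such a system you might generate an ECDSA key instead; the path and file name placeholders are the same as in the command above:
# create an ECDSA key (an RSA key created with -t rsa -b 4096 also works)
ssh-keygen -t ecdsa -N '' -f <path>/<file_name>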
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 7.6. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. 
Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 7.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Cloud(R). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select ibmcloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Cloud(R) 7.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.1. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). 
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 7.7.2. Tested instance types for IBM Cloud The following IBM Cloud(R) instance types have been tested with OpenShift Container Platform. Example 7.1. Machine series bx2-8x32 bx2d-4x16 bx3d-4x20 cx2-8x16 cx2d-4x8 cx3d-8x20 gx2-8x64x1v100 gx3-16x80x1l4 gx3d-160x1792x8h100 mx2-8x64 mx2d-4x32 mx3d-4x40 ox2-8x64 ux2d-2x56 vx2d-4x56 Additional resources Optimizing storage 7.7.3. Sample customized install-config.yaml file for IBM Cloud You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 11 resourceGroupName: eu-gb-example-cluster-rg 12 networkResourceGroupName: eu-gb-example-existing-network-rg 13 vpcName: eu-gb-example-network-1 14 controlPlaneSubnets: 15 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 16 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: External pullSecret: '{"auths": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19 1 8 11 17 Required. The installation program prompts you for this value. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The name of an existing resource group. All installer-provisioned cluster resources are deployed to this resource group. If undefined, a new resource group is created for the cluster. 13 Specify the name of the resource group that contains the existing virtual private cloud (VPC). 
The existing VPC and subnets should be in this resource group. The cluster will be installed to this VPC. 14 Specify the name of an existing VPC. 15 Specify the name of the existing subnets to which to deploy the control plane machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 16 Specify the name of the existing subnets to which to deploy the compute machines. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 18 Enables or disables FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 19 Optional: provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.7.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.8. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 
2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 7.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . 
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.10. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.17 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.17 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.17 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 7.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 7.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 7.13. Next steps Customize your cluster . Optional: Opt out of remote health reporting .
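As an optional, generic verification after logging in, you can confirm that the nodes and cluster Operators are healthy before moving on to post-installation tasks. These are standard oc checks rather than IBM Cloud(R)-specific steps:
oc get nodes
oc get clusteroperators
All nodes should report a Ready status, and the cluster Operators should report Available as True .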
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IC_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: eu-gb 11 resourceGroupName: eu-gb-example-cluster-rg 12 networkResourceGroupName: eu-gb-example-existing-network-rg 13 vpcName: eu-gb-example-network-1 14 controlPlaneSubnets: 15 - eu-gb-example-network-1-cp-eu-gb-1 - eu-gb-example-network-1-cp-eu-gb-2 - eu-gb-example-network-1-cp-eu-gb-3 computeSubnets: 16 - eu-gb-example-network-1-compute-eu-gb-1 - eu-gb-example-network-1-compute-eu-gb-2 - eu-gb-example-network-1-compute-eu-gb-3 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_ibm_cloud/installing-ibm-cloud-vpc
|
5.4.16.5. Changing the Number of Images in an Existing RAID1 Device
|
5.4.16.5. Changing the Number of Images in an Existing RAID1 Device You can change the number of images in an existing RAID1 array just as you can change the number of images in the earlier implementation of LVM mirroring, by using the lvconvert command to specify the number of additional metadata/data subvolume pairs to add or remove. For information on changing the volume configuration in the earlier implementation of LVM mirroring, see Section 5.4.3.4, "Changing Mirrored Volume Configuration" . When you add images to a RAID1 device with the lvconvert command, you can specify the total number of images for the resulting device, or you can specify how many images to add to the device. You can also optionally specify on which physical volumes the new metadata/data image pairs will reside. Metadata subvolumes (named *_rmeta_* ) always exist on the same physical devices as their data subvolume counterparts ( *_rimage_* ). The metadata/data subvolume pairs will not be created on the same physical volumes as those from another metadata/data subvolume pair in the RAID array (unless you specify --alloc anywhere ). The format for the command to add images to a RAID1 volume is as follows: For example, the following display shows the LVM device my_vg/my_lv which is a 2-way RAID1 array: The following command converts the 2-way RAID1 device my_vg/my_lv to a 3-way RAID1 device: When you add an image to a RAID1 array, you can specify which physical volumes to use for the image. The following command converts the 2-way RAID1 device my_vg/my_lv to a 3-way RAID1 device, specifying that the physical volume /dev/sdd1 be used for the array: To remove images from a RAID1 array, use the following command. When you remove images from a RAID1 device with the lvconvert command, you can specify the total number of images for the resulting device, or you can specify how many images to remove from the device. You can also optionally specify the physical volumes from which to remove the device. Additionally, when an image and its associated metadata subvolume are removed, any higher-numbered images will be shifted down to fill the slot. If you remove lv_rimage_1 from a 3-way RAID1 array that consists of lv_rimage_0 , lv_rimage_1 , and lv_rimage_2 , this results in a RAID1 array that consists of lv_rimage_0 and lv_rimage_1 . The subvolume lv_rimage_2 will be renamed and take over the empty slot, becoming lv_rimage_1 . The following example shows the layout of a 3-way RAID1 logical volume my_vg/my_lv . The following command converts the 3-way RAID1 logical volume into a 2-way RAID1 logical volume. The following command converts the 3-way RAID1 logical volume into a 2-way RAID1 logical volume, specifying the physical volume that contains the image to remove as /dev/sde1 .
|
[
"lvconvert -m new_absolute_count vg/lv [ removable_PVs ] lvconvert -m + num_additional_images vg/lv [ removable_PVs ]",
"lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0)",
"lvconvert -m 2 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)",
"lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 56.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) lvconvert -m 2 my_vg/my_lv /dev/sdd1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 28.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdd1(0)",
"lvconvert -m new_absolute_count vg/lv [ removable_PVs ] lvconvert -m - num_fewer_images vg/lv [ removable_PVs ]",
"lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)",
"lvconvert -m1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0)",
"lvconvert -m1 my_vg/my_lv /dev/sde1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdf1(1) [my_lv_rimage_1] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sdf1(0) [my_lv_rmeta_1] /dev/sdg1(0)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/RAID-upconvert
|
7.8 Release Notes
|
7.8 Release Notes Red Hat Enterprise Linux 7 Release Notes for Red Hat Enterprise Linux 7.8 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.8_release_notes/index
|
Chapter 3. Joining RHEL systems to an Active Directory by using RHEL system roles
|
Chapter 3. Joining RHEL systems to an Active Directory by using RHEL system roles If your organization uses Microsoft Active Directory (AD) to centrally manage users, groups, and other resources, you can join your Red Hat Enterprise Linux (RHEL) host to this AD. For example, AD users can then log into RHEL and you can make services on the RHEL host available for authenticated AD users. By using the ad_integration RHEL system role, you can automate the integration of a Red Hat Enterprise Linux system into an Active Directory (AD) domain. Note The ad_integration role is for deployments using direct AD integration without an Identity Management (IdM) environment. For IdM environments, use the ansible-freeipa roles. 3.1. Joining RHEL to an Active Directory domain by using the ad_integration RHEL system role You can use the ad_integration RHEL system role to automate the process of joining RHEL to an Active Directory (AD) domain. Prerequisites You have prepared the control node and the managed nodes. You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed node uses a DNS server that can resolve AD DNS entries. Credentials of an AD account which has permissions to join computers to the domain. Ensure that the required ports are open: Ports required for direct integration of RHEL systems into AD using SSSD Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: usr: administrator pwd: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Active Directory integration hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Join an Active Directory ansible.builtin.include_role: name: rhel-system-roles.ad_integration vars: ad_integration_user: "{{ usr }}" ad_integration_password: "{{ pwd }}" ad_integration_realm: "ad.example.com" ad_integration_allow_rc4_crypto: false ad_integration_timesync_source: "time_server.ad.example.com" The settings specified in the example playbook include the following: ad_integration_allow_rc4_crypto: <true|false> Configures whether the role activates the AD-SUPPORT crypto policy on the managed node. By default, RHEL does not support the weak RC4 encryption but, if Kerberos in your AD still requires RC4, you can enable this encryption type by setting ad_integration_allow_rc4_crypto: true . Omit this variable or set it to false if Kerberos uses AES encryption. ad_integration_timesync_source: <time_server> Specifies the NTP server to use for time synchronization. Kerberos requires a synchronized time among AD domain controllers and domain members to prevent replay attacks. If you omit this variable, the ad_integration role does not utilize the timesync RHEL system role to configure time synchronization on the managed node. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.ad_integration/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. 
Run the playbook: Verification Check if AD users, such as administrator , are available locally on the managed node: Additional resources /usr/share/ansible/roles/rhel-system-roles.ad_integration/README.md file /usr/share/doc/rhel-system-roles/ad_integration/ directory Ansible vault
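If you also want to confirm an interactive login, the following is a minimal sketch rather than part of the documented procedure; it reuses the administrator example account and managed node from above and assumes the default SSSD behavior of fully qualified AD user names:
ssh -l [email protected] managed-node-01.example.com
A successful password login as the AD user indicates that authentication against AD is working.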
|
[
"ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>",
"usr: administrator pwd: <password>",
"--- - name: Active Directory integration hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Join an Active Directory ansible.builtin.include_role: name: rhel-system-roles.ad_integration vars: ad_integration_user: \"{{ usr }}\" ad_integration_password: \"{{ pwd }}\" ad_integration_realm: \"ad.example.com\" ad_integration_allow_rc4_crypto: false ad_integration_timesync_source: \"time_server.ad.example.com\"",
"ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml",
"ansible-playbook --ask-vault-pass ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'getent passwd [email protected]' [email protected]:*:1450400500:1450400513:Administrator:/home/[email protected]:/bin/bash"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/integrating_rhel_systems_directly_with_windows_active_directory/integrating-rhel-systems-into-ad-directly-with-ansible-using-rhel-system-roles_integrating-rhel-systems-directly-with-active-directory
|
Chapter 10. Troubleshooting scrub and deep-scrub issues
|
Chapter 10. Troubleshooting scrub and deep-scrub issues Learn to troubleshoot the scrub and deep-scrub issues. 10.1. Addressing the scrub slowness issue while upgrading from 6 to 7 Learn to troubleshoot the scrub slowness issue which is seen after upgrading from 6 to 7. Scrub slowness is caused by the automated OSD benchmark setting a very low value for osd_mclock_max_capacity_iops_hdd . Due to this, scrub operations are impacted since the IOPS capacity of an OSD plays a significant role in determining the bandwidth the scrub operation receives. To further compound the problem, scrubs receive only a fraction of the total IOPS capacity based on the QoS allocation defined by the mClock profile. Due to this, the Ceph cluster reports the expected scrub completion time in multiples of days or weeks. Prerequisites A running Red Hat Ceph Storage cluster in a healthy state. Root-level access to the node. Procedure Detect low measured IOPS reported by OSD bench during OSD boot-up and fall back to the default IOPS setting defined for osd_mclock_max_capacity_iops_[hdd|ssd] . The fallback is triggered if the reported IOPS falls below a threshold determined by osd_mclock_iops_capacity_low_threshold_[hdd|ssd] . A cluster warning is also logged. Example: [Optional]: Perform the following steps if you have not yet upgraded to 7 from 6 (before the upgrade): For clusters already affected by the issue, remove the IOPS capacity setting on the OSD(s) before upgrading to the release with the fix by running the following command: Example: Set the osd_mclock_force_run_benchmark_on_init option for the affected OSD to true before the upgrade: Example: After upgrading to the release with this fix, the IOPS capacity reflects the default setting or the new one reported by the OSD bench. [Optional]: Perform the following steps if you have already upgraded from 6 to 7 (after upgrade): If you were unable to perform the above steps before upgrade, you can re-run the OSD bench after upgrading by removing the osd_mclock_max_capacity_iops_[hdd|ssd] setting: Example: Set osd_mclock_force_run_benchmark_on_init to true . Example: Restart the OSD. After the OSD restarts, the IOPS capacity reflects the default setting or the new setting reported by the OSD bench.
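To confirm which IOPS capacity value is in effect after the restart, you can query the option for the OSD. This is a generic verification sketch; replace osd.X and the hdd or ssd suffix to match your environment:
ceph config get osd.X osd_mclock_max_capacity_iops_hdd
If the option was removed as described above and the benchmark ran again, the command returns either the default value or the newly measured value.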
|
[
"ceph config rm osd.X osd_mclock_max_capacity_iops_[hdd|ssd]",
"ceph config rm osd.X osd_mclock_max_capacity_iops_[hdd|ssd]",
"ceph config set osd.X osd_mclock_force_run_benchmark_on_init true",
"ceph config rm osd.X osd_mclock_max_capacity_iops_[hdd|ssd]",
"ceph config set osd.X osd_mclock_force_run_benchmark_on_init true"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/troubleshooting_guide/troubleshootig-scrub-and-deep-scrub-issues_diag
|
Deploying installer-provisioned clusters on bare metal
|
Deploying installer-provisioned clusters on bare metal OpenShift Container Platform 4.15 Deploying installer-provisioned OpenShift Container Platform clusters on bare metal Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/deploying_installer-provisioned_clusters_on_bare_metal/index
|
Chapter 13. File-based configuration
|
Chapter 13. File-based configuration AMQ C++ can read the configuration options used to establish connections from a local file named connect.json . This enables you to configure connections in your application at the time of deployment. The library attempts to read the file when the application calls the container connect method without supplying any connection options. 13.1. File locations If set, AMQ C++ uses the value of the MESSAGING_CONNECT_FILE environment variable to locate the configuration file. If MESSAGING_CONNECT_FILE is not set, AMQ C++ searches for a file named connect.json at the following locations and in the order shown. It stops at the first match it encounters. On Linux: USDPWD/connect.json , where USDPWD is the current working directory of the client process USDHOME/.config/messaging/connect.json , where USDHOME is the current user home directory /etc/messaging/connect.json On Windows: %cd%/connect.json , where %cd% is the current working directory of the client process If no connect.json file is found, the library uses default values for all options. 13.2. The file format The connect.json file contains JSON data, with additional support for JavaScript comments. All of the configuration attributes are optional or have default values, so a simple example need only provide a few details: Example: A simple connect.json file { "host": "example.com", "user": "alice", "password": "secret" } SASL and SSL/TLS options are nested under "sasl" and "tls" namespaces: Example: A connect.json file with SASL and SSL/TLS options { "host": "example.com", "user": "ortega", "password": "secret", "sasl": { "mechanisms": ["SCRAM-SHA-1", "SCRAM-SHA-256"] }, "tls": { "cert": "/home/ortega/cert.pem", "key": "/home/ortega/key.pem" } } 13.3. Configuration options The option keys containing a dot (.) represent attributes nested inside a namespace. Table 13.1. Configuration options in connect.json Key Value type Default value Description scheme string "amqps" "amqp" for cleartext or "amqps" for SSL/TLS host string "localhost" The hostname or IP address of the remote host port string or number "amqps" A port number or port literal user string None The user name for authentication password string None The password for authentication sasl.mechanisms list or string None (system defaults) A JSON list of enabled SASL mechanisms. A bare string represents one mechanism. If none are specified, the client uses the default mechanisms provided by the system. sasl.allow_insecure boolean false Enable mechanisms that send cleartext passwords tls.cert string None The filename or database ID of the client certificate tls.key string None The filename or database ID of the private key for the client certificate tls.ca string None The filename, directory, or database ID of the CA certificate tls.verify boolean true Require a valid server certificate with a matching hostname
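For example, to make a deployment use one specific configuration file regardless of the search locations listed above, you can point the client at it explicitly before starting your application. The file path and program name here are placeholders:
export MESSAGING_CONNECT_FILE=/opt/myapp/connect.json
./my-amqp-client
Because the environment variable takes precedence over the search locations, this is a convenient way to switch connection settings per deployment without rebuilding the application.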
|
[
"{ \"host\": \"example.com\", \"user\": \"alice\", \"password\": \"secret\" }",
"{ \"host\": \"example.com\", \"user\": \"ortega\", \"password\": \"secret\", \"sasl\": { \"mechanisms\": [\"SCRAM-SHA-1\", \"SCRAM-SHA-256\"] }, \"tls\": { \"cert\": \"/home/ortega/cert.pem\", \"key\": \"/home/ortega/key.pem\" } }"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_cpp_client/file_based_configuration
|
Release Notes
|
Release Notes Red Hat Virtualization 4.3 Release notes for Red Hat Virtualization 4.3 Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract The Release Notes provide high-level coverage of the improvements and additions that have been implemented in Red Hat Virtualization 4.3.
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/release_notes/index
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/tested_deployment_models/providing-feedback
|
Chapter 38. hypervisor
|
Chapter 38. hypervisor This chapter describes the commands under the hypervisor command. 38.1. hypervisor list List hypervisors Usage: Table 38.1. Command arguments Value Summary -h, --help Show this help message and exit --matching <hostname> Filter hypervisors using <hostname> substring --long List additional fields in output Table 38.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 38.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 38.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 38.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 38.2. hypervisor show Display hypervisor details Usage: Table 38.6. Positional arguments Value Summary <hypervisor> Hypervisor to display (name or id) Table 38.7. Command arguments Value Summary -h, --help Show this help message and exit Table 38.8. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 38.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 38.10. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 38.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 38.3. hypervisor stats show Display hypervisor stats details Usage: Table 38.12. Command arguments Value Summary -h, --help Show this help message and exit Table 38.13. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 38.14. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 38.15. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 38.16. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
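For example, the options described above can be combined as follows; the hypervisor name is a placeholder for a name or ID returned by the list command:
openstack hypervisor list --long
openstack hypervisor show compute-0.example.com -f json
The first command prints the additional fields enabled by --long , and the second prints the details of a single hypervisor as JSON.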
|
[
"openstack hypervisor list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--matching <hostname>] [--long]",
"openstack hypervisor show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <hypervisor>",
"openstack hypervisor stats show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/hypervisor
|
Appendix A. Troubleshooting
|
Appendix A. Troubleshooting A.1. Ansible stops installation because it detects fewer devices than expected The Ansible automation application stops the installation process and returns the following error: What this means: When the osd_auto_discovery parameter is set to true in the /usr/share/ceph-ansible/group_vars/osds.yml file, Ansible automatically detects and configures all the available devices. During this process, Ansible expects that all OSDs use the same devices. The devices get their names in the same order in which Ansible detects them. If one of the devices fails on one of the OSDs, Ansible fails to detect the failed device and stops the whole installation process. Example situation: Three OSD nodes ( host1 , host2 , host3 ) use the /dev/sdb , /dev/sdc , and /dev/sdd disks. On host2 , the /dev/sdc disk fails and is removed. Upon the reboot, Ansible fails to detect the removed /dev/sdc disk and expects that only two disks will be used for host2 , /dev/sdb and /dev/sdc (formerly /dev/sdd ). Ansible stops the installation process and returns the above error message. To fix the problem: In the /etc/ansible/hosts file, specify the devices used by the OSD node with the failed disk ( host2 in the Example situation above): See Chapter 5, Installing Red Hat Ceph Storage using Ansible for details.
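Before pinning the devices in /etc/ansible/hosts , it can help to double-check which disks are actually present on the affected node. This is an optional sanity check and not part of the original procedure; host2 is the example node from above:
ansible host2 -m command -a 'lsblk -o NAME,SIZE,TYPE'
Only the disks reported here should be listed in the devices variable for that node.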
|
[
"- name: fix partitions gpt header or labels of the osd disks (autodiscover disks) shell: \"sgdisk --zap-all --clear --mbrtogpt -- '/dev/{{ item.0.item.key }}' || sgdisk --zap-all --clear --mbrtogpt -- '/dev/{{ item.0.item.key }}'\" with_together: - \"{{ osd_partition_status_results.results }}\" - \"{{ ansible_devices }}\" changed_when: false when: - ansible_devices is defined - item.0.item.value.removable == \"0\" - item.0.item.value.partitions|count == 0 - item.0.rc != 0",
"[osds] host1 host2 devices=\"[ '/dev/sdb', '/dev/sdc' ]\" host3"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/installation_guide/troubleshooting
|
2.2. Query aggregation operations
|
2.2. Query aggregation operations JDG 6.6 adds aggregation operations such as sum, average, min/max, count, group-by, and order-by to the Query DSL. This enhancement is available in both Library and Client-Server modes.
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.0_release_notes/query_aggregation_operations
|
Chapter 4. Demoting or promoting hidden replicas
|
Chapter 4. Demoting or promoting hidden replicas After a replica has been installed, you can configure whether the replica is hidden or visible. For details about hidden replicas, see The hidden replica mode . Prerequisites Ensure that the replica is not the DNSSEC key master. If it is, move the service to another replica before making this replica hidden. Ensure that the replica is not a CA renewal server. If it is, move the service to another replica before making this replica hidden. For details, see the Identity Management documentation on changing the CA renewal server. Procedure To hide a replica: To make a replica visible again: To view a list of all the hidden replicas in your topology: If all of your replicas are enabled, the command output does not mention hidden replicas.
|
[
"ipa server-state replica.idm.example.com --state=hidden",
"ipa server-state replica.idm.example.com --state=enabled",
"ipa config-show"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_replication_in_identity_management/demoting-or-promoting-hidden-replicas_managing-replication-in-idm
|
5.6. Securing FTP
|
5.6. Securing FTP The File Transfer Protocol , or FTP , is an older TCP protocol designed to transfer files over a network. Because all transactions with the server, including user authentication, are unencrypted, it is considered an insecure protocol and should be carefully configured. Red Hat Enterprise Linux provides three FTP servers. gssftpd - A kerberized xinetd -based FTP daemon which does not pass authentication information over the network. Red Hat Content Accelerator ( tux ) - A kernel-space Web server with FTP capabilities. vsftpd - A standalone, security-oriented implementation of the FTP service. The following security guidelines are for setting up the vsftpd FTP service. 5.6.1. FTP Greeting Banner Before submitting a username and password, all users are presented with a greeting banner. By default, this banner includes version information useful to crackers trying to identify weaknesses in a system. To change the greeting banner for vsftpd , add the following directive to the /etc/vsftpd/vsftpd.conf file: Replace <insert_greeting_here> in the above directive with the text of the greeting message. For multi-line banners, it is best to use a banner file. To simplify management of multiple banners, place all banners in a new directory called /etc/banners/ . The banner file for FTP connections in this example is /etc/banners/ftp.msg . Below is an example of what such a file may look like: Note It is not necessary to begin each line of the file with 220 as specified in Section 5.1.1.1, "TCP Wrappers and Connection Banners" . To reference this greeting banner file for vsftpd , add the following directive to the /etc/vsftpd/vsftpd.conf file: It is also possible to send additional banners to incoming connections using TCP wrappers as described in Section 5.1.1.1, "TCP Wrappers and Connection Banners" .
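Putting these steps together, the following sketch creates the banner directory and file described above and then restarts vsftpd so that the banner_file directive takes effect; the restart command assumes the standard init scripts shipped with this release:
mkdir -p /etc/banners
vi /etc/banners/ftp.msg
echo "banner_file=/etc/banners/ftp.msg" >> /etc/vsftpd/vsftpd.conf
service vsftpd restart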
|
[
"ftpd_banner= <insert_greeting_here>",
"#################################################### # Hello, all activity on ftp.example.com is logged.# ####################################################",
"banner_file=/etc/banners/ftp.msg"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s1-server-ftp
|
Chapter 24. Feature support and limitations in RHEL 9 virtualization
|
Chapter 24. Feature support and limitations in RHEL 9 virtualization This document provides information about feature support and restrictions in Red Hat Enterprise Linux 9 (RHEL 9) virtualization. 24.1. How RHEL virtualization support works A set of support limitations applies to virtualization in Red Hat Enterprise Linux 9 (RHEL 9). This means that when you use certain features or exceed a certain amount of allocated resources when using virtual machines in RHEL 9, Red Hat will not support these guests unless you have a specific subscription plan. Features listed in Recommended features in RHEL 9 virtualization have been tested and certified by Red Hat to work with the KVM hypervisor on a RHEL 9 system. Therefore, they are fully supported and recommended for use in virtualization in RHEL 9. Features listed in Unsupported features in RHEL 9 virtualization may work, but are not supported and not intended for use in RHEL 9. Therefore, Red Hat strongly recommends not using these features in RHEL 9 with KVM. Resource allocation limits in RHEL 9 virtualization lists the maximum amount of specific resources supported on a KVM guest in RHEL 9. Guests that exceed these limits are not supported by Red Hat. In addition, unless stated otherwise, all features and solutions used by the documentation for RHEL 9 virtualization are supported. However, some of these have not been completely tested and therefore may not be fully optimized. Important Many of these limitations do not apply to other virtualization solutions provided by Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP). 24.2. Recommended features in RHEL 9 virtualization The following features are recommended for use with the KVM hypervisor included with Red Hat Enterprise Linux 9 (RHEL 9): Host system architectures RHEL 9 with KVM is only supported on the following host architectures: AMD64 and Intel 64 IBM Z - IBM z13 systems and later ARM 64 Any other hardware architectures are not supported for using RHEL 9 as a KVM virtualization host, and Red Hat highly discourages doing so. Guest operating systems Red Hat provides support with KVM virtual machines that use specific guest operating systems (OSs). For a detailed list of supported guest OSs, see the Certified Guest Operating Systems in the Red Hat KnowledgeBase . Note, however, that by default, your guest OS does not use the same subscription as your host. Therefore, you must activate a separate licence or subscription for the guest OS to work properly. In addition, the pass-through devices that you attach to the VM must be supported by both the host OS and the guest OS. Similarly, for optimal function of your deployment, Red Hat recommends that the CPU model and features that you define in the XML configuration of a VM are supported by both the host OS and the guest OS. To view the certified CPUs and other hardware for various versions of RHEL, see the Red Hat Ecosystem Catalog . Machine types To ensure that your VM is compatible with your host architecture and that the guest OS runs optimally, the VM must use an appropriate machine type. Important In RHEL 9, pc-i440fx-rhel7.5.0 and earlier machine types, which were default in earlier major versions of RHEL, are no longer supported. As a consequence, attempting to start a VM with such machine types on a RHEL 9 host fails with an unsupported configuration error. 
If you encounter this problem after upgrading your host to RHEL 9, see the Red Hat Knowledgebase solution Invalid virtual machines that used to work with RHEL 9 and newer hypervisors . When creating a VM by using the command line , the virt-install utility provides multiple methods of setting the machine type. When you use the --os-variant option, virt-install automatically selects the machine type recommended for your host CPU and supported by the guest OS. If you do not use --os-variant or require a different machine type, use the --machine option to specify the machine type explicitly. If you specify a --machine value that is unsupported or not compatible with your host, virt-install fails and displays an error message. The recommended machine types for KVM virtual machines on supported architectures, and the corresponding values for the --machine option, are as follows. Y stands for the latest minor version of RHEL 9. On Intel 64 and AMD64 (x86_64): pc-q35-rhel9. Y .0 --machine=q35 On IBM Z (s390x): s390-ccw-virtio-rhel9. Y .0 --machine=s390-ccw-virtio On ARM 64 : virt-rhel9. Y .0 --machine=virt To obtain the machine type of an existing VM: # virsh dumpxml VM-name | grep machine= To view the full list of machine types supported on your host: # /usr/libexec/qemu-kvm -M help Additional resources Unsupported features in RHEL 9 virtualization Resource allocation limits in RHEL 9 virtualization 24.3. Unsupported features in RHEL 9 virtualization The following features are not supported by the KVM hypervisor included with Red Hat Enterprise Linux 9 (RHEL 9): Important Many of these limitations may not apply to other virtualization solutions provided by Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP). Features supported by other virtualization solutions are described as such in the following paragraphs. Host system architectures RHEL 9 with KVM is not supported on any host architectures that are not listed in Recommended features in RHEL 9 virtualization . Guest operating systems KVM virtual machines (VMs) that use the following guest operating systems (OSs) are not supported on a RHEL 9 host: Windows 8.1 and earlier Windows Server 2012 R2 and earlier macOS Solaris for x86 systems Any operating system released before 2009 For a list of guest OSs supported on RHEL hosts and other virtualization solutions, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, OpenShift Virtualization and Red Hat Enterprise Linux with KVM . Creating VMs in containers Red Hat does not support creating KVM virtual machines in any type of container that includes the elements of the RHEL 9 hypervisor (such as the QEMU emulator or the libvirt package). To create VMs in containers, Red Hat recommends using the OpenShift Virtualization offering. Specific virsh commands and options Not every parameter that you can use with the virsh utility has been tested and certified as production-ready by Red Hat. Therefore, any virsh commands and options that are not explicitly recommended by Red Hat documentation may not work correctly, and Red Hat recommends not using them in your production environment. 
Notably, unsupported virsh commands include the following: virsh iface-* commands, such as virsh iface-start and virsh iface-destroy virsh blkdeviotune virsh snapshot-* commands, such as virsh snapshot-create and virsh snapshot-revert The QEMU command line QEMU is an essential component of the virtualization architecture in RHEL 9, but it is difficult to manage manually, and improper QEMU configurations might cause security vulnerabilities. Therefore, using qemu-* command-line utilities, such as qemu-kvm , is not supported by Red Hat. Instead, use libvirt utilities, such as virt-install , virt-xml , and supported virsh commands, as these orchestrate QEMU according to the best practices. However, the qemu-img utility is supported for management of virtual disk images . vCPU hot unplug Removing a virtual CPU (vCPU) from a running VM, also referred to as a vCPU hot unplug, is not supported in RHEL 9. Memory hot unplug Removing a memory device attached to a running VM, also referred to as a memory hot unplug, is unsupported in RHEL 9. QEMU-side I/O throttling Using the virsh blkdeviotune utility to configure maximum input and output levels for operations on virtual disk, also known as QEMU-side I/O throttling, is not supported in RHEL 9. To set up I/O throttling in RHEL 9, use virsh blkiotune . This is also known as libvirt-side I/O throttling. For instructions, see Disk I/O throttling in virtual machines . Other solutions: QEMU-side I/O throttling is also supported in RHOSP. For more information, see Red Hat Knowledgebase solutions Setting Resource Limitation on Disk and the Use Quality-of-Service Specifications section in the RHOSP Storage Guide . In addition, OpenShift Virtualization supports QEMU-side I/O throttling as well. Storage live migration Migrating a disk image of a running VM between hosts is not supported in RHEL 9. Other solutions: Storage live migration is supported in RHOSP, but with some limitations. For details, see Migrate a Volume . Internal snapshots Creating and using internal snapshots for VMs is deprecated in RHEL 9, and highly discouraged for use in production environments. Instead, use external snapshots. For details, see Support limitations for virtual machine snapshots . Other solutions: RHOSP supports live snapshots. For details, see Importing virtual machines into the overcloud . Live snapshots are also supported on OpenShift Virtualization. vHost Data Path Acceleration On RHEL 9 hosts, it is possible to configure vHost Data Path Acceleration (vDPA) for virtio devices, but Red Hat currently does not support this feature, and strongly discourages its use in production environments. vhost-user RHEL 9 does not support the implementation of a user-space vHost interface. Other solutions: vhost-user is supported in RHOSP, but only for virtio-net interfaces. For more information, see the Red Hat Knowledgebase solution virtio-net implementation and vhost user ports . OpenShift Virtualization supports vhost-user as well. S3 and S4 system power states Suspending a VM to the Suspend to RAM (S3) or Suspend to disk (S4) system power states is not supported. Note that these features are disabled by default, and enabling them will make your VM not supportable by Red Hat. Note that the S3 and S4 states are also currently not supported in any other virtualization solution provided by Red Hat. S3-PR on a multipathed vDisk SCSI3 persistent reservation (S3-PR) on a multipathed vDisk is not supported in RHEL 9. As a consequence, Windows Cluster is not supported in RHEL 9. 
virtio-crypto Using the virtio-crypto device in RHEL 9 is not supported and Red Hat strongly discourages its use. Note that virtio-crypto devices are also not supported in any other virtualization solution provided by Red Hat. virtio-multitouch-device, virtio-multitouch-pci Using the virtio-multitouch-device and virtio-multitouch-pci devices in RHEL 9 is not supported and Red Hat strongly discourages their use. Incremental live backup Configuring a VM backup that only saves VM changes since the last backup, also known as incremental live backup, is not supported in RHEL 9, and Red Hat highly discourages its use. net_failover Using the net_failover driver to set up an automated network device failover mechanism is not supported in RHEL 9. Note that net_failover is also currently not supported in any other virtualization solution provided by Red Hat. TCG QEMU and libvirt include a dynamic translation mode using the QEMU Tiny Code Generator (TCG). This mode does not require hardware virtualization support. However, TCG is not supported by Red Hat. TCG-based guests can be recognized by examining their XML configuration, for example using the virsh dumpxml command. The configuration file of a TCG guest contains the following line: <domain type='qemu'> The configuration file of a KVM guest contains the following line: <domain type='kvm'> SR-IOV InfiniBand networking devices Attaching InfiniBand networking devices to VMs using Single-root I/O virtualization (SR-IOV) is not supported. SGIO Attaching SCSI devices to VMs by using SCSI generic I/O (SGIO) is not supported on RHEL 9. To detect whether your VM has an attached SGIO device, check the VM configuration for the following lines: <disk type="block" device="lun"> <hostdev mode='subsystem' type='scsi'> Additional resources Recommended features in RHEL 9 virtualization Resource allocation limits in RHEL 9 virtualization 24.4. Resource allocation limits in RHEL 9 virtualization The following limits apply to virtualized resources that can be allocated to a single KVM virtual machine (VM) on a Red Hat Enterprise Linux 9 (RHEL 9) host. Important Many of these limitations do not apply to other virtualization solutions provided by Red Hat, such as OpenShift Virtualization or Red Hat OpenStack Platform (RHOSP). Maximum vCPUs per VM For the maximum amount of vCPUs and memory that is supported on a single VM running on a RHEL 9 host, see: Virtualization limits for Red Hat Enterprise Linux with KVM PCI devices per VM RHEL 9 supports 32 PCI device slots per VM bus, and 8 PCI functions per device slot. This gives a theoretical maximum of 256 PCI functions per bus when multi-function capabilities are enabled in the VM, and no PCI bridges are used. Each PCI bridge adds a new bus, potentially enabling another 256 device addresses. However, some buses do not make all 256 device addresses available for the user; for example, the root bus has several built-in devices occupying slots. Virtualized IDE devices KVM is limited to a maximum of 4 virtualized IDE devices per VM. 24.5. Supported disk image formats To run a virtual machine (VM) on RHEL, you must use a disk image with a supported format. You can also convert certain unsupported disk images to a supported format. Supported disk image formats for VMs You can use disk images that use the following formats to run VMs in RHEL: qcow2 - Provides certain additional features, such as compression. raw - Might provide better performance. luks - Disk images encrypted by using the Linux Unified Key Setup (LUKS) specification.
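If you are not sure which format an existing disk image uses, you can usually check it with the qemu-img utility, which is supported for virtual disk image management. The following is a minimal sketch; the image path is only an example: # qemu-img info /var/lib/libvirt/images/example-vm.qcow2 The file format field in the output reports the detected format, such as qcow2 or raw.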
Supported disk image formats for conversion If required, you can convert your disk images between the raw and qcow2 formats by using the qemu-img convert command . If you require converting a vmdk disk image to a raw or qcow2 format, convert the VM that uses the disk to KVM by using the virt-v2v utility . To convert other disk image formats to raw or qcow2 , you can use the qemu-img convert command . For a list of formats that work with this command, see the QEMU documentation . Note that in most cases, converting the disk image format of a non-KVM virtual machine to qcow2 or raw is not sufficient for the VM to correctly run on RHEL KVM. In addition to converting the disk image, corresponding drivers must be installed and configured in the guest operating system of the VM. For supported hypervisor conversion, use the virt-v2v utility. Additional resources Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 Converting between virtual disk image formats 24.6. How virtualization on IBM Z differs from AMD64 and Intel 64 KVM virtualization in RHEL 9 on IBM Z systems differs from KVM on AMD64 and Intel 64 systems in the following: PCI and USB devices Virtual PCI and USB devices are not supported on IBM Z. This also means that virtio- * -pci devices are unsupported, and virtio- * -ccw devices should be used instead. For example, use virtio-net-ccw instead of virtio-net-pci . Note that direct attachment of PCI devices, also known as PCI passthrough, is supported. Supported guest operating system Red Hat only supports VMs hosted on IBM Z if they use RHEL 7, 8, or 9 as their guest operating system. Device boot order IBM Z does not support the <boot dev=' device '> XML configuration element. To define device boot order, use the <boot order=' number '> element in the <devices> section of the XML. Note Using <boot order=' number '> for boot order management is recommended on all host architectures. In addition, you can select the required boot entry by using the architecture-specific loadparm attribute in the <boot> element. For example, the following determines that the disk should be used first in the boot sequence and if a Linux distribution is available on that disk, it will select the second boot entry: <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/path/to/qcow2'/> <target dev='vda' bus='virtio'/> <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/> <boot order='1' loadparm='2'/> </disk> Memory hot plug Adding memory to a running VM is not possible on IBM Z. Note that removing memory from a running VM ( memory hot unplug ) is also not possible on IBM Z, as well as on AMD64 and Intel 64. NUMA topology Non-Uniform Memory Access (NUMA) topology for CPUs is not supported by libvirt on IBM Z. Therefore, tuning vCPU performance by using NUMA is not possible on these systems. GPU devices Assigning GPU devices is not supported on IBM Z systems. vfio-ap VMs on an IBM Z host can use the vfio-ap cryptographic device passthrough, which is not supported on any other architecture. vfio-ccw VMs on an IBM Z host can use the vfio-ccw disk device passthrough, which is not supported on any other architecture. SMBIOS SMBIOS configuration is not available on IBM Z. Watchdog devices If using watchdog devices in your VM on an IBM Z host, use the diag288 model. 
For example: <devices> <watchdog model='diag288' action='poweroff'/> </devices> kvm-clock The kvm-clock service is specific to AMD64 and Intel 64 systems, and does not have to be configured for VM time management on IBM Z. v2v and p2v The virt-v2v and virt-p2v utilities are supported only on the AMD64 and Intel 64 architecture, and are not provided on IBM Z. Migrations To successfully migrate to a later host model (for example from IBM z14 to z15), or to update the hypervisor, use the host-model CPU mode. The host-passthrough and maximum CPU modes are not recommended, as they are generally not migration-safe. If you want to specify an explicit CPU model in the custom CPU mode, follow these guidelines: Do not use CPU models that end with -base . Do not use the qemu , max or host CPU model. To successfully migrate to an older host model (such as from z15 to z14), or to an earlier version of QEMU, KVM, or the RHEL kernel, use the CPU type of the oldest available host model without -base at the end. If you have both the source host and the destination host running, you can instead use the virsh hypervisor-cpu-baseline command on the destination host to obtain a suitable CPU model. For details, see Verifying host CPU compatibility for virtual machine migration . For more information about supported machine types in RHEL 9, see Recommended features in RHEL 9 virtualization . PXE installation and booting When using PXE to run a VM on IBM Z, a specific configuration is required for the pxelinux.cfg/default file. For example: Secure Execution You can boot a VM with a prepared secure guest image by defining <launchSecurity type="s390-pv"/> in the XML configuration of the VM. This encrypts the VM's memory to protect it from unwanted access by the hypervisor. Note that the following features are not supported when running a VM in secure execution mode: Device passthrough by using vfio Obtaining memory information by using virsh domstats and virsh memstat The memballoon and virtio-rng virtual devices Memory backing by using huge pages Live and non-live VM migrations Saving and restoring VMs VM snapshots, including memory snapshots (using the --memspec option) Full memory dumps. Instead, specify the --memory-only option for the virsh dump command. 248 or more vCPUs. The vCPU limit for secure guests is 247. Additional resources An overview of virtualization features support across architectures 24.7. How virtualization on ARM 64 differs from AMD64 and Intel 64 KVM virtualization in RHEL 9 on ARM 64 systems (also known as AArch64) is different from KVM on AMD64 and Intel 64 systems in several aspects. These include, but are not limited to, the following: Guest operating systems The only guest operating system currently supported on ARM 64 virtual machines (VMs) is RHEL 9. vCPU hot plug and hot unplug Attaching a virtual CPU (vCPU) to a running VM, also referred to as a vCPU hot plug, is currently not supported on ARM 64 hosts. In addition, like on AMD64 and Intel 64 hosts, removing a vCPU from a running VM (vCPU hot unplug), is not supported on ARM 64. SecureBoot The SecureBoot feature is not available on ARM 64 systems. Migration Migrating VMs between ARM 64 hosts is currently not supported. Saving and restoring VMs Saving and restoring a VM is currently unsupported on an ARM 64 host. Memory page sizes ARM 64 currently supports running VMs with 64 KB or 4 KB memory page sizes, however both the host and the guest must use the same memory page size. 
Configurations where host and guest have different memory page sizes are not supported. By default, RHEL 9 uses a 4 KB memory page size. If you want to run a VM with a 64 KB memory page size, your host must be using a kernel with 64 KB memory page size . When creating the VM, you must install it with the kernel-64k package , for example by including the following parameter in the kickstart file: %packages -kernel kernel-64k %end Huge pages ARM 64 hosts with 64 KB memory page size support huge memory pages with the following sizes: 2 MB 512 MB 16 GB When you use transparent huge pages (THP) on an ARM 64 host with 64 KB memory page size, it supports only 512 MB huge pages. ARM 64 hosts with 4 KB memory page size support huge memory pages with the following sizes: 64 KB 2 MB 32 MB 1024 MB When you use transparent huge pages (THP) on an ARM 64 host with 4 KB memory page size, it supports only 2 MB huge pages. SVE The ARM 64 architecture provides the Scalable Vector Extension (SVE) feature. If the host supports the feature, using SVE in your VMs improves the speed of vector mathematics computation and string operations in these VMs. The baseline level of SVE is enabled by default on host CPUs that support it. However, Red Hat recommends configuring each vector length explicitly. This ensures that the VM can only be launched on compatible hosts. To do so: Verify that your CPU has the SVE feature: If the output of this command includes sve or if its exit code is 0, your CPU supports SVE. Open the XML configuration of the VM you want to modify: Edit the <cpu> element similarly to the following: This example explicitly enables SVE vector lengths 128, 256, and 512, and explicitly disables vector length 384. CPU models VMs on ARM 64 currently only support the host-passthrough CPU model. PXE Booting in the Preboot Execution Environment (PXE) is functional but not supported, and Red Hat strongly discourages using it in production environments. If you require PXE booting, it is only possible with the virtio-net-pci network interface controller (NIC). EDK2 ARM 64 guests use UEFI firmware included in the edk2-aarch64 package, which provides a similar interface to OVMF UEFI on AMD64 and Intel 64, and implements a similar set of features. Specifically, edk2-aarch64 provides a built-in UEFI shell, but does not support the following functionality: SecureBoot Management Mode kvm-clock The kvm-clock service does not have to be configured for time management in VMs on ARM 64. Peripheral devices ARM 64 systems support a partly different set of peripheral devices than AMD64 and Intel 64 devices. Only PCIe topologies are supported. ARM 64 systems support virtio devices by using the virtio-*-pci drivers. In addition, the virtio-iommu and virtio-input devices are unsupported. The virtio-gpu driver is only supported for graphical installs. ARM 64 systems support usb-mouse and usb-tablet devices for graphical installs only. Other USB devices, USB passthrough, or USB redirect are not supported. Emulated devices The following devices are not supported on ARM 64: Emulated sound devices, such as ICH9, ICH6 or AC97. Emulated graphics cards, such as VGA cards. Emulated network devices, such as rtl8139 . GPU devices Assigning GPU devices is currently not supported on ARM 64 systems. Nested virtualization Creating nested VMs is currently not possible on ARM 64 hosts.
v2v and p2v The virt-v2v and virt-p2v utilities are only supported on the AMD64 and Intel 64 architecture and are, therefore, not provided on ARM 64. 24.8. An overview of virtualization features support in RHEL 9 The following tables provide comparative information about the support state of selected virtualization features in RHEL 9 across the available system architectures. Table 24.1. General support Intel 64 and AMD64 IBM Z ARM 64 Supported Supported Supported Table 24.2. Device hot plug and hot unplug Intel 64 and AMD64 IBM Z ARM 64 CPU hot plug Supported Supported UNSUPPORTED CPU hot unplug UNSUPPORTED UNSUPPORTED UNSUPPORTED Memory hot plug Supported UNSUPPORTED Supported Memory hot unplug UNSUPPORTED UNSUPPORTED UNSUPPORTED Peripheral device hot plug Supported Supported [a] Supported Peripheral device hot unplug Supported Supported [b] Supported [a] Requires using virtio- * -ccw devices instead of virtio- * -pci [b] Requires using virtio- * -ccw devices instead of virtio- * -pci Table 24.3. Other selected features Intel 64 and AMD64 IBM Z ARM 64 NUMA tuning Supported UNSUPPORTED Supported SR-IOV devices Supported UNSUPPORTED Supported virt-v2v and p2v Supported UNSUPPORTED UNAVAILABLE Note that some of the unsupported features are supported on other Red Hat products, such as Red Hat Virtualization and Red Hat OpenStack Platform. For more information, see Unsupported features in RHEL 9 virtualization . Additional resources Unsupported features in RHEL 9 virtualization
|
[
"virsh dumpxml VM-name | grep machine=",
"/usr/libexec/qemu-kvm -M help",
"<domain type='qemu'>",
"<domain type='kvm'>",
"<disk type=\"block\" device=\"lun\">",
"<hostdev mode='subsystem' type='scsi'>",
"<disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/path/to/qcow2'/> <target dev='vda' bus='virtio'/> <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/> <boot order='1' loadparm='2'/> </disk>",
"<devices> <watchdog model='diag288' action='poweroff'/> </devices>",
"pxelinux default linux label linux kernel kernel.img initrd initrd.img append ip=dhcp inst.repo=example.com/redhat/BaseOS/s390x/os/",
"%packages -kernel kernel-64k %end",
"grep -m 1 Features /proc/cpuinfo | grep -w sve Features: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm fcma dcpop sve",
"virsh edit vm-name",
"<cpu mode='host-passthrough' check='none'> <feature policy='require' name='sve'/> <feature policy='require' name='sve128'/> <feature policy='require' name='sve256'/> <feature policy='disable' name='sve384'/> <feature policy='require' name='sve512'/> </cpu>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/assembly_feature-support-and-limitations-in-rhel-9-virtualization_configuring-and-managing-virtualization
|
Extensions
|
Extensions OpenShift Container Platform 4.17 Working with extensions in OpenShift Container Platform using Operator Lifecycle Manager (OLM) v1. OLM v1 is a Technology Preview feature only. Red Hat OpenShift Documentation Team
|
[
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: <operator_name> spec: packageName: <package_name> installNamespace: <namespace_name> channel: <channel_name> version: <version_number>",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> channel: latest 1",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> version: \"1.11.1\" 1",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> version: \">1.11.1\" 1",
"oc apply -f <extension_name>.yaml",
"CustomResourceDefinition 'logfilemetricexporters.logging.kubernetes.io' already exists in namespace 'kubernetes-logging' and cannot be managed by operator-controller",
"Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml",
"catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json └── deprecations.yaml",
"_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }",
"#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }",
"#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !=\"\" }",
"#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }",
"schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support.",
"my-catalog └── my-operator ├── index.yaml └── deprecations.yaml",
"#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }",
"#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }",
"#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317",
"name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . docker push \"USDindexImage\"",
"registry.redhat.io/redhat/redhat-operator-index:v4.8",
"registry.redhat.io/redhat/redhat-operator-index:v4.9",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: ClusterCatalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.17 pullSecret: <pull_secret_name> pollInterval: <poll_interval_duration> 1",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: ClusterCatalog metadata: name: certified-operators spec: source: type: image image: ref: registry.redhat.io/redhat/certified-operator-index:v4.17 pullSecret: <pull_secret_name> pollInterval: 24h",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: ClusterCatalog metadata: name: community-operators spec: source: type: image image: ref: registry.redhat.io/redhat/community-operator-index:v4.17 pullSecret: <pull_secret_name> pollInterval: 24h",
"oc apply -f <catalog_name>.yaml 1",
"oc create secret generic <pull_secret_name> --from-file=.dockercfg=<file_path>/.dockercfg --type=kubernetes.io/dockercfg --namespace=openshift-catalogd",
"oc create secret generic redhat-cred --from-file=.dockercfg=/home/<username>/.dockercfg --type=kubernetes.io/dockercfg --namespace=openshift-catalogd",
"oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<file_path>/.docker/config.json --type=kubernetes.io/dockerconfigjson --namespace=openshift-catalogd",
"oc create secret generic redhat-cred --from-file=.dockerconfigjson=/home/<username>/.docker/config.json --type=kubernetes.io/dockerconfigjson --namespace=openshift-catalogd",
"oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<username> --docker-password=<password> --docker-email=<email> --namespace=openshift-catalogd",
"oc create secret docker-registry redhat-cred --docker-server=registry.redhat.io --docker-username=username --docker-password=password [email protected] --namespace=openshift-catalogd",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: ClusterCatalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.17 1 pullSecret: <pull_secret_name> 2 pollInterval: <poll_interval_duration> 3",
"oc apply -f redhat-operators.yaml",
"catalog.catalogd.operatorframework.io/redhat-operators created",
"oc get clustercatalog",
"NAME AGE redhat-operators 20s",
"oc describe clustercatalog",
"Name: redhat-operators Namespace: Labels: <none> Annotations: <none> API Version: catalogd.operatorframework.io/v1alpha1 Kind: ClusterCatalog Metadata: Creation Timestamp: 2024-06-10T17:34:53Z Finalizers: catalogd.operatorframework.io/delete-server-cache Generation: 1 Resource Version: 46075 UID: 83c0db3c-a553-41da-b279-9b3cddaa117d Spec: Source: Image: Pull Secret: redhat-cred Ref: registry.redhat.io/redhat/redhat-operator-index:v4.17 Type: image Status: 1 Conditions: Last Transition Time: 2024-06-10T17:35:15Z Message: Reason: UnpackSuccessful 2 Status: True Type: Unpacked Content URL: https://catalogd-catalogserver.openshift-catalogd.svc/catalogs/redhat-operators/all.json Observed Generation: 1 Phase: Unpacked 3 Resolved Source: Image: Last Poll Attempt: 2024-06-10T17:35:10Z Ref: registry.redhat.io/redhat/redhat-operator-index:v4.17 Resolved Ref: registry.redhat.io/redhat/redhat-operator-index@sha256:f2ccc079b5e490a50db532d1dc38fd659322594dcf3e653d650ead0e862029d9 4 Type: image Events: <none>",
"oc delete clustercatalog <catalog_name>",
"catalog.catalogd.operatorframework.io \"my-catalog\" deleted",
"oc get clustercatalog",
"mkdir <catalog_dir>",
"opm generate dockerfile <catalog_dir> -i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.17 1",
". 1 ├── <catalog_dir> 2 └── <catalog_dir>.Dockerfile 3",
"opm init <operator_name> \\ 1 --default-channel=preview \\ 2 --description=./README.md \\ 3 --icon=./operator-icon.svg \\ 4 --output yaml \\ 5 > <catalog_dir>/index.yaml 6",
"opm render <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --output=yaml >> <catalog_dir>/index.yaml 2",
"--- schema: olm.channel package: <operator_name> name: preview entries: - name: <operator_name>.v0.1.0 1",
"opm validate <catalog_dir>",
"echo USD?",
"0",
"podman build . -f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>",
"podman login <registry>",
"podman push <registry>/<namespace>/<catalog_image_name>:<tag>",
"opm render <registry>/<namespace>/<catalog_image_name>:<tag> -o yaml > <catalog_dir>/index.yaml",
"--- defaultChannel: release-2.7 icon: base64data: <base64_string> mediatype: image/svg+xml name: example-operator schema: olm.package --- entries: - name: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.0' - name: example-operator.v2.7.1 replaces: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.1' - name: example-operator.v2.7.2 replaces: example-operator.v2.7.1 skipRange: '>=2.6.0 <2.7.2' - name: example-operator.v2.7.3 replaces: example-operator.v2.7.2 skipRange: '>=2.6.0 <2.7.3' - name: example-operator.v2.7.4 replaces: example-operator.v2.7.3 skipRange: '>=2.6.0 <2.7.4' name: release-2.7 package: example-operator schema: olm.channel --- image: example.com/example-inc/example-operator-bundle@sha256:<digest> name: example-operator.v2.7.0 package: example-operator properties: - type: olm.gvk value: group: example-group.example.io kind: MyObject version: v1alpha1 - type: olm.gvk value: group: example-group.example.io kind: MyOtherObject version: v1beta1 - type: olm.package value: packageName: example-operator version: 2.7.0 - type: olm.bundle.object value: data: <base64_string> - type: olm.bundle.object value: data: <base64_string> relatedImages: - image: example.com/example-inc/example-related-image@sha256:<digest> name: example-related-image schema: olm.bundle ---",
"opm validate <catalog_dir>",
"podman build . -f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>",
"podman push <registry>/<namespace>/<catalog_image_name>:<tag>",
"oc -n openshift-catalogd port-forward svc/catalogd-catalogserver 8080:443",
"curl -L -k https://localhost:8080/catalogs/<catalog_name>/all.json -C - -o /<path>/<catalog_name>.json",
"curl -L -k https://localhost:8080/catalogs/redhat-operators/all.json -C - -o /home/username/catalogs/rhoc.json",
"jq -s '.[] | select(.schema == \"olm.package\") | .name' /<path>/<filename>.json",
"jq -s '.[] | select(.schema == \"olm.package\") | .name' /home/username/catalogs/rhoc.json",
"NAME AGE \"3scale-operator\" \"advanced-cluster-management\" \"amq-broker-rhel8\" \"amq-online\" \"amq-streams\" \"amq7-interconnect-operator\" \"ansible-automation-platform-operator\" \"ansible-cloud-addons-operator\" \"apicast-operator\" \"aws-efs-csi-driver-operator\" \"aws-load-balancer-operator\" \"bamoe-businessautomation-operator\" \"bamoe-kogito-operator\" \"bare-metal-event-relay\" \"businessautomation-operator\"",
"jq -c 'select(.schema == \"olm.bundle\") | {\"package\":.package, \"version\":.properties[] | select(.type == \"olm.bundle.object\").value.data | @base64d | fromjson | select(.kind == \"ClusterServiceVersion\" and (.spec.installModes[] | select(.type == \"AllNamespaces\" and .supported == true) != null) and .spec.webhookdefinitions == null).spec.version}' /<path>/<catalog_name>.json",
"{\"package\":\"3scale-operator\",\"version\":\"0.10.0-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.10.5\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.0-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.1-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.2-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.3-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.5-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.6-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.7-mas\"} {\"package\":\"3scale-operator\",\"version\":\"0.11.8-mas\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-1\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-2\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-3\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.0-opr-4\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.1-opr-1\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.1-opr-2\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.2-opr-1\"} {\"package\":\"amq-broker-rhel8\",\"version\":\"7.10.2-opr-2\"}",
"jq -s '.[] | select( .schema == \"olm.package\") | select( .name == \"<package_name>\")' /<path>/<catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.package\") | select( .name == \"openshift-pipelines-operator-rh\")' /home/username/rhoc.json",
"{ \"defaultChannel\": \"stable\", \"icon\": { \"base64data\": \"PHN2ZyB4bWxu...\" \"mediatype\": \"image/png\" }, \"name\": \"openshift-pipelines-operator-rh\", \"schema\": \"olm.package\" }",
"jq -s '.[] | select( .schema == \"olm.package\") | .name' <catalog_name>.json",
"jq -c 'select(.schema == \"olm.bundle\") | {\"package\":.package, \"version\":.properties[] | select(.type == \"olm.bundle.object\").value.data | @base64d | fromjson | select(.kind == \"ClusterServiceVersion\" and (.spec.installModes[] | select(.type == \"AllNamespaces\" and .supported == true) != null) and .spec.webhookdefinitions == null).spec.version}' <catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.package\") | select( .name == \"<package_name>\")' <catalog_name>.json",
"jq -s '.[] | select( .package == \"<package_name>\")' <catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"<package_name>\") | .name' <catalog_name>.json",
"jq -s '.[] | select( .package == \"<package_name>\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"<channel_name>\" ) | .entries | .[] | .name' <catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.channel\" ) | select ( .name == \"<channel>\") | select( .package == \"<package_name>\")' <catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.bundle\" ) | select( .package == \"<package_name>\") | .name' <catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.bundle\" ) | select ( .name == \"<bundle_name>\") | select( .package == \"<package_name>\")' <catalog_name>.json",
"apiVersion: v1 kind: ServiceAccount metadata: name: <extension>-installer namespace: <namespace>",
"apiVersion: v1 kind: ServiceAccount metadata: name: pipelines-installer namespace: pipelines",
"oc apply -f extension-service-account.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <extension>-installer-clusterrole rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"*\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pipelines-installer-clusterrole rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"*\"]",
"oc apply -f pipelines-role.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <extension>-installer-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <extension>-installer-clusterrole subjects: - kind: ServiceAccount name: <extension>-installer namespace: <namespace>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: pipelines-installer-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: pipelines-installer-clusterrole subjects: - kind: ServiceAccount name: pipelines-installer namespace: pipelines",
"oc apply -f pipelines-cluster-role-binding.yaml",
"jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"<package_name>\") | .name' /<path>/<catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"openshift-pipelines-operator-rh\") | .name' /home/username/rhoc.json",
"\"latest\" \"pipelines-1.11\" \"pipelines-1.12\" \"pipelines-1.13\" \"pipelines-1.14\"",
"jq -s '.[] | select( .package == \"<package_name>\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"<channel_name>\" ) | .entries | .[] | .name' /<path>/<catalog_name>.json",
"jq -s '.[] | select( .package == \"openshift-pipelines-operator-rh\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"latest\" ) | .entries | .[] | .name' /home/username/rhoc.json",
"\"openshift-pipelines-operator-rh.v1.12.0\" \"openshift-pipelines-operator-rh.v1.12.1\" \"openshift-pipelines-operator-rh.v1.12.2\" \"openshift-pipelines-operator-rh.v1.13.0\" \"openshift-pipelines-operator-rh.v1.13.1\" \"openshift-pipelines-operator-rh.v1.11.1\" \"openshift-pipelines-operator-rh.v1.12.0\" \"openshift-pipelines-operator-rh.v1.12.1\" \"openshift-pipelines-operator-rh.v1.12.2\" \"openshift-pipelines-operator-rh.v1.13.0\" \"openshift-pipelines-operator-rh.v1.14.1\" \"openshift-pipelines-operator-rh.v1.14.2\" \"openshift-pipelines-operator-rh.v1.14.3\" \"openshift-pipelines-operator-rh.v1.14.4\"",
"oc adm new-project <new_namespace>",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> serviceAccount: name: <service_account> channel: <channel> version: \"<version>\"",
"oc apply -f pipeline-operator.yaml",
"clusterextension.olm.operatorframework.io/pipelines-operator created",
"oc get clusterextension pipelines-operator -o yaml",
"apiVersion: v1 items: - apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"olm.operatorframework.io/v1alpha1\",\"kind\":\"ClusterExtension\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"installNamespace\":\"pipelines\",\"packageName\":\"openshift-pipelines-operator-rh\",\"serviceAccount\":{\"name\":\"pipelines-installer\"},\"pollInterval\":\"30m\"}} creationTimestamp: \"2024-06-10T17:50:51Z\" finalizers: - olm.operatorframework.io/cleanup-unpack-cache generation: 1 name: pipelines-operator resourceVersion: \"53324\" uid: c54237be-cde4-46d4-9b31-d0ec6acc19bf spec: channel: latest installNamespace: pipelines packageName: openshift-pipelines-operator-rh serviceAccount: name: pipelines-installer upgradeConstraintPolicy: Enforce status: conditions: - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: resolved to \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:dd3d18367da2be42539e5dde8e484dac3df33ba3ce1d5bcf896838954f3864ec\" observedGeneration: 1 reason: Success status: \"True\" type: Resolved - lastTransitionTime: \"2024-06-10T17:51:11Z\" message: installed from \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:dd3d18367da2be42539e5dde8e484dac3df33ba3ce1d5bcf896838954f3864ec\" observedGeneration: 1 reason: Success status: \"True\" type: Installed - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: Deprecated - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: PackageDeprecated - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: ChannelDeprecated - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: BundleDeprecated - lastTransitionTime: \"2024-06-10T17:50:58Z\" message: 'unpack successful: observedGeneration: 1 reason: UnpackSuccess status: \"True\" type: Unpacked installedBundle: name: openshift-pipelines-operator-rh.v1.14.4 version: 1.14.4 resolvedBundle: name: openshift-pipelines-operator-rh.v1.14.4 version: 1.14.4",
"jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"<package_name>\") | .name' /<path>/<catalog_name>.json",
"jq -s '.[] | select( .schema == \"olm.channel\" ) | select( .package == \"openshift-pipelines-operator-rh\") | .name' /home/username/rhoc.json",
"\"latest\" \"pipelines-1.11\" \"pipelines-1.12\" \"pipelines-1.13\" \"pipelines-1.14\"",
"jq -s '.[] | select( .package == \"<package_name>\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"<channel_name>\" ) | .entries | .[] | .name' /<path>/<catalog_name>.json",
"jq -s '.[] | select( .package == \"openshift-pipelines-operator-rh\" ) | select( .schema == \"olm.channel\" ) | select( .name == \"latest\" ) | .entries | .[] | .name' /home/username/rhoc.json",
"\"openshift-pipelines-operator-rh.v1.11.1\" \"openshift-pipelines-operator-rh.v1.12.0\" \"openshift-pipelines-operator-rh.v1.12.1\" \"openshift-pipelines-operator-rh.v1.12.2\" \"openshift-pipelines-operator-rh.v1.13.0\" \"openshift-pipelines-operator-rh.v1.14.1\" \"openshift-pipelines-operator-rh.v1.14.2\" \"openshift-pipelines-operator-rh.v1.14.3\" \"openshift-pipelines-operator-rh.v1.14.4\"",
"oc get clusterextension <operator_name> -o yaml",
"oc get clusterextension pipelines-operator -o yaml",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"olm.operatorframework.io/v1alpha1\",\"kind\":\"ClusterExtension\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"installNamespace\":\"openshift-operators\",\"packageName\":\"openshift-pipelines-operator-rh\",\"pollInterval\":\"30m\",\"version\":\"\\u003c1.12\"}} creationTimestamp: \"2024-06-11T15:55:37Z\" generation: 1 name: pipelines-operator resourceVersion: \"69776\" uid: 6a11dff3-bfa3-42b8-9e5f-d8babbd6486f spec: channel: latest installNamespace: openshift-operators packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: <1.12 status: conditions: - lastTransitionTime: \"2024-06-11T15:56:09Z\" message: installed from \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280\" observedGeneration: 1 reason: Success status: \"True\" type: Installed - lastTransitionTime: \"2024-06-11T15:55:50Z\" message: resolved to \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:e09d37bb1e754db42324fd18c1cb3e7ce77e7b7fcbf4932d0535391579938280\" observedGeneration: 1 reason: Success status: \"True\" type: Resolved - lastTransitionTime: \"2024-06-11T15:55:50Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: Deprecated - lastTransitionTime: \"2024-06-11T15:55:50Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: PackageDeprecated - lastTransitionTime: \"2024-06-11T15:55:50Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: ChannelDeprecated - lastTransitionTime: \"2024-06-11T15:55:50Z\" message: \"\" observedGeneration: 1 reason: Deprecated status: \"False\" type: BundleDeprecated installedBundle: name: openshift-pipelines-operator-rh.v1.11.1 version: 1.11.1 resolvedBundle: name: openshift-pipelines-operator-rh.v1.11.1 version: 1.11.1",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> version: \"1.12.1\" 1",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> version: \">1.11.1, <1.13\" 1",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> channel: pipelines-1.13 1",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace> channel: latest version: \"<1.13\"",
"oc apply -f pipelines-operator.yaml",
"clusterextension.olm.operatorframework.io/pipelines-operator configured",
"oc patch clusterextension/pipelines-operator -p '{\"spec\":{\"version\":\"<1.13\"}}' --type=merge",
"clusterextension.olm.operatorframework.io/pipelines-operator patched",
"oc get clusterextension pipelines-operator -o yaml",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"olm.operatorframework.io/v1alpha1\",\"kind\":\"ClusterExtension\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"installNamespace\":\"openshift-operators\",\"packageName\":\"openshift-pipelines-operator-rh\",\"pollInterval\":\"30m\",\"version\":\"\\u003c1.13\"}} creationTimestamp: \"2024-06-11T18:23:26Z\" generation: 2 name: pipelines-operator resourceVersion: \"66310\" uid: ce0416ba-13ea-4069-a6c8-e5efcbc47537 spec: channel: latest installNamespace: openshift-operators packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: <1.13 status: conditions: - lastTransitionTime: \"2024-06-11T18:23:33Z\" message: resolved to \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:814742c8a7cc7e2662598e114c35c13993a7b423cfe92548124e43ea5d469f82\" observedGeneration: 2 reason: Success status: \"True\" type: Resolved - lastTransitionTime: \"2024-06-11T18:23:52Z\" message: installed from \"registry.redhat.io/openshift-pipelines/pipelines-operator-bundle@sha256:814742c8a7cc7e2662598e114c35c13993a7b423cfe92548124e43ea5d469f82\" observedGeneration: 2 reason: Success status: \"True\" type: Installed - lastTransitionTime: \"2024-06-11T18:23:33Z\" message: \"\" observedGeneration: 2 reason: Deprecated status: \"False\" type: Deprecated - lastTransitionTime: \"2024-06-11T18:23:33Z\" message: \"\" observedGeneration: 2 reason: Deprecated status: \"False\" type: PackageDeprecated - lastTransitionTime: \"2024-06-11T18:23:33Z\" message: \"\" observedGeneration: 2 reason: Deprecated status: \"False\" type: ChannelDeprecated - lastTransitionTime: \"2024-06-11T18:23:33Z\" message: \"\" observedGeneration: 2 reason: Deprecated status: \"False\" type: BundleDeprecated installedBundle: name: openshift-pipelines-operator-rh.v1.12.2 version: 1.12.2 resolvedBundle: name: openshift-pipelines-operator-rh.v1.12.2 version: 1.12.2",
"oc get clusterextension <operator_name> -o yaml",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"olm.operatorframework.io/v1alpha1\",\"kind\":\"ClusterExtension\",\"metadata\":{\"annotations\":{},\"name\":\"pipelines-operator\"},\"spec\":{\"channel\":\"latest\",\"installNamespace\":\"openshift-operators\",\"packageName\":\"openshift-pipelines-operator-rh\",\"pollInterval\":\"30m\",\"version\":\"3.0\"}} creationTimestamp: \"2024-06-11T18:23:26Z\" generation: 3 name: pipelines-operator resourceVersion: \"71852\" uid: ce0416ba-13ea-4069-a6c8-e5efcbc47537 spec: channel: latest installNamespace: openshift-operators packageName: openshift-pipelines-operator-rh upgradeConstraintPolicy: Enforce version: \"3.0\" status: conditions: - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: 'error upgrading from currently installed version \"1.12.2\": no package \"openshift-pipelines-operator-rh\" matching version \"3.0\" found in channel \"latest\"' observedGeneration: 3 reason: ResolutionFailed status: \"False\" type: Resolved - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: installation has not been attempted as resolution failed observedGeneration: 3 reason: InstallationStatusUnknown status: Unknown type: Installed - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: Deprecated - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: PackageDeprecated - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: ChannelDeprecated - lastTransitionTime: \"2024-06-11T18:29:02Z\" message: deprecation checks have not been attempted as resolution failed observedGeneration: 3 reason: Deprecated status: Unknown type: BundleDeprecated",
"oc delete clusterextension <operator_name>",
"clusterextension.olm.operatorframework.io \"<operator_name>\" deleted",
"oc get clusterextensions",
"No resources found",
"oc get ns <operator_name>-system",
"Error from server (NotFound): namespaces \"<operator_name>-system\" not found",
"- name: example.v3.0.0 skips: [\"example.v2.0.0\"] - name: example.v2.0.0 skipRange: >=1.0.0 <2.0.0",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> version: \">=1.11, <1.13\"",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> channel: latest 1",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> version: \"1.11.1\" 1",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: pipelines-operator spec: packageName: openshift-pipelines-operator-rh installNamespace: <namespace_name> serviceAccount: name: <service_account> version: \">1.11.1\" 1",
"oc apply -f <extension_name>.yaml",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: Operator metadata: name: <operator_name> 1 spec: packageName: <package_name> 2 installNamespace: <namespace_name> serviceAccount: name: <service_account> version: <version> 3 upgradeConstraintPolicy: Ignore 4",
"oc apply -f <extension_name>.yaml",
"oc edit clusterextension <clusterextension_name>",
"apiVersion: olm.operatorframework.io/v1alpha1 kind: ClusterExtension metadata: name: clusterextension-sample spec: installNamespace: default packageName: argocd-operator version: 0.6.0 preflight: crdUpgradeSafety: disabled: true 1",
"apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: controller-gen.kubebuilder.io/version: v0.13.0 name: example.test.example.com spec: group: test.example.com names: kind: Sample listKind: SampleList plural: samples singular: sample scope: Namespaced versions: - name: v1alpha1 schema: openAPIV3Schema: properties: apiVersion: type: string kind: type: string metadata: type: object spec: type: object status: type: object pollInterval: type: string type: object served: true storage: true subresources: status: {}",
"spec: group: test.example.com names: kind: Sample listKind: SampleList plural: samples singular: sample scope: Cluster versions: - name: v1alpha1",
"validating upgrade for CRD \"test.example.com\" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. \"NoScopeChange\" validation failed: scope changed from \"Namespaced\" to \"Cluster\"",
"versions: - name: v1alpha2 schema: openAPIV3Schema: properties: apiVersion: type: string kind: type: string metadata: type: object spec: type: object status: type: object pollInterval: type: string type: object",
"validating upgrade for CRD \"test.example.com\" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. \"NoStoredVersionRemoved\" validation failed: stored version \"v1alpha1\" removed",
"versions: - name: v1alpha1 schema: openAPIV3Schema: properties: apiVersion: type: string kind: type: string metadata: type: object spec: type: object status: type: object type: object",
"validating upgrade for CRD \"test.example.com\" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. \"NoExistingFieldRemoved\" validation failed: crd/test.example.com version/v1alpha1 field/^.spec.pollInterval may not be removed",
"versions: - name: v1alpha2 schema: openAPIV3Schema: properties: apiVersion: type: string kind: type: string metadata: type: object spec: type: object status: type: object pollInterval: type: string type: object required: - pollInterval",
"validating upgrade for CRD \"test.example.com\" failed: CustomResourceDefinition test.example.com failed upgrade safety validation. \"ChangeValidator\" validation failed: version \"v1alpha1\", field \"^\": new required fields added: [pollInterval]"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/extensions/index
|
Deploying and managing OpenShift Data Foundation using Red Hat OpenStack Platform
|
Deploying and managing OpenShift Data Foundation using Red Hat OpenStack Platform Red Hat OpenShift Data Foundation 4.17 Instructions on deploying and managing OpenShift Data Foundation on Red Hat OpenStack Platform Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install and manage Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP). Important Deploying and managing OpenShift Data Foundation on Red Hat OpenStack Platform is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) using Red Hat OpenStack Platform clusters. Note Both internal and external OpenShift Data Foundation clusters are supported on Red Hat OpenStack Platform. See Planning your deployment for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the appropriate deployment process based on your requirement: Internal mode Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in internal mode Deploy standalone Multicloud Object Gateway component External mode Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in external mode Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Token authentication using KMS . 
When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Follow these steps: Create a KMIP client if one does not exist. From the user interface, select KMIP -> Client Profile -> Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP -> Registration Token -> New Registration Token . Copy the token for the next step. To register the client, navigate to KMIP -> Registered Clients -> Add Client . Specify the Name . Paste the Registration Token from the previous step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings -> Interfaces -> Add Interface . Select KMIP Key Management Interoperability Protocol and click Next . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys -> Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in Planning guide. Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Chapter 2. Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in internal mode Deploying OpenShift Data Foundation on OpenShift Container Platform in internal mode using dynamic storage devices provided by Red Hat OpenStack Platform installer-provisioned infrastructure (IPI) enables you to create internal cluster resources.
This results in internal provisioning of the base services, which helps to make additional storage classes available to applications. Ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. 
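The procedure that follows enables a Key/Value backend path, writes a policy, and creates a token on the Vault server. A minimal sketch of those vault CLI calls is shown below; the backend path and policy name odf are illustrative assumptions, so substitute the unique path you selected above.

```bash
# Enable the KV secrets engine at the chosen backend path ("odf" here).
# KV secret engine API, version 1:
vault secrets enable -path=odf kv
# KV secret engine API, version 2:
vault secrets enable -path=odf kv-v2

# Create a policy that limits access to the secrets under that path.
echo '
path "odf/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "sys/mounts" {
  capabilities = ["read"]
}' | vault policy write odf -

# Create a token that matches the policy; it is supplied later as the
# Token in the KMS connection details.
vault token create -policy=odf -format=json
```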
Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.3.1. Enabling key rotation when using KMS Security common practices require periodic encryption key rotation. You can enable key rotation when using KMS using this procedure. To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to either Namespace , StorageClass , or PersistentVolumeClaims (in order of precedence). <value> can be either @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The below examples use @weekly . Important Key rotation is only supported for RBD backed volumes. Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub . Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set to standard . Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . 
This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . 
Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify that OpenShift Data Foundation is successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 2.5.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. 
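If you prefer to cross-check from the command line rather than the console, the same pod listing can be retrieved with oc; a minimal sketch follows, where the field selector simply hides pods that are already Running or Completed.

```bash
# List all pods in the openshift-storage namespace.
oc get pods -n openshift-storage

# Show only pods that are not yet Running or Completed (Succeeded).
oc get pods -n openshift-storage \
  --field-selector=status.phase!=Running,status.phase!=Succeeded
```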
For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) ux-backend-server-* (1 pod on any storage node) ocs-client-operator-* (1 pod on any storage node) ocs-client-operator-console-* (1 pod on any storage node) ocs-provider-server-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 2.5.4.
Verifying that the specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io 2.6. Uninstalling OpenShift Data Foundation 2.6.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation . Chapter 3. Deploying OpenShift Data Foundation on Red Hat OpenStack Platform in external mode Red Hat OpenShift Data Foundation can use an externally hosted Red Hat Ceph Storage (RHCS) cluster as the storage provider on Red Hat OpenStack Platform. See Planning your deployment for more information. For instructions regarding how to install a RHCS cluster, see the installation guide . Follow these steps to deploy OpenShift Data Foundation in external mode: Install the OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. 
In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.2. Creating an OpenShift Data foundation Cluster for external mode You need to create a new OpenShift Data Foundation cluster after you install OpenShift Data Foundation operator on OpenShift Container Platform deployed on Red Hat OpenStack platform. Prerequisites Ensure the OpenShift Container Platform version is 4.17 or above before deploying OpenShift Data Foundation 4.17. OpenShift Data Foundation operator must be installed. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub . To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the lab Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Select Service Type as ODF as Self-Managed Service . Select appropriate Version from the drop down. On the Versions tab, click the Supported RHCS Compatibility tab. If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 to the latest release and is not a freshly deployed cluster, you must manually set the application type for the CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS PVC creation in external mode. For more details, see Troubleshooting CephFS PVC creation in external mode . Red Hat Ceph Storage must have Ceph Dashboard installed and configured. For more information, see Ceph Dashboard installation and access . Red Hat recommends that the external Red Hat Ceph Storage cluster has the PG Autoscaler enabled. For more information, see The placement group autoscaler section in the Red Hat Ceph Storage documentation. The external Ceph cluster should have an existing RBD pool pre-configured for use. If it does not exist, contact your Red Hat Ceph Storage administrator to create one before you move ahead with OpenShift Data Foundation deployment. Red Hat recommends to use a separate pool for each OpenShift Data Foundation cluster. Procedure Click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation -> Create Instance link of Storage Cluster. Select Mode as External . By default, Internal is selected as deployment mode. Figure 3.1. Connect to external cluster section on Create Storage Cluster form In the Connect to external cluster section, click on the Download Script link to download the python script for extracting Ceph cluster details. For extracting the Red Hat Ceph Storage (RHCS) cluster details, contact the RHCS administrator to run the downloaded python script on a Red Hat Ceph Storage node with admin key . Run the following command on the RHCS node to view the list of available arguments. Important Use python instead of python3 if the Red Hat Ceph Storage 4.x cluster is deployed on Red Hat Enterprise Linux 7.x (RHEL 7.x) cluster. Note You can also run the script from inside a MON container (containerized deployment) or from a MON node (rpm deployment). To retrieve the external cluster details from the RHCS cluster, run the following command For example: In the above example, --rbd-data-pool-name is a mandatory parameter used for providing block storage in OpenShift Data Foundation. --rgw-endpoint is optional. 
Provide this parameter if object storage is to be provisioned through Ceph Rados Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port> --monitoring-endpoint is optional. It is the IP address of the active ceph-mgr reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. -- run-as-user is an optional parameter used for providing a name for the Ceph user which is created by the script. If this parameter is not specified, a default user name client.healthchecker is created. The permissions for the new user is set as: caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool= RGW_POOL_PREFIX.rgw.meta , allow r pool= .rgw.root , allow rw pool= RGW_POOL_PREFIX.rgw.control , allow rx pool= RGW_POOL_PREFIX.rgw.log , allow x pool= RGW_POOL_PREFIX.rgw.buckets.index Example of JSON output generated using the python script: Save the JSON output to a file with .json extension Note For OpenShift Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) to be uploaded using the JSON file remain unchanged on the RHCS external cluster after the storage cluster creation. Click External cluster metadata -> Browse to select and upload the JSON file. The content of the JSON file is populated and displayed in the text box. Figure 3.2. Json file content Click Create . The Create button is enabled only after you upload the .json file. Verification steps Verify that the final Status of the installed storage cluster shows as Phase: Ready with a green tick mark. Click Operators -> Installed Operators -> Storage Cluster link to view the storage cluster installation status. Alternatively, when you are on the Operator Details tab, you can click on the Storage Cluster tab to view the status. To verify that OpenShift Data Foundation, pods and StorageClass are successfully installed, see Verifying your external mode OpenShift Data Foundation installation . 3.3. Verifying your OpenShift Data Foundation installation for external mode Use this section to verify that OpenShift Data Foundation is deployed correctly. 3.3.1. Verifying the state of the pods Click Workloads -> Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 3.1, "Pods corresponding to OpenShift Data Foundation components" Verify that the following pods are in running state: Table 3.1. 
Pods corresponding to OpenShift Data Foundation components Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) csi-addons-controller-manager-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) Note If an MDS is not deployed in the external cluster, the csi-cephfsplugin pods will not be created. rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) 3.3.2. Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that both Storage Cluster and Data Resiliency have a green tick. In the Details card, verify that the cluster information is displayed as follows. + Service Name:: OpenShift Data Foundation Cluster Name:: ocs-external-storagecluster Provider:: OpenStack Mode:: External Version:: ocs-operator-4.17.0 For more information on the health of OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 3.3.3. Verifying that the Multicloud Object Gateway is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the Multicloud Object Gateway (MCG) information is displayed. Note The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode. For more information on the health of OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation . 3.3.4. Verifying that the storage classes are created and listed Click Storage -> Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-external-storagecluster-ceph-rbd ocs-external-storagecluster-ceph-rgw ocs-external-storagecluster-cephfs openshift-storage.noobaa.io Note If MDS is not deployed in the external cluster, ocs-external-storagecluster-cephfs storage class will not be created. If RGW is not deployed in the external cluster, the ocs-external-storagecluster-ceph-rgw storage class will not be created. For more information regarding MDS and RGW, see Red Hat Ceph Storage documentation 3.3.5. Verifying that Ceph cluster is connected Run the following command to verify if the OpenShift Data Foundation cluster is connected to the external Red Hat Ceph Storage cluster. 3.3.6. 
Verifying that storage cluster is ready Run the following command to verify if the storage cluster is ready and the External option is set to true. 3.4. Uninstalling OpenShift Data Foundation 3.4.1. Uninstalling OpenShift Data Foundation from external storage system Use the steps in this section to uninstall OpenShift Data Foundation. Uninstalling OpenShift Data Foundation does not remove the RBD pool from the external cluster, or uninstall the external Red Hat Ceph Storage cluster. Uninstall Annotations Annotations on the Storage Cluster are used to change the behavior of the uninstall process. To define the uninstall behavior, the following two annotations have been introduced in the storage cluster: uninstall.ocs.openshift.io/cleanup-policy: delete uninstall.ocs.openshift.io/mode: graceful Note The uninstall.ocs.openshift.io/cleanup-policy is not applicable for external mode. The below table provides information on the different values that can used with these annotations: Table 3.2. uninstall.ocs.openshift.io uninstall annotations descriptions Annotation Value Default Behavior cleanup-policy delete Yes Rook cleans up the physical drives and the DataDirHostPath cleanup-policy retain No Rook does not clean up the physical drives and the DataDirHostPath mode graceful Yes Rook and NooBaa pauses the uninstall process until the PVCs and the OBCs are removed by the administrator/user mode forced No Rook and NooBaa proceeds with uninstall even if PVCs/OBCs provisioned using Rook and NooBaa exist respectively You can change the uninstall mode by editing the value of the annotation by using the following commands: Prerequisites Ensure that the OpenShift Data Foundation cluster is in a healthy state. The uninstall process can fail when some of the pods are not terminated successfully due to insufficient resources or nodes. In case the cluster is in an unhealthy state, contact Red Hat Customer Support before uninstalling OpenShift Data Foundation. Ensure that applications are not consuming persistent volume claims (PVCs) or object bucket claims (OBCs) using the storage classes provided by OpenShift Data Foundation. Procedure Delete the volume snapshots that are using OpenShift Data Foundation. List the volume snapshots from all the namespaces From the output of the command, identify and delete the volume snapshots that are using OpenShift Data Foundation. Delete PVCs and OBCs that are using OpenShift Data Foundation. In the default uninstall mode (graceful), the uninstaller waits till all the PVCs and OBCs that use OpenShift Data Foundation are deleted. If you wish to delete the Storage Cluster without deleting the PVCs beforehand, you may set the uninstall mode annotation to "forced" and skip this step. Doing so will result in orphan PVCs and OBCs in the system. Delete OpenShift Container Platform monitoring stack PVCs using OpenShift Data Foundation. See Removing monitoring stack from OpenShift Data Foundation Delete OpenShift Container Platform Registry PVCs using OpenShift Data Foundation. Removing OpenShift Container Platform registry from OpenShift Data Foundation Delete OpenShift Container Platform logging PVCs using OpenShift Data Foundation. Removing the cluster logging operator from OpenShift Data Foundation Delete other PVCs and OBCs provisioned using OpenShift Data Foundation. Given below is a sample script to identify the PVCs and OBCs provisioned using OpenShift Data Foundation. The script ignores the PVCs and OBCs that are used internally by OpenShift Data Foundation. 
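The sample script mentioned above is not reproduced inline here; a minimal sketch of such a script follows. The storage class names are the external-mode defaults listed in section 3.3.4 and the NooBaa PVC names treated as internal are assumptions, so adjust both to match your cluster before relying on the output.

```bash
#!/usr/bin/env bash
# List PVCs and OBCs provisioned by OpenShift Data Foundation, excluding
# the PVCs that OpenShift Data Foundation uses internally.
RBD_SC="ocs-external-storagecluster-ceph-rbd"
CEPHFS_SC="ocs-external-storagecluster-cephfs"
INTERNAL_PVCS="noobaa-db|noobaa-default-backing-store-noobaa-pvc"

echo "PVCs provisioned by OpenShift Data Foundation:"
oc get pvc --all-namespaces \
  -o=jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{" "}{.spec.storageClassName}{"\n"}{end}' \
  | grep -E "${RBD_SC}|${CEPHFS_SC}" \
  | grep -vE "${INTERNAL_PVCS}"

echo "OBCs provisioned by OpenShift Data Foundation:"
oc get obc --all-namespaces \
  -o=jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{" "}{.spec.storageClassName}{"\n"}{end}' \
  | grep -E "openshift-storage.noobaa.io|ocs-external-storagecluster-ceph-rgw"
```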
Delete the OBCs. Delete the PVCs. Ensure that you have removed any custom backing stores, bucket classes, and so on that are created in the cluster. Delete the Storage Cluster object and wait for the removal of the associated resources. Delete the namespace and wait until the deletion is complete. You will need to switch to another project if openshift-storage is the active project. For example: The project is deleted if the following command returns a NotFound error. Note While uninstalling OpenShift Data Foundation, if the namespace is not deleted completely and remains in Terminating state, perform the steps in Troubleshooting and deleting remaining resources during Uninstall to identify objects that are blocking the namespace from being terminated. Confirm all PVs provisioned using OpenShift Data Foundation are deleted. If there is any PV left in the Released state, delete it. Remove CustomResourceDefinitions . To ensure that OpenShift Data Foundation is uninstalled completely: In the OpenShift Container Platform Web Console, click Storage . Verify that OpenShift Data Foundation no longer appears under Storage. 3.4.2. Removing monitoring stack from OpenShift Data Foundation Use this section to clean up the monitoring stack from OpenShift Data Foundation. The PVCs that are created as a part of configuring the monitoring stack are in the openshift-monitoring namespace. Prerequisites PVCs are configured to use the OpenShift Container Platform monitoring stack. For information, see configuring monitoring stack . Procedure List the pods and PVCs that are currently running in the openshift-monitoring namespace. Edit the monitoring configmap . Remove any config sections that reference the OpenShift Data Foundation storage classes as shown in the following example and save it. Before editing After editing In this example, alertmanagerMain and prometheusK8s monitoring components are using the OpenShift Data Foundation PVCs. List the pods consuming the PVC. In this example, the alertmanagerMain and prometheusK8s pods that were consuming the PVCs are in the Terminating state. You can delete the PVCs once these pods are no longer using OpenShift Data Foundation PVC. Delete relevant PVCs. Make sure you delete all the PVCs that are consuming the storage classes. 3.4.3. Removing OpenShift Container Platform registry from OpenShift Data Foundation Use this section to clean up the OpenShift Container Platform registry from OpenShift Data Foundation. If you want to configure an alternative storage, see image registry The PVCs that are created as a part of configuring OpenShift Container Platform registry are in the openshift-image-registry namespace. Prerequisites The image registry should have been configured to use an OpenShift Data Foundation PVC. Procedure Edit the configs.imageregistry.operator.openshift.io object and remove the content in the storage section. Before editing After editing In this example, the PVC is called registry-cephfs-rwx-pvc , which is now safe to delete. Delete the PVC. 3.4.4. Removing the cluster logging operator from OpenShift Data Foundation Use this section to clean up the cluster logging operator from OpenShift Data Foundation. The Persistent Volume Claims (PVCs) that are created as a part of configuring the cluster logging operator are in the openshift-logging namespace. Prerequisites The cluster logging instance should have been configured to use the OpenShift Data Foundation PVCs. Procedure Remove the ClusterLogging instance in the namespace. 
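The removal step above is typically a single CLI call; a minimal sketch, assuming the ClusterLogging instance uses the conventional name instance, is:

```bash
# Delete the ClusterLogging custom resource created by the Cluster Logging
# Operator; its PVCs in openshift-logging can then be removed in the next step.
oc delete clusterlogging instance -n openshift-logging
```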
The PVCs in the openshift-logging namespace are now safe to delete. Delete the PVCs. <pvc-name> Is the name of the PVC Chapter 4. Deploy standalone Multicloud Object Gateway in internal mode Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component in internal mode, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Note Deploying standalone Multicloud Object Gateway component is not supported in external mode deployments. 4.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 4.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. 
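Section 4.1 above, like the matching operator-installation sections in the earlier chapters, refers to a command for specifying a blank node selector on the openshift-storage namespace without showing it inline. A minimal sketch of that step is given below; it assumes the namespace does not exist yet and uses the standard openshift.io/node-selector annotation.

```bash
# Create the namespace and give it an empty node selector so that the
# cluster-wide default node selector is not applied to it.
oc create namespace openshift-storage
oc annotate namespace openshift-storage openshift.io/node-selector=
```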
Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. 
Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) Chapter 5. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you to interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage -> Data Foundation -> Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the model's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. Chapter 6. Storage classes and storage pools The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create a custom storage class if you want the storage class to have a different behavior. You can create multiple storage pools which map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. Note Multiple storage classes and multiple pools are not supported for external mode OpenShift Data Foundation clusters. Note With a minimal cluster of a single device set, only two new storage classes can be created. Every storage cluster expansion allows two new additional storage classes. 6.1. Creating storage classes and pools You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and OpenShift Data Foundation cluster is in Ready state. 
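The procedure that follows drives this through the console, which creates the underlying Ceph block pool for you. For orientation, a minimal sketch of roughly what such a pool resource looks like is shown below; the pool name, replica count, and compression mode are illustrative assumptions, and the console-generated resource may differ.

```bash
cat <<'EOF' | oc apply -f -
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: custom-rbd-pool        # illustrative pool name
  namespace: openshift-storage
spec:
  replicated:
    size: 3                    # 3-way replication
  compressionMode: aggressive  # omit or set to "none" to disable compression
EOF
```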
Procedure Click Storage -> StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForFirstConsumer as the default option. If you choose the Immediate option, then the PV gets created immediately when creating the PVC. Select RBD or CephFS Provisioner as the plugin for provisioning the persistent volumes. Choose a Storage system for your workloads. Select an existing Storage Pool from the list or create a new pool. Note The 2-way replication data protection policy is only supported for the non-default RBD pool. 2-way replication can be used by creating an additional pool. To know about Data Availability and Integrity considerations for replica 2 pools, see Knowledgebase Customer Solution Article . Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select Enable Encryption checkbox. Click Create to create the storage class. 6.2. Creating a storage class for persistent volume encryption Prerequisites Based on your use case, you must ensure to configure access to KMS for one of the following: Using vaulttokens : Ensure to configure access as described in Configuring access to KMS using vaulttokens Using vaulttenantsa (Technology Preview): Ensure to configure access as described in Configuring access to KMS using vaulttenantsa Using Thales CipherTrust Manager (using KMIP): Ensure to configure access as described in Configuring access to KMS using Thales CipherTrust Manager (For users on Azure platform only) Using Azure Vault: Ensure to set up client authentication and fetch the client credentials from Azure using the following steps: Create Azure Vault. For more information, see Quickstart: Create a key vault using the Azure portal in Microsoft product documentation. Create Service Principal with certificate based authentication. For more information, see Create an Azure service principal with Azure CLI in Microsoft product documentation. Set Azure Key Vault role based access control (RBAC). For more information, see Enable Azure RBAC permissions on Key Vault . Procedure In the OpenShift Web Console, navigate to Storage -> StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . WaitForFirstConsumer is set as the default option. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com which is the plugin used for provisioning the persistent volumes. Select Storage Pool where the volume data is stored from the list or create a new pool. Select the Enable encryption checkbox. Choose one of the following options to set the KMS connection details: Choose existing KMS connection : Select an existing KMS connection from the drop-down list.
The list is populated from the the connection details available in the csi-kms-connection-details ConfigMap. Select the Provider from the drop down. Select the Key service for the given provider from the list. Create new KMS connection : This is applicable for vaulttokens and Thales CipherTrust Manager (using KMIP) only. Select one of the following Key Management Service Provider and provide the required details. Vault Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name . In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example, Address : 123.34.3.2, Port : 5696. Upload the Client Certificate , CA certificate , and Client Private Key . Enter the Unique Identifier for the key to be used for encryption and decryption, generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Azure Key Vault (Only for Azure users on Azure platform) For information about setting up client authentication and fetching the client credentials, see the Prerequisites in Creating an OpenShift Data Foundation cluster section of the Deploying OpenShift Data Foundation using Microsoft Azure guide. Enter a unique Connection name for the key management service within the project. Enter Azure Vault URL . Enter Client ID . Enter Tenant ID . Upload Certificate file in .PEM format and the certificate file must include a client certificate and a private key. Click Save . Click Create . Edit the ConfigMap to add the vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. Note vaultBackend is an optional parameters that is added to the configmap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage -> Storage Classes . Click the Storage class name -> YAML tab. Capture the encryptionKMSID being used by the storage class. Example: On the OpenShift Web Console, navigate to Workloads -> ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) -> Edit ConfigMap . Add the vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID . You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2. Example: Click Save steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims . 
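To make the vaultBackend edit described above more concrete, the fragment below shows roughly where the parameter is added; the encryptionKMSID value 1-vault and the other keys inside the entry are illustrative assumptions and will differ in your ConfigMap.

```bash
# Inspect the ConfigMap and locate the entry whose name matches the
# encryptionKMSID captured from the storage class YAML.
oc get configmap csi-kms-connection-details -n openshift-storage -o yaml

# After editing, the entry would look roughly like this; only the
# "vaultBackend" key is added, all pre-existing keys stay unchanged:
#
#   data:
#     1-vault: |-
#       {
#         ... existing connection fields ...
#         "vaultBackend": "kv-v2"
#       }
```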
Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . Chapter 7. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as image registry, monitoring, and logging. The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have plenty of storage capacity for these services. If the storage for these critical services runs out of space, the cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data sub section of Configuring persistent storage in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 7.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster as well as a source of images for workloads running on the cluster. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage -> StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage -> Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration -> Custom Resource Definitions . Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) -> Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. 
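The next step edits the storage section of the registry Config resource; an equivalent change can also be applied from the CLI. A minimal sketch, assuming the claim is named ocs4registry as in the earlier step, is:

```bash
# Point the integrated image registry at the CephFS-backed PVC.
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge \
  --patch '{"spec":{"storage":{"pvc":{"claim":"ocs4registry"}}}}'
```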
Add the following under spec: , replacing the existing storage: section if necessary. For example: Click Save . Verify that the new configuration is being used. Click Workloads -> Pods . Set the Project to openshift-image-registry . Verify that the new image-registry-* pod appears with a status of Running , and that the image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 7.2. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that comprises of Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators -> Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage -> StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads -> Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage -> Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 7.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads -> Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 7.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. 
Scroll down to Volumes and verify that the volume has a Type , ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 7.3. Persistent Volume Claims attached to prometheus-k8s-* pod 7.3. Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster will solely rely on default storage available from the nodes. You can edit the default configuration of OpenShift logging (ElasticSearch) to be backed by OpenShift Data Foundation to have OpenShift Data Foundation backed logging (Elasticsearch). Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 7.3.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example: This example specifies that each data node in the cluster will be bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard will be backed by a single replica. A copy of the shard is replicated across all the nodes and are always available and the copy can be recovered if at least two nodes exist due to the single redundancy policy. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging . Note Omission of the storage block will result in a deployment backed by default storage. For example: For more information, see Configuring cluster logging . 7.3.2. Configuring cluster logging to use OpenShift data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration -> Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances Tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. 
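The next step edits the ClusterLogging custom resource so that Elasticsearch is backed by OpenShift Data Foundation. A condensed sketch of such an instance, assuming the ocs-storagecluster-ceph-rbd storage class and the 200G / SingleRedundancy values discussed in Section 7.3.1; adjust the node count, size, and collector settings to match your environment:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      # Each data node gets a PVC of this size from the given storage class.
      storage:
        storageClassName: ocs-storagecluster-ceph-rbd
        size: 200G
      redundancyPolicy: SingleRedundancy
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd
      fluentd: {}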
In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage -> Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 7.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workload -> Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter curator time to avoid PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the following default index data retention of 5 days as a default. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide. Chapter 8. Backing OpenShift Container Platform applications with OpenShift Data Foundation You cannot directly install OpenShift Data Foundation during the OpenShift Container Platform installation. However, you can install OpenShift Data Foundation on an existing OpenShift Container Platform by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Data Foundation. Prerequisites OpenShift Container Platform is installed and you have administrative access to OpenShift Web Console. OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure In the OpenShift Web Console, perform one of the following: Click Workloads -> Deployments . In the Deployments page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. Click Workloads -> Deployment Configs . In the Deployment Configs page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment Config to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. 
In the Add Storage page, you can choose one of the following options: Click the Use existing claim option and select a suitable PVC from the drop-down list. Click the Create new claim option. Select the appropriate CephFS or RBD storage class from the Storage Class drop-down list. Provide a name for the Persistent Volume Claim. Select ReadWriteOnce (RWO) or ReadWriteMany (RWX) access mode. Note ReadOnlyMany (ROX) is deactivated as it is not supported. Select the size of the desired storage capacity. Note You can expand the block PVs but cannot reduce the storage capacity after the creation of Persistent Volume Claim. Specify the mount path and subpath (if required) for the mount path volume inside the container. Click Save . Verification steps Depending on your configuration, perform one of the following: Click Workloads -> Deployments . Click Workloads -> Deployment Configs . Set the Project as required. Click the deployment for which you added storage to display the deployment details. Scroll down to Volumes and verify that your deployment has a Type that matches the Persistent Volume Claim that you assigned. Click the Persistent Volume Claim name and verify the storage class name in the Persistent Volume Claim Overview page. Chapter 9. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra or have both roles. See the Section 9.3, "Manual creation of infrastructure nodes" section for more information. 9.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non OpenShift Data Foundation resources to be scheduled on the tainted nodes. Note Adding storage taint on nodes might require toleration handling for the other daemonset pods such as openshift-dns daemonset . For information about how to manage the tolerations, see Knowledgebase article: Openshift-dns daemonsets doesn't include toleration to run on nodes with taints . Example of the taint and labels required on infrastructure node that will be used to run OpenShift Data Foundation services: 9.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. 
Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 9.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources. To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node will only schedule OpenShift Data Foundation resources and repel any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role node-role.kubernetes.io/worker="" The removal of the node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding node-role node-role.kubernetes.io/infra="" and OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. 9.4. Taint a node from the user interface This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment. Procedure In the OpenShift Web Console, click Compute -> Nodes , and then select the node which has to be tainted. In the Details page click on Edit taints . Enter the values in the Key <nodes.openshift.ocs.io/storage>, Value <true> and in the Effect <Noschedule> field. Click Save. Verification steps Follow the steps to verify that the node has tainted successfully: Navigate to Compute -> Nodes . Select the node to verify its status, and then click on the YAML tab. In the specs section check the values of the following parameters: Additional resources For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere . Chapter 10. Scaling storage nodes To scale the storage capacity of OpenShift Data Foundation, you can do either of the following: Scale up storage nodes - Add storage capacity to the existing OpenShift Data Foundation worker nodes Scale out storage nodes - Add new worker nodes containing storage capacity 10.1. Requirements for scaling storage nodes Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Data Foundation instance: Platform requirements Storage device requirements Dynamic storage devices Capacity planning Warning Always ensure that you have plenty of storage capacity. 
If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space. Completely full storage is very difficult to recover. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. If you do run out of storage space completely, contact Red Hat Customer Support. 10.2. Scaling up storage by adding capacity to your OpenShift Data Foundation nodes on Red Hat OpenStack Platform infrastructure To increase the storage capacity in a dynamically created storage cluster on a user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators -> Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class that you wish to use to provision new storage devices. The storage class should be set to standard if you are using the default storage class generated during deployment. If you have created other storage classes, select whichever is appropriate. The Raw Capacity field shows the size set during storage class creation. The total amount of storage consumed is three times this amount, because OpenShift Data Foundation uses a replica count of 3. Click Add . To check the status, navigate to Storage -> Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected hosts.
<node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 10.3. Scaling out storage capacity by adding new nodes To scale out storage capacity, you need to perform the following: Add a new node to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs, which is the increment of 3 OSDs of the capacity selected during initial configuration. Verify that the new node is added successfully Scale up the storage capacity after the node is added 10.3.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute -> Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute -> Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads -> Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 10.3.2. Scaling up storage capacity After you add a new node to OpenShift Data Foundation, you must scale up the storage capacity as described in Scaling up storage by adding capacity . Chapter 11. Multicloud Object Gateway 11.1. About the Multicloud Object Gateway The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage. 11.2. Accessing the Multicloud Object Gateway with your applications You can access the object service with any application targeting AWS S3 or code that uses AWS S3 Software Development Kit (SDK). Applications need to specify the Multicloud Object Gateway (MCG) endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information. Prerequisites A running OpenShift Data Foundation Platform. 11.3. Adding storage resources for hybrid or Multicloud 11.3.1. Creating a new backing store Use this procedure to create a new backing store in OpenShift Data Foundation. Prerequisites Administrator access to OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Backing Store tab. Click Create Backing Store . On the Create New Backing Store page, perform the following: Enter a Backing Store Name . Select a Provider . Select a Region . Optional: Enter an Endpoint . Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view which lets you fill in the required secrets. 
For more information on creating an OCP secret, see the section Creating the secret in the Openshift Container Platform documentation. Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see the Section 11.3.2, "Adding storage resources for hybrid or Multicloud using the MCG command line interface" and follow the procedure for the addition of storage resources using a YAML. Note This menu is relevant for all providers except Google Cloud and local PVC. Enter the Target bucket . The target bucket is a container storage that is hosted on the remote cloud service. It allows you to create a connection that tells the MCG that it can use this bucket for the system. Click Create Backing Store . Verification steps In the OpenShift Web Console, click Storage -> Object Storage . Click the Backing Store tab to view all the backing stores. 11.3.2. Adding storage resources for hybrid or Multicloud using the MCG command line interface The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud provider and clusters. You must add a backing storage that can be used by the MCG. Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage: For creating an AWS-backed backingstore, see Section 11.3.2.1, "Creating an AWS-backed backingstore" For creating an IBM COS-backed backingstore, see Section 11.3.2.2, "Creating an IBM COS-backed backingstore" For creating an Azure-backed backingstore, see Section 11.3.2.3, "Creating an Azure-backed backingstore" For creating a GCP-backed backingstore, see Section 11.3.2.4, "Creating a GCP-backed backingstore" For creating a local Persistent Volume-backed backingstore, see Section 11.3.2.5, "Creating a local Persistent Volume-backed backingstore" For VMware deployments, skip to Section 11.3.3, "Creating an s3 compatible Multicloud Object Gateway backingstore" for further instructions. 11.3.2.1. Creating an AWS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> Supply and encode your own AWS access key ID and secret access key using Base64, and use the results for <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . <backingstore-secret-name> The name of the backingstore secret created in the step. Apply the following YAML for a specific backing store: <bucket-name> The existing AWS bucket name. <backingstore-secret-name> The name of the backingstore secret created in the step. 11.3.2.2. 
Creating an IBM COS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , and <IBM COS ENDPOINT> An IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. To generate the above keys on IBM cloud, you must include HMAC credentials while creating the service credentials for your target bucket. <bucket-name> An existing IBM bucket name. This argument indicates MCG about the bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using an YAML Create a secret with the credentials: <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> Provide and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> The name of the backingstore secret. Apply the following YAML for a specific backing store: <bucket-name> an existing IBM COS bucket name. This argument indicates to MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <endpoint> A regional endpoint that corresponds to the location of the existing IBM bucket name. This argument indicates to MCG about the endpoint to use for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 11.3.2.3. Creating an Azure-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> An AZURE account key and account name you created for this purpose. <blob container name> An existing Azure blob container name. This argument indicates to MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64> Supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> A unique name of backingstore secret. Apply the following YAML for a specific backing store: <blob-container-name> An existing Azure blob container name. This argument indicates to the MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> with the name of the secret created in the step. 11.3.2.4. 
Creating a GCP-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> Name of the backingstore. <PATH TO GCP PRIVATE KEY JSON FILE> A path to your GCP private key created for this purpose. <GCP bucket name> An existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <GCP PRIVATE KEY ENCODED IN BASE64> Provide and encode your own GCP service account private key using Base64, and use the results for this attribute. <backingstore-secret-name> A unique name of the backingstore secret. Apply the following YAML for a specific backing store: <target bucket> An existing Google storage bucket. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the previous step. 11.3.2.5. Creating a local Persistent Volume-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Adding storage resources using the MCG command-line interface From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. Adding storage resources using YAML Apply the following YAML for a specific backing store: <backingstore_name> The name of the backingstore. <NUMBER OF VOLUMES> The number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. <VOLUME SIZE> Required size in GB of each volume. <CPU REQUEST> Guaranteed amount of CPU requested in CPU unit m . <MEMORY REQUEST> Guaranteed amount of memory requested. <CPU LIMIT> Maximum amount of CPU that can be consumed in CPU unit m . <MEMORY LIMIT> Maximum amount of memory that can be consumed. <LOCAL STORAGE CLASS> The local storage class name. It is recommended to use ocs-storagecluster-ceph-rbd . The output will be similar to the following: 11.3.3. Creating an s3 compatible Multicloud Object Gateway backingstore The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage's RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage's RGW. Note that when the RGW is deployed, OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically. Procedure From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. To get the <RGW ACCESS KEY> and <RGW SECRET KEY> , run the following command using your RGW user secret name: Decode the access key ID and the access key from Base64 and keep them.
Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . The output will be similar to the following: You can also create the backingstore using a YAML: Create a CephObjectStore user. This also creates a secret containing the RGW credentials: Replace <RGW-Username> and <Display-name> with a unique username and display name. Apply the following YAML for an S3-Compatible backing store: Replace <backingstore-secret-name> with the name of the secret that was created with CephObjectStore in the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . 11.3.4. Adding storage resources for hybrid and Multicloud using the user interface Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Storage Systems tab, select the storage system and then click Overview -> Object tab. Select the Multicloud Object Gateway link. Select the Resources tab in the left, highlighted below. From the list that populates, select Add Cloud Resource . Select Add new connection . Select the relevant native cloud provider or S3 compatible option and fill in the details. Select the newly created connection and map it to the existing bucket. Repeat these steps to create as many backing stores as needed. Note Resources created in NooBaa UI cannot be used by OpenShift UI or MCG CLI. 11.3.5. Creating a new bucket class Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Class. Use this procedure to create a bucket class in OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab. Click Create Bucket Class . On the Create new Bucket Class page, perform the following: Select the bucket class type and enter a bucket class name. Select the BucketClass type . Choose one of the following options: Standard : data will be consumed by a Multicloud Object Gateway (MCG), deduped, compressed and encrypted. Namespace : data is stored on the NamespaceStores without performing de-duplication, compression or encryption. By default, Standard is selected. Enter a Bucket Class Name . Click . In Placement Policy , select Tier 1 - Policy Type and click . You can choose either one of the options as per your requirements. Spread allows spreading of the data across the chosen resources. Mirror allows full duplication of the data across the chosen resources. Click Add Tier to add another policy tier. Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click . Alternatively, you can also create a new backing store . Note You need to select at least 2 backing stores when you select Policy Type as Mirror in step. Review and confirm Bucket Class settings. Click Create Bucket Class . Verification steps In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab and search the new Bucket Class. 11.3.6. 
Editing a bucket class Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button on the Openshift web console. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class . You are redirected to the YAML file, make the required changes in this file and click Save . 11.3.7. Editing backing stores for bucket class Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class. Prerequisites Administrator access to OpenShift Web Console. A bucket class. Backing stores. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class Resources . On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies. To add a backing store to the bucket class, select the name of the backing store. To remove a backing store from the bucket class, uncheck the name of the backing store. Click Save . 11.4. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. You can interact with objects in a namespace bucket using the S3 API. See S3 API endpoints for objects in namespace buckets for more information. Note A namespace bucket can only be used if its write target is available and functional. 11.4.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Ensure that the credentials provided for the Multicloud Object Gateway (MCG) enables you to perform the AWS S3 namespace bucket operations. You can use the AWS tool, aws-cli to verify that all the operations can be performed on the target bucket. Also, the list bucket which is using this MCG account shows the target bucket. Red Hat OpenShift Data Foundation supports the following namespace bucket operations: ListBuckets ListObjects ListMultipartUploads ListObjectVersions GetObject HeadObject CopyObject PutObject CreateMultipartUpload UploadPartCopy UploadPart ListParts AbortMultipartUpload PubObjectTagging DeleteObjectTagging GetObjectTagging GetObjectAcl PutObjectAcl DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 11.4.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . 
Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 11.4.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name. You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step using the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 11.4.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). 
A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the NamespaceStores names that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 11.4.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket. 
<read-resources>s A list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC. 11.4.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface binary from the customer portal and make it executable. Note Choose either Linux(x86_64), Windows, or Mac OS. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 11.4.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites Ensure that Openshift Container Platform with OpenShift Data Foundation operator is already installed. Access to the Multicloud Object Gateway (MCG). Procedure On the OpenShift Web Console, navigate to Storage -> Object Storage -> Namespace Store tab. Click Create namespace store to create a namespacestore resources to be used in the namespace bucket. Enter a namespacestore name. 
Choose a provider and region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Enter a target bucket. Click Create . On the Namespace Store tab, verify that the newly created namespacestore is in the Ready state. Repeat steps 2 and 3 until you have created the desired number of resources. Navigate to the Bucket Class tab and click Create Bucket Class . Choose the Namespace BucketClass type radio button. Enter a BucketClass name and click Next . Choose a Namespace Policy Type for your namespace bucket, and then click Next . If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Select one Read and Write NamespaceStore which defines the read and write targets of the namespace bucket and click Next . Review your new bucket class details, and then click Create Bucket Class . Navigate to the Bucket Class tab and verify that your newly created resource is in the Ready phase. Navigate to the Object Bucket Claims tab and click Create Object Bucket Claim . Enter an ObjectBucketClaim Name for the namespace bucket. Select StorageClass as openshift-storage.noobaa.io . Select the BucketClass that you created earlier for your namespacestore from the list. By default, noobaa-default-bucket-class gets selected. Click Create . The namespace bucket is created along with an Object Bucket Claim for your namespace. Navigate to the Object Bucket Claims tab and verify that the Object Bucket Claim created is in the Bound state. Navigate to the Object Buckets tab and verify that your namespace bucket is present in the list and is in the Bound state. 11.5. Mirroring data for hybrid and Multicloud buckets You can use the simplified process of the Multicloud Object Gateway (MCG) to span data across cloud providers and clusters. Before you create a bucket class that reflects the data management policy and mirroring, you must add a backing storage that can be used by the MCG. For information, see Section 11.3, "Adding storage resources for hybrid or Multicloud" . You can set up data mirroring by using the OpenShift UI, YAML, or the MCG command-line interface. See the following sections: Section 11.5.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 11.5.2, "Creating bucket classes to mirror data using a YAML" 11.5.1. Creating bucket classes to mirror data using the MCG command-line-interface Prerequisites Ensure that you have downloaded the Multicloud Object Gateway (MCG) command-line interface. Procedure From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim to generate a new bucket that will be mirrored between two locations: 11.5.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS (a sketch is shown at the end of this section): Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Section 11.7, "Object Bucket Claim" .
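As referenced above, a sketch of a BucketClass with a mirroring placement policy, assuming the noobaa.io/v1alpha1 schema; the bucket class and backing store names are placeholders for, say, a local Ceph-backed store and an AWS-backed store:

apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: <bucket-class-name>
  namespace: openshift-storage
  labels:
    app: noobaa
spec:
  placementPolicy:
    tiers:
    # Mirror placement writes every object to all backing stores in the tier.
    - backingStores:
      - <local-backing-store>
      - <aws-backing-store>
      placement: Mirror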
11.6. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 11.6.1. Introduction to bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 11.6.2. Using bucket policies in Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Section 11.2, "Accessing the Multicloud Object Gateway with your applications" . A valid Multicloud Object Gateway user account. See Creating a user in the Multicloud Object Gateway for instructions to create a user account. Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. For example: Replace [email protected] with a valid Multicloud Object Gateway user account. Using an AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. Additional resources There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . OpenShift Data Foundation version 4.17 introduces the bucket policy elements NotPrincipal , NotAction , and NotResource . For more information on these elements, see IAM JSON policy elements reference . 11.6.3. Creating a user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Execute the following command to create an MCG user account: <noobaa-account-name> Specify the name of the new MCG user account. --allow_bucket_create Allows the user to create new buckets. --allowed_buckets Sets the user's allowed bucket list (use commas or multiple flags). --default_resource Sets the default resource. The new buckets are created on this default resource (including the future ones). --full_permission Allows this account to access all existing and future buckets. Important You need to provide permission to access at least one bucket or full permission to access all the buckets. 11.7. Object Bucket Claim An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads.
You can create an Object Bucket Claim in three ways: Section 11.7.1, "Dynamic Object Bucket Claim" Section 11.7.2, "Creating an Object Bucket Claim using the command line interface" Section 11.7.3, "Creating an Object Bucket Claim using the OpenShift Web Console" An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can't create new buckets by default. 11.7.1. Dynamic Object Bucket Claim Similar to Persistent Volumes, you can add the details of the Object Bucket claim (OBC) to your application's YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application. Note The Multicloud Object Gateway endpoints uses self-signed certificates only if OpenShift uses self-signed certificates. Using signed certificates in OpenShift automatically replaces the Multicloud Object Gateway endpoints certificates with signed certificates. Get the certificate currently used by Multicloud Object Gateway by accessing the endpoint via the browser. See Accessing the Multicloud Object Gateway with your applications for more information. Procedure Add the following lines to your application YAML: These lines are the OBC itself. Replace <obc-name> with the a unique OBC name. Replace <obc-bucket-name> with a unique bucket name for your OBC. To automate the use of the OBC add more lines to the YAML file. For example: The example is the mapping between the bucket claim result, which is a configuration map with data and a secret with the credentials. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account. Replace all instances of <obc-name> with your OBC name. Replace <your application image> with your application image. Apply the updated YAML file: Replace <yaml.file> with the name of your YAML file. To view the new configuration map, run the following: Replace obc-name with the name of your OBC. You can expect the following environment variables in the output: BUCKET_HOST - Endpoint to use in the application. BUCKET_PORT - The port available for the application. The port is related to the BUCKET_HOST . For example, if the BUCKET_HOST is https://my.example.com , and the BUCKET_PORT is 443, the endpoint for the object service would be https://my.example.com:443 . BUCKET_NAME - Requested or generated bucket name. AWS_ACCESS_KEY_ID - Access key that is part of the credentials. AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials. Important Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . The names are used so that it is compatible with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them. <obc_name> Specify the name of the object bucket claim. 11.7.2. Creating an Object Bucket Claim using the command line interface When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service. 
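As a quick, hedged reference for reading those generated resources from the command line, the following oc commands print the endpoint and credentials for a claim; <obc-name> and <app-namespace> are placeholders, and the configuration map and secret share the OBC's name.

# Endpoint details from the configuration map
oc get configmap <obc-name> -n <app-namespace> -o jsonpath='{.data.BUCKET_HOST}{"\n"}'
# Credentials from the secret, decoded from Base64
oc get secret <obc-name> -n <app-namespace> -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
oc get secret <obc-name> -n <app-namespace> -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d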
Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Use the command-line interface to generate the details of a new bucket and credentials. Run the following command: Replace <obc-name> with a unique OBC name, for example, myappobc . Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret will be created, for example, myapp-namespace . For example: The MCG command-line-interface has created the necessary configuration and has informed OpenShift about the new OBC. Run the following command to view the OBC: For example: Run the following command to view the YAML file for the new OBC: For example: Inside of your openshift-storage namespace, you can find the configuration map and the secret to use this OBC. The CM and the secret have the same name as the OBC. Run the following command to view the secret: For example: The secret gives you the S3 access credentials. Run the following command to view the configuration map: For example: The configuration map contains the S3 endpoint information for your application. 11.7.3. Creating an Object Bucket Claim using the OpenShift Web Console You can create an Object Bucket Claim (OBC) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 11.7.1, "Dynamic Object Bucket Claim" . Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage -> Object Storage -> Object Bucket Claims -> Create Object Bucket Claim . Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu: Internal mode The following storage classes, which were created after deployment, are available for use: ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG) External mode The following storage classes, which were created after deployment, are available for use: ocs-external-storagecluster-ceph-rgw uses the RGW openshift-storage.noobaa.io uses the MCG Note The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from OpenShift Data Foundation releases. Click Create . Once you create the OBC, you are redirected to its detail page. 11.7.4. Attaching an Object Bucket Claim to a deployment Once created, Object Bucket Claims (OBCs) can be attached to specific deployments. Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage -> Object Storage -> Object Bucket Claims . Click the Action menu (...) to the OBC you created. From the drop-down menu, select Attach to Deployment . Select the desired deployment from the Deployment Name list, then click Attach . 11.7.5. Viewing object buckets using the OpenShift Web Console You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. Procedure Log into the OpenShift Web Console. 
On the left navigation bar, click Storage -> Object Storage -> Object Buckets . Optional: You can also navigate to the details page of a specific OBC, and click the Resource link to view the object buckets for that OBC. Select the object bucket of which you want to see the details. Once selected, you are navigated to the Object Bucket Details page. 11.7.6. Deleting Object Bucket Claims Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage -> Object Storage -> Object Bucket Claims . Click the Action menu (...) next to the Object Bucket Claim (OBC) you want to delete. Select Delete Object Bucket Claim . Click Delete . 11.8. Caching policy for object buckets A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket. Important Cache buckets are a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . AWS S3 IBM COS 11.8.1. Creating an AWS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the namespacestore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, create a secret with the credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <namespacestore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-cache-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step.
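The exact MCG CLI syntax for this step is best confirmed with noobaa bucketclass create namespace-bucketclass cache --help . As a rough sketch based on the placeholders this procedure describes (treat the flag names as assumptions, not a definitive reference), the command might look similar to:

# Sketch only: cache bucket class backed by <backing-store>, with <namespacestore> as the hub target
noobaa -n openshift-storage bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>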
Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 11.8.2. Creating an IBM COS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, Create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <backingstore-secret-name> with the secret created in the step. Replace <namespace-secret> with the namespace used to create the secret in the step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the step. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 11.9. Scaling Multicloud Object Gateway performance by adding endpoints The Multicloud Object Gateway performance may vary from one environment to another. In some cases, specific applications require faster performance which can be easily addressed by scaling S3 endpoints. The Multicloud Object Gateway resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service 11.9.1. Scaling the Multicloud Object Gateway with storage nodes Prerequisites A running OpenShift Data Foundation cluster on OpenShift Container Platform with access to the Multicloud Object Gateway (MCG). A storage node in the MCG is a NooBaa daemon container attached to one or more Persistent Volumes (PVs) and used for local object service data storage. NooBaa daemons can be deployed on Kubernetes nodes. 
This can be done by creating a Kubernetes pool consisting of StatefulSet pods. Procedure Log in to OpenShift Web Console . From the MCG user interface, click Overview -> Add Storage Resources . In the window, click Deploy Kubernetes Pool . In the Create Pool step create the target pool for the future installed nodes. In the Configure step, configure the number of requested pods and the size of each PV. For each new pod, one PV is to be created. In the Review step, you can find the details of the new pool and select the deployment method you wish to use: local or external deployment. If local deployment is selected, the Kubernetes nodes will deploy within the cluster. If external deployment is selected, you will be provided with a YAML file to run externally. All nodes will be assigned to the pool you chose in the first step, and can be found under Resources -> Storage resources -> Resource name . 11.10. Automatic scaling of MultiCloud Object Gateway endpoints The number of MultiCloud Object Gateway (MCG) endpoints scale automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request. When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG. You can scale the Horizontal Pod Autoscaler (HPA) for noobaa-endpoint using the following oc patch command, for example: The example above sets the minCount to 3 and the maxCount to `10 . Chapter 12. Managing persistent volume claims Important Expanding PVCs is not supported for PVCs backed by OpenShift Data Foundation. 12.1. Configuring application pods to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for an application pod. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators. The default storage classes provided by OpenShift Data Foundation are available. In OpenShift Web Console, click Storage -> StorageClasses to view default storage classes. Procedure Create a Persistent Volume Claim (PVC) for the application to use. In OpenShift Web Console, click Storage -> Persistent Volume Claims . Set the Project for the application pod. Click Create Persistent Volume Claim . Specify a Storage Class provided by OpenShift Data Foundation. Specify the PVC Name , for example, myclaim . Select the required Access Mode . Note The Access Mode , Shared access (RWX) is not supported in IBM FlashSystem. For Rados Block Device (RBD), if the Access mode is ReadWriteOnce ( RWO ), select the required Volume mode . The default volume mode is Filesystem . Specify a Size as per application requirement. Click Create and wait until the PVC is in Bound status. Configure a new or existing application pod to use the new PVC. For a new application pod, perform the following steps: Click Workloads -> Pods . Create a new application pod. 
Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod. For example: For an existing application pod, perform the following steps: Click Workloads -> Deployment Configs . Search for the required deployment config associated with the application pod. Click on its Action menu (...) -> Edit Deployment Config . Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod and click Save . For example: Verify that the new configuration is being used. Click Workloads -> Pods . Set the Project for the application pod. Verify that the application pod appears with a status of Running . Click the application pod name to view pod details. Scroll down to Volumes section and verify that the volume has a Type that matches your new Persistent Volume Claim, for example, myclaim . 12.2. Viewing Persistent Volume Claim request status Use this procedure to view the status of a PVC request. Prerequisites Administrator access to OpenShift Data Foundation. Procedure Log in to OpenShift Web Console. Click Storage -> Persistent Volume Claims Search for the required PVC name by using the Filter textbox. You can also filter the list of PVCs by Name or Label to narrow down the list Check the Status column corresponding to the required PVC. Click the required Name to view the PVC details. 12.3. Reviewing Persistent Volume Claim request events Use this procedure to review and address Persistent Volume Claim (PVC) request events. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click Overview -> Block and File . Locate the Inventory card to see the number of PVCs with errors. Click Storage -> Persistent Volume Claims Search for the required PVC using the Filter textbox. Click on the PVC name and navigate to Events Address the events as required or as directed. 12.4. Dynamic provisioning 12.4.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any intimate knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. Storage plug-ins might support static provisioning, dynamic provisioning or both provisioning types. 12.4.2. Dynamic provisioning in OpenShift Data Foundation Red Hat OpenShift Data Foundation is software-defined storage that is optimised for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers. 
OpenShift Data Foundation supports a variety of storage types, including: Block storage for databases Shared file storage for continuous integration, messaging, and data aggregation Object storage for archival, backup, and media storage Version 4 uses Red Hat Ceph Storage to provide the file, block, and object storage that backs persistent volumes, and Rook.io to manage and orchestrate provisioning of persistent volumes and claims. NooBaa provides object storage, and its Multicloud Gateway allows object federation across multiple cloud environments (available as a Technology Preview). In OpenShift Data Foundation 4, the Red Hat Ceph Storage Container Storage Interface (CSI) driver for RADOS Block Device (RBD) and Ceph File System (CephFS) handles the dynamic provisioning requests. When a PVC request comes in dynamically, the CSI driver has the following options: Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph RBDs with volume mode Block . Create a PVC with ReadWriteOnce (RWO) access that is based on Ceph RBDs with volume mode Filesystem . Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on CephFS for volume mode Filesystem . Create a PVC with ReadWriteOncePod (RWOP) access that is based on CephFS,NFS and RBD. With RWOP access mode, you mount the volume as read-write by a single pod on a single node. The judgment of which driver (RBD or CephFS) to use is based on the entry in the storageclass.yaml file. 12.4.3. Available dynamic provisioning plug-ins OpenShift Container Platform provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources: Storage type Provisioner plug-in name Notes OpenStack Cinder kubernetes.io/cinder AWS Elastic Block Store (EBS) kubernetes.io/aws-ebs For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. AWS Elastic File System (EFS) Dynamic provisioning is accomplished through the EFS provisioner pod and not through a provisioner plug-in. Azure Disk kubernetes.io/azure-disk Azure File kubernetes.io/azure-file The persistent-volume-binder ServiceAccount requires permissions to create and get Secrets to store the Azure storage account and keys. GCE Persistent Disk (gcePD) kubernetes.io/gce-pd In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. VMware vSphere kubernetes.io/vsphere-volume Red Hat Virtualization csi.ovirt.org Important Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. Chapter 13. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application. You can create multiple snapshots of the same persistent volume claim (PVC). For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note You cannot schedule periodic creation of snapshots. 13.1. 
Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure to stop all IO before taking the snapshot. Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage -> Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) -> Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions -> Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage -> Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 13.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. 
From the Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage -> Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state. 13.3. Deleting volume snapshots Prerequisites For deleting a volume snapshot, the volume snapshot class which is used in that particular volume snapshot should be present. Procedure From Persistent Volume Claims page Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) -> Delete Volume Snapshot . From Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot click Action menu (...) -> Delete Volume Snapshot . Verfication steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage -> Volume Snapshots and ensure that the deleted volume snapshot is not listed. Chapter 14. Volume cloning A clone is a duplicate of an existing storage volume that is used as any standard volume. You create a clone of a volume to make a point in time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD). 14.1. Creating a clone Prerequisites Source PVC must be in Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Procedure Click Storage -> Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) -> Clone PVC . Click on the PVC that you want to clone and click Actions -> Clone PVC . Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. 
If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Enter the required size of the clone. Select the storage class in which you want to create the clone. The storage class can be any RBD storage class and it need not necessarily be the same as the parent PVC. Click Clone . You are redirected to the new PVC details page. Wait for the cloned PVC status to become Bound . The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC. Chapter 15. Replacing storage nodes You can choose one of the following procedures to replace storage nodes: Section 15.1, "Replacing operational nodes on Red Hat OpenStack Platform installer-provisioned infrastructure" Section 15.2, "Replacing failed nodes on Red Hat OpenStack Platform installer-provisioned infrastructure" 15.1. Replacing operational nodes on Red Hat OpenStack Platform installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute -> Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Mark the node as unschedulable: <node_name> Specify the name of node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute -> Machines . Search for the required machine. Besides the required machine, click Action menu (...) -> Delete Machine . Click Delete to confirm that the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute -> Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node: From the user interface For the new node, click Action Menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads -> Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 15.2. Replacing failed nodes on Red Hat OpenStack Platform installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute -> Nodes . Identify the faulty node, and click on its Machine Name . 
Click Actions -> Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining , and click Save . Click Actions -> Delete Machine , and click Delete . A new machine is automatically created, wait for new machine to start. Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute -> Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) -> Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Red Hat OpenStack Platform instance is not removed automatically, terminate the instance from Red Hat OpenStack Platform console. Verification steps Verify that the new node is present in the output: Click Workloads -> Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . Chapter 16. Replacing storage devices 16.1. Replacing operational or failed storage devices on Red Hat OpenStack Platform installer-provisioned infrastructure Use this procedure to replace storage device in OpenShift Data Foundation which is deployed on Red Hat OpenStack Platform. This procedure helps to create a new Persistent Volume Claim (PVC) on a new volume and remove the old object storage device (OSD). Procedure Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-6d77d6c7c6-m8xj6 needs to be replaced and compute-2 is the OpenShift Container platform node on which the OSD is scheduled. Note If the OSD to be replaced is healthy, the status of the pod will be Running . Scale down the OSD deployment for the OSD to be replaced. where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0 . Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Note If the rook-ceph-osd pod is in terminating state, use the force option to delete the pod. Example output: Incase, the persistent volume associated with the failed OSD fails, get the failed persistent volumes details and delete them using the following commands: Remove the old OSD from the cluster so that a new OSD can be added. Delete any old ocs-osd-removal jobs. Example output: Change to the openshift-storage project. Remove the old OSD from the cluster. You can add comma separated OSD IDs in the command to remove more than one OSD. 
(For example, FAILED_OSD_IDS=0,1,2). The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from the OSD devices that are removed from the respective OpenShift Data Foundation nodes. Get PVC name(s) of the replaced OSD(s) from the logs of ocs-osd-removal-job pod : For example: For each of the nodes identified in step #1, do the following: Create a debug pod and chroot to the host on the storage node. Find relevant device name based on the PVC names identified in the step Remove the mapped device. Note If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find PID of the process which was stuck. Terminate the process using kill command. Verify that the device name is removed. Delete the ocs-osd-removal job. Example output: Verfication steps Verify that there is a new OSD running. Example output: Verify that there is a new PVC created which is in Bound state. Example output: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in step, do the following: Create a debug pod and open a chroot environment for the selected host(s). Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s) Log in to OpenShift Web Console and view the storage dashboard. Figure 16.1. OSD status in OpenShift Container Platform storage dashboard after device replacement Chapter 17. Upgrading to OpenShift Data Foundation 17.1. Overview of the OpenShift Data Foundation update process This chapter helps you to upgrade between the minor releases and z-streams for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. You can upgrade OpenShift Data Foundation and its components, either between minor releases like 4.16 and 4.17, or between z-stream updates like 4.16.0 and 4.16.1 by enabling automatic updates (if not done so during operator installation) or performing manual updates. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. Extended Update Support (EUS) EUS to EUS upgrade in OpenShift Data Foundation is sequential and it is aligned with OpenShift upgrade. For more information, see Performing an EUS-to-EUS update and EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager . 
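Before planning any of the updates described in this chapter, it can help to record the versions currently in use. For example, assuming the default openshift-storage namespace:

# Current OpenShift Container Platform version
oc get clusterversion
# Installed operator versions, including OpenShift Data Foundation
oc get csv -n openshift-storage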
For EUS upgrade of OpenShift Container Platform and OpenShift Data Foundation, make sure that OpenShift Data Foundation is upgraded along with OpenShift Container Platform and compatibility between OpenShift Data Foundation and OpenShift Container Platform is always maintained. Example workflow of EUS upgrade: Pause the worker machine pools. Update OpenShift <4.y> -> OpenShift <4.y+1>. Update OpenShift Data Foundation <4.y> -> OpenShift Data Foundation <4.y+1>. Update OpenShift <4.y+1> -> OpenShift <4.y+2>. Update to OpenShift Data Foundation <4.y+2>. Unpause the worker machine pools. Note You can update to ODF <4.y+2> either before or after worker machine pools are unpaused. Important When you update OpenShift Data Foundation in external mode, make sure that the Red Hat Ceph Storage and OpenShift Data Foundation versions are compatible. For more information about supported Red Hat Ceph Storage versions in external mode, refer to Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Provide the required OpenShift Data Foundation version in the checker to see the supported Red Hat Ceph Storage version corresponding to the version in use. You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation. To prepare a disconnected environment for updates , see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update OpenShift Data Foundation as well as Local Storage Operator when in use. For updating between minor releases , see Updating Red Hat OpenShift Data Foundation 4.14 to 4.15 . For updating between z-stream releases , see Updating Red Hat OpenShift Data Foundation 4.15.x to 4.15.y . For updating external mode deployments , you must also perform the steps from section Updating the Red Hat OpenShift Data Foundation external secret . If you use local storage, then update the Local Storage operator . See Checking for Local Storage Operator deployments if you are unsure. Important If you have an existing setup of OpenShift Data Foundation 4.12 with disaster recovery (DR) enabled, ensure that you update all the clusters in the environment at the same time and avoid updating a single cluster. This avoids any potential issues and maintains the best compatibility. It is also important to maintain consistency across all OpenShift Data Foundation DR instances. Update considerations Review the following important considerations before you begin. The Red Hat OpenShift Container Platform version is the same as Red Hat OpenShift Data Foundation. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. To know whether your cluster was deployed in internal or external mode, refer to the knowledgebase article on How to determine if ODF cluster has storage in internal or external mode . The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, the result can be total loss of the applicative data residing on the Multicloud Object Gateway.
Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 17.2. Updating Red Hat OpenShift Data Foundation 4.16 to 4.17 This chapter helps you to upgrade between the minor releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. You must upgrade Red Hat Ceph Storage along with OpenShift Data Foundation to get new feature support, security fixes, and other bug fixes. As there is no dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by the RHCS upgrade, or vice versa. For more information about RHCS releases, see the knowledgebase solution . Important Upgrading to 4.17 directly from any version older than 4.16 is not supported. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.17.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of both Overview - Block and File and Object tabs. Green tick indicates that the storage cluster , object service and data resiliency are all healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads -> Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Prerequisite relevant only for OpenShift Data Foundation deployments on AWS using AWS Security Token Service (STS) Add another entry in the trust policy for the noobaa-core account as follows: Log in to the AWS web console where the AWS role resides using http://console.aws.amazon.com/ . Enter the IAM management tool and click Roles . Find the name of the role created for AWS STS to support Multicloud Object Gateway (MCG) authentication using the following command in the OpenShift CLI: Search for the role name that you obtained from the previous step in the tool and click on the role name. Under the role summary, click Trust relationships . In the Trusted entities tab, click Edit trust policy on the right. Under the "Action": "sts:AssumeRoleWithWebIdentity" field, there are two fields to enable access for two NooBaa service accounts noobaa and noobaa-endpoint . Add another entry for the core pod's new service account name, system:serviceaccount:openshift-storage:noobaa-core .
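The exact layout of the trust policy depends on how the role was originally created. As a hedged illustration only, the statement after the edit might resemble the following, where <ACCOUNT_ID> and <OIDC_PROVIDER> are placeholders for your AWS account ID and your cluster's OIDC provider, and are assumptions for this sketch:

{
  "Effect": "Allow",
  "Principal": {
    "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER>"
  },
  "Action": "sts:AssumeRoleWithWebIdentity",
  "Condition": {
    "StringEquals": {
      "<OIDC_PROVIDER>:sub": [
        "system:serviceaccount:openshift-storage:noobaa",
        "system:serviceaccount:openshift-storage:noobaa-endpoint",
        "system:serviceaccount:openshift-storage:noobaa-core"
      ]
    }
  }
}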
Click Update policy at the bottom right of the page. The update might take about 5 minutes to get in place. Procedure On the OpenShift Web Console, navigate to Operators -> Installed Operators . Select openshift-storage project. Click the OpenShift Data Foundation operator name. Click the Subscription tab and click the link under Update Channel . Select the stable-4.17 update channel and Save it. If the Upgrade status shows requires approval , click on requires approval . On the Install Plan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . Navigate to Operators -> Installed Operators . Select the openshift-storage project. Wait for the OpenShift Data Foundation Operator Status to change to Up to date . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Note After upgrading, if your cluster has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators -> Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview- Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. If verification steps fail, contact Red Hat Support . Important After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret . Additional Resources If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide . 17.3. Updating Red Hat OpenShift Data Foundation 4.17.x to 4.17.y This chapter helps you to upgrade between the z-stream release for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The Only difference is what gets upgraded and what's not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. 
Hence, we recommend upgrading RHCS along with OpenShift Data Foundation in order to get new feature support, security fixes, and other bug fixes. Since we do not have a strong dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. See solution to know more about RHCS releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic . If the update strategy is set to Manual then use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.17.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads -> Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators -> Installed Operators . Select openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. If the Upgrade Status shows require approval , click on requires approval link. On the InstallPlan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators -> Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy If verification steps fail, contact Red Hat Support . 17.4. Changing the update approval strategy To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy to Automatic . Changing the update approval strategy to Manual will need manual approval for each upgrade. 
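If you prefer the command line, the same setting corresponds to the subscription's installPlanApproval field. The following sketch shows the current value and switches it; the subscription name (often odf-operator) is an assumption and should be confirmed with the first command:

# List subscriptions to confirm the OpenShift Data Foundation subscription name
oc get subscription -n openshift-storage
# Switch the approval strategy; use "Automatic" or "Manual" as needed
oc patch subscription <odf-subscription-name> -n openshift-storage --type merge -p '{"spec":{"installPlanApproval":"Manual"}}'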
Procedure Navigate to Operators -> Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Go to the Subscription tab. Click the pencil icon to change the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it.
|
[
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"oc get namespace default NAME STATUS AGE default Active 5d2h",
"oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" --overwrite=true persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"python3 ceph-external-cluster-details-exporter.py --help",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> [optional arguments]",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs",
"[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"client.healthchecker\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"ceph-rbd\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}]",
"oc get cephcluster -n openshift-storage",
"NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH ocs-external-storagecluster-cephcluster 31m15s Connected Cluster connected successfully HEALTH_OK",
"oc get storagecluster -n openshift-storage",
"NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 31m15s Ready true 2021-02-29T20:43:04Z 4.17.0",
"oc annotate storagecluster ocs-external-storagecluster -n openshift-storage uninstall.ocs.openshift.io/mode=\"forced\" --overwrite storagecluster.ocs.openshift.io/ocs-external-storagecluster annotated",
"oc get volumesnapshot --all-namespaces",
"oc delete volumesnapshot <VOLUME-SNAPSHOT-NAME> -n <NAMESPACE>",
"#!/bin/bash RBD_PROVISIONER=\"openshift-storage.rbd.csi.ceph.com\" CEPHFS_PROVISIONER=\"openshift-storage.cephfs.csi.ceph.com\" NOOBAA_PROVISIONER=\"openshift-storage.noobaa.io/obc\" RGW_PROVISIONER=\"openshift-storage.ceph.rook.io/bucket\" NOOBAA_DB_PVC=\"noobaa-db\" NOOBAA_BACKINGSTORE_PVC=\"noobaa-default-backing-store-noobaa-pvc\" Find all the OCS StorageClasses OCS_STORAGECLASSES=USD(oc get storageclasses | grep -e \"USDRBD_PROVISIONER\" -e \"USDCEPHFS_PROVISIONER\" -e \"USDNOOBAA_PROVISIONER\" -e \"USDRGW_PROVISIONER\" | awk '{print USD1}') List PVCs in each of the StorageClasses for SC in USDOCS_STORAGECLASSES do echo \"======================================================================\" echo \"USDSC StorageClass PVCs and OBCs\" echo \"======================================================================\" oc get pvc --all-namespaces --no-headers 2>/dev/null | grep USDSC | grep -v -e \"USDNOOBAA_DB_PVC\" -e \"USDNOOBAA_BACKINGSTORE_PVC\" oc get obc --all-namespaces --no-headers 2>/dev/null | grep USDSC echo done",
"oc delete obc <obc name> -n <project name>",
"oc delete pvc <pvc name> -n <project-name>",
"oc delete -n openshift-storage storagesystem --all --wait=true",
"oc project default oc delete project openshift-storage --wait=true --timeout=5m",
"oc get project openshift-storage",
"oc get pv oc delete pv <pv name>",
"oc delete crd backingstores.noobaa.io bucketclasses.noobaa.io cephblockpools.ceph.rook.io cephclusters.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io noobaas.noobaa.io ocsinitializations.ocs.openshift.io storageclusters.ocs.openshift.io cephclients.ceph.rook.io cephobjectrealms.ceph.rook.io cephobjectzonegroups.ceph.rook.io cephobjectzones.ceph.rook.io cephrbdmirrors.ceph.rook.io storagesystems.odf.openshift.io --wait=true --timeout=5m",
"oc get pod,pvc -n openshift-monitoring NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Running 0 8d pod/alertmanager-main-1 3/3 Running 0 8d pod/alertmanager-main-2 3/3 Running 0 8d pod/cluster-monitoring- operator-84457656d-pkrxm 1/1 Running 0 8d pod/grafana-79ccf6689f-2ll28 2/2 Running 0 8d pod/kube-state-metrics- 7d86fb966-rvd9w 3/3 Running 0 8d pod/node-exporter-25894 2/2 Running 0 8d pod/node-exporter-4dsd7 2/2 Running 0 8d pod/node-exporter-6p4zc 2/2 Running 0 8d pod/node-exporter-jbjvg 2/2 Running 0 8d pod/node-exporter-jj4t5 2/2 Running 0 6d18h pod/node-exporter-k856s 2/2 Running 0 6d18h pod/node-exporter-rf8gn 2/2 Running 0 8d pod/node-exporter-rmb5m 2/2 Running 0 6d18h pod/node-exporter-zj7kx 2/2 Running 0 8d pod/openshift-state-metrics- 59dbd4f654-4clng 3/3 Running 0 8d pod/prometheus-adapter- 5df5865596-k8dzn 1/1 Running 0 7d23h pod/prometheus-adapter- 5df5865596-n2gj9 1/1 Running 0 7d23h pod/prometheus-k8s-0 6/6 Running 1 8d pod/prometheus-k8s-1 6/6 Running 1 8d pod/prometheus-operator- 55cfb858c9-c4zd9 1/1 Running 0 6d21h pod/telemeter-client- 78fc8fc97d-2rgfp 3/3 Running 0 8d NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-0 Bound pvc-0d519c4f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-1 Bound pvc-0d5a9825-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-alertmanager-claim-alertmanager-main-2 Bound pvc-0d6413dc-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-0 Bound pvc-0b7c19b0-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d persistentvolumeclaim/my-prometheus-claim-prometheus-k8s-1 Bound pvc-0b8aed3f-15a5-11ea-baa0-026d231574aa 40Gi RWO ocs-external-storagecluster-ceph-rbd 8d",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
". . . apiVersion: v1 data: config.yaml: | alertmanagerMain: volumeClaimTemplate: metadata: name: my-alertmanager-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-external-storagecluster-ceph-rbd prometheusK8s: volumeClaimTemplate: metadata: name: my-prometheus-claim spec: resources: requests: storage: 40Gi storageClassName: ocs-external-storagecluster-ceph-rbd kind: ConfigMap metadata: creationTimestamp: \"2019-12-02T07:47:29Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"22110\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: fd6d988b-14d7-11ea-84ff-066035b9efa8 . . .",
". . . apiVersion: v1 data: config.yaml: | kind: ConfigMap metadata: creationTimestamp: \"2019-11-21T13:07:05Z\" name: cluster-monitoring-config namespace: openshift-monitoring resourceVersion: \"404352\" selfLink: /api/v1/namespaces/openshift-monitoring/configmaps/cluster-monitoring-config uid: d12c796a-0c5f-11ea-9832-063cd735b81c . . .",
"oc get pod,pvc -n openshift-monitoring NAME READY STATUS RESTARTS AGE pod/alertmanager-main-0 3/3 Terminating 0 10h pod/alertmanager-main-1 3/3 Terminating 0 10h pod/alertmanager-main-2 3/3 Terminating 0 10h pod/cluster-monitoring-operator-84cd9df668-zhjfn 1/1 Running 0 18h pod/grafana-5db6fd97f8-pmtbf 2/2 Running 0 10h pod/kube-state-metrics-895899678-z2r9q 3/3 Running 0 10h pod/node-exporter-4njxv 2/2 Running 0 18h pod/node-exporter-b8ckz 2/2 Running 0 11h pod/node-exporter-c2vp5 2/2 Running 0 18h pod/node-exporter-cq65n 2/2 Running 0 18h pod/node-exporter-f5sm7 2/2 Running 0 11h pod/node-exporter-f852c 2/2 Running 0 18h pod/node-exporter-l9zn7 2/2 Running 0 11h pod/node-exporter-ngbs8 2/2 Running 0 18h pod/node-exporter-rv4v9 2/2 Running 0 18h pod/openshift-state-metrics-77d5f699d8-69q5x 3/3 Running 0 10h pod/prometheus-adapter-765465b56-4tbxx 1/1 Running 0 10h pod/prometheus-adapter-765465b56-s2qg2 1/1 Running 0 10h pod/prometheus-k8s-0 6/6 Terminating 1 9m47s pod/prometheus-k8s-1 6/6 Terminating 1 9m47s pod/prometheus-operator-cbfd89f9-ldnwc 1/1 Running 0 43m pod/telemeter-client-7b5ddb4489-2xfpz 3/3 Running 0 10h NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-0 Bound pvc-2eb79797-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-1 Bound pvc-2ebeee54-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-alertmanager-claim-alertmanager-main-2 Bound pvc-2ec6a9cf-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-0 Bound pvc-3162a80c-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h persistentvolumeclaim/ocs-prometheus-claim-prometheus-k8s-1 Bound pvc-316e99e2-1fed-11ea-93e1-0a88476a6a64 40Gi RWO ocs-external-storagecluster-ceph-rbd 19h",
"oc delete -n openshift-monitoring pvc <pvc-name> --wait=true --timeout=5m",
"oc edit configs.imageregistry.operator.openshift.io",
". . . storage: pvc: claim: registry-cephfs-rwx-pvc . . .",
". . . storage: emptyDir: {} . . .",
"oc delete pvc <pvc-name> -n openshift-image-registry --wait=true --timeout=5m",
"oc delete clusterlogging instance -n openshift-logging --wait=true --timeout=5m",
"oc delete pvc <pvc-name> -n openshift-logging --wait=true --timeout=5m",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"encryptionKMSID: 1-vault",
"kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"encryptionKMSType\": \"vaulttokens\", \"kmsServiceName\": \"1-vault\", [...] \"vaultBackend\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] \"vaultBackend\": \"kv\" }",
"storage: pvc: claim: <new-pvc-name>",
"storage: pvc: claim: ocs4registry",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, for example 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}",
"spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd",
"config.yaml: | openshift-storage: delete: days: 5",
"spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"",
"adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule",
"Taints: Key: node.openshift.ocs.io/storage Value: true Effect: Noschedule",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"aws-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-aws-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: awsS3: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: aws-s3",
"noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"ibm-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-ibm-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: ibmCos: endpoint: <endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: ibm-cos",
"noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"azure-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-azure-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: azureBlob: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBlobContainer: <blob-container-name> type: azure-blob",
"noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"google-gcp\" INFO[0002] ✅ Created: Secret \"backing-store-google-cloud-storage-gcp\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: googleCloudStorage: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <target bucket> type: google-cloud-storage",
"noobaa -n openshift-storage backingstore create pv-pool <backingstore_name> --num-volumes <NUMBER OF VOLUMES> --pv-size-gb <VOLUME SIZE> --request-cpu <CPU REQUEST> --request-memory <MEMORY REQUEST> --limit-cpu <CPU LIMIT> --limit-memory <MEMORY LIMIT> --storage-class <LOCAL STORAGE CLASS>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore_name> namespace: openshift-storage spec: pvPool: numVolumes: <NUMBER OF VOLUMES> resources: requests: storage: <VOLUME SIZE> cpu: <CPU REQUEST> memory: <MEMORY REQUEST> limits: cpu: <CPU LIMIT> memory: <MEMORY LIMIT> storageClass: <LOCAL STORAGE CLASS> type: pv-pool",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Exists: BackingStore \"local-mcg-storage\"",
"noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint> -n openshift-storage",
"get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"rgw-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-rgw-resource\"",
"apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: <RGW-Username> namespace: openshift-storage spec: store: ocs-storagecluster-cephobjectstore displayName: \"<Display-name>\"",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore-name> namespace: openshift-storage spec: s3Compatible: endpoint: <RGW endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage signatureVersion: v4 targetBucket: <RGW-bucket-name> type: s3-compatible",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror",
"noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror",
"additionalConfig: bucketclass: mirror-to-aws",
"{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }",
"aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file:// BucketPolicy",
"aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy",
"noobaa account create <noobaa-account-name> [--allow_bucket_create=true] [--allowed_buckets=[]] [--default_resource=''] [--full_permission=false]",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <obc-name> spec: generateBucketName: <obc-bucket-name> storageClassName: openshift-storage.noobaa.io",
"apiVersion: batch/v1 kind: Job metadata: name: testjob spec: template: spec: restartPolicy: OnFailure containers: - image: <your application image> name: test env: - name: BUCKET_NAME valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_NAME - name: BUCKET_HOST valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_HOST - name: BUCKET_PORT valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_PORT - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: <obc-name> key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: <obc-name> key: AWS_SECRET_ACCESS_KEY",
"oc apply -f <yaml.file>",
"oc get cm <obc-name> -o yaml",
"oc get secret <obc_name> -o yaml",
"noobaa obc create <obc-name> -n openshift-storage",
"INFO[0001] ✅ Created: ObjectBucketClaim \"test21obc\"",
"oc get obc -n openshift-storage",
"NAME STORAGE-CLASS PHASE AGE test21obc openshift-storage.noobaa.io Bound 38s",
"oc get obc test21obc -o yaml -n openshift-storage",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer generation: 2 labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage resourceVersion: \"40756\" selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af spec: ObjectBucketName: obc-openshift-storage-test21obc bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 generateBucketName: test21obc storageClassName: openshift-storage.noobaa.io status: phase: Bound",
"oc get -n openshift-storage secret test21obc -o yaml",
"apiVersion: v1 data: AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY= AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ== kind: Secret metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40751\" selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc uid: 65117c1c-f662-11e9-9094-0a5305de57bb type: Opaque",
"oc get -n openshift-storage cm test21obc -o yaml",
"apiVersion: v1 data: BUCKET_HOST: 10.0.171.35 BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 BUCKET_PORT: \"31242\" BUCKET_REGION: \"\" BUCKET_SUBREGION: \"\" kind: ConfigMap metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40752\" selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc uid: 651c6501-f662-11e9-9094-0a5305de57bb",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <backingstore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hubResource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>",
"oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"multiCloudGateway\": {\"endpoints\": {\"minCount\": 3,\"maxCount\": 10}}}}'",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: mypd persistentVolumeClaim: claimName: myclaim",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: mypd persistentVolumeClaim: claimName: myclaim",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide",
"rook-ceph-osd-0-6d77d6c7c6-m8xj6 0/1 CrashLoopBackOff 0 24h 10.129.0.16 compute-2 <none> <none> rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 24h 10.128.2.24 compute-0 <none> <none> rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 24h 10.130.0.18 compute-1 <none> <none>",
"osd_id_to_remove=0 oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0",
"deployment.extensions/rook-ceph-osd-0 scaled",
"oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}",
"No resources found.",
"oc delete pod rook-ceph-osd-0-6d77d6c7c6-m8xj6 --force --grace-period=0",
"warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-6d77d6c7c6-m8xj6\" force deleted",
"oc get pv oc delete pv <failed-pv-name>",
"oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}",
"job.batch \"ocs-osd-removal-0\" deleted",
"oc project openshift-storage",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'",
"2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"",
"oc debug node/<node name> chroot /host",
"sh-4.4# dmsetup ls| grep <pvc name> ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)",
"cryptsetup luksClose --debug --verbose ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt",
"ps -ef | grep crypt",
"kill -9 <PID>",
"dmsetup ls",
"oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}",
"job.batch \"ocs-osd-removal-0\" deleted",
"oc get -n openshift-storage pods -l app=rook-ceph-osd",
"rook-ceph-osd-0-5f7f4747d4-snshw 1/1 Running 0 4m47s rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 1d20h rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 1d20h",
"oc get -n openshift-storage pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-b44ebb5e-3c67-4000-998e-304752deb5a7 50Gi RWO ocs-storagecluster-ceph-rbd 6d ocs-deviceset-0-data-0-gwb5l Bound pvc-bea680cd-7278-463d-a4f6-3eb5d3d0defe 512Gi RWO standard 94s ocs-deviceset-1-data-0-w9pjm Bound pvc-01aded83-6ef1-42d1-a32e-6ca0964b96d4 512Gi RWO standard 6d ocs-deviceset-2-data-0-7bxcq Bound pvc-5d07cd6c-23cb-468c-89c1-72d07040e308 512Gi RWO standard 6d",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/_<OSD-pod-name>_",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/<node name> chroot /host",
"lsblk",
"oc get deployment noobaa-operator -o yaml -n openshift-storage | grep ROLEARN -A1 value: arn:aws:iam::123456789101:role/your-role-name-here"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/proc_scaling-up-storage-by-adding-capacity-to-your-openshift-data-foundation-nodes-on-aws-vmware-infrastructure_osp
|
Chapter 1. Introduction
|
Chapter 1. Introduction The Release Notes provide a high-level description of improvements and additions that have been implemented in Red Hat Virtualization 4.4. Red Hat Virtualization is an enterprise-grade server and desktop virtualization platform built on Red Hat Enterprise Linux. See the Product Guide for more information.
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/release_notes/introduction
|
Chapter 8. Asynchronous updates
|
Chapter 8. Asynchronous updates Security, bug fix, and enhancement updates for Ansible Automation Platform 2.4 are released as asynchronous erratas. All Ansible Automation Platform erratas are available on the Download Red Hat Ansible Automation Platform page in the Customer Portal. As a Red Hat Customer Portal user, you can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, you receive notifications through email whenever new erratas relevant to your registered systems are released. Note Red Hat Customer Portal user accounts must have systems registered and consuming Ansible Automation Platform entitlements for Ansible Automation Platform errata notification emails to be generated. The Asynchronous updates section of the release notes will be updated over time to give notes on enhancements and bug fixes for asynchronous errata releases of Ansible Automation Platform 2.4. Additional resources For more information about asynchronous errata support in Ansible Automation Platform, see Red Hat Ansible Automation Platform Life Cycle . For information about Common Vulnerabilities and Exposures (CVEs), see What is a CVE? and Red Hat CVE Database . 8.1. Ansible Automation Platform patch release March 12, 2025 The following enhancements and bug fixes have been implemented in this release of Ansible Automation Platform. 8.1.1. Enhancements 8.1.1.1. General The redhat.insights collection has been updated to 1.3.0.(AAP-40261) The ansible.controller collection has been updated to 4.5.19.(AAP-41401) 8.1.2. Bug fixes 8.1.2.1. Automation controller Fixed an issue where the Azure credentials automatically added the config_cred value, and having both fields for the client caused an error.(AAP-39847) Fixed an issue where job schedules would run at incorrect times when the schedule's start time fell within a Daylight Saving Time period.(AAP-39827) Fixed an issue where awxkit did not have service account support for the Insights credential type. The fields client_id and client_secret were missing from the credential_input_fields .(AAP-39351) Fixed an issue where the Python script action_plugins/insights.py could not handle service account OAuth.(AAP-37463) Fixed an issue where there was no service account support for the Insights credential type for Ansible Automation Platform version 2.4.(AAP-37440) 8.1.2.2. Receptor Fixed an issue where the automation mesh receptor was creating too many inotify processes, and the user would encounter a too many open files error.(AAP-22605) 8.1.2.3. RPM-based Ansible Automation Platform Fixed an issue where the required Red Hat Enterprise Linux minimum versions were not previously set to 8.8 and 9.2.(AAP-40422) 8.2. Ansible Automation Platform patch release January 29, 2025 The following enhancements and bug fixes have been implemented in this release of Ansible Automation Platform. 8.2.1. Enhancements 8.2.1.1. General The ansible.controller collection has been updated to 4.5.17.(AAP-39099) 8.2.2. Bug fixes 8.2.2.1. 
CVE With this update, the following CVEs have been addressed: CVE-2024-56326 python3x-jinja2 : Jinja has a sandbox breakout through indirect reference to format method.(AAP-38851) CVE-2024-11407 ansible-lightspeed-container : Denial-of-Service through data corruption in gRPC-C++ .(AAP-38785) CVE-2024-56374 ansible-lightspeed-container : Potential denial-of-service vulnerability in IPv6 validation.(AAP-38784) CVE-2024-56201 python3x-jinja2 : Jinja has a sandbox breakout through malicious filenames.(AAP-38332) CVE-2024-56201 python3x-jinja2 : Jinja has a sandbox breakout through malicious filenames.(AAP-38328) CVE-2024-56201 ansible-lightspeed-container : Jinja has a sandbox breakout through malicious filenames.(AAP-38078) CVE-2024-56326 ansible-lightspeed-container : Jinja has a sandbox breakout through indirect reference to format method.(AAP-38055) CVE-2024-52304 ansible-lightspeed-container : aiohttp vulnerable to request smuggling due to incorrect parsing of chunk extensions.(AAP-37995) CVE-2024-53908 automation-controller : Potential SQL injection in HasKey(lhs, rhs) on Oracle.(AAP-36768) CVE-2024-56201 automation-controller : Jinja has a sandbox breakout through malicious filenames.(AAP-38080) 8.2.2.2. Automation controller Fixed an issue where the traceback from the host_metric_summary_monthly task caused a type comparison error.(AAP-37486) Fixed an issue where the order of source inventories was not respected by the collection ansible.controller .(AAP-38511) 8.2.2.3. RPM-based Ansible Automation Platform Fixed an issue where setting the *pg_host= without any other context would result in an empty HOST section of settings.py in controller.(AAP-38030) Fixed an issue where Automation hub backup would fail when automationhub_pg_port=" .(AAP-18484) Fixed an issue where providing the database installation a custom port would break the installation of postgres .(AAP-31260) Fixed an issue where setup.sh -p <path_to_log_dir> did not work if the directory specified by the -p parameter was not writable. The setup script now warns if the provided log path does not have write permission.(AAP-18204) 8.3. Ansible Automation Platform patch release December 18, 2024 The following enhancements and bug fixes have been implemented in this release of Ansible Automation Platform. 8.3.1. Enhancements 8.3.1.1. General aap-metrics-utility has been updated to 0.4.1.(AAP-36394) The ansible.controller collection has been updated to 4.5.15.(AAP-37293) 8.3.2. Bug fixes 8.3.2.1. General With this update, the following CVEs have been addressed: CVE-2024-53908 ansible-lightspeed-container : Potential SQL injection in HasKey(lhs, rhs) on Oracle.(AAP-36767) CVE-2024-53907 ansible-lightspeed-container : Potential denial-of-service in django.utils.html.strip_tags() .(AAP-37275) 8.3.2.2. Automation controller Fixed an issue where a scheduled job with count set to a non-zero value would run unexpectedly.(AAP-37292) Fixed an issue where, when launching the job template, the named URL returned a 404 error code.(AAP-37024) Fixed an issue where temporary receptor files were not being cleaned up on nodes.(AAP-36903) 8.4. Ansible Automation Platform patch release December 3, 2024 The following enhancements and bug fixes have been implemented in this release of Ansible Automation Platform. 8.4.1. Enhancements 8.4.1.1. Ansible Automation Platform Red Hat Ansible Lightspeed has been updated to 2.4.241127. 8.4.1.2. 
Ansible Automation Platform Operator With this update you can set PostgreSQL SSL/TLS mode to verify-full or verify-ca with the proper sslrootcert configuration in the automation hub Operator. 8.4.1.3. Automation controller With this update, support was added for receiving webhooks from Bitbucket Data Center. Additionally, support was added for posting build statuses back. 8.4.1.4. RPM-based Ansible Automation Platform The 2.4-8 installer can restore a backup created with 2.4-8 or later, but cannot restore backups created with 2.4-1 to 2.4-7. The 2.4-7 installer can restore backups created with 2.4-1 to 2.4-7. Ensure that you make a backup before and after the upgrade to 2.4-8 or later. With this update, installer tasks that include CA or key information are obfuscated. 8.4.2. Bug fixes 8.4.2.1. General With this update, the following CVEs have been addressed: CVE-2024-9902 ansible-core : Ansible-core user can read or write unauthorized content. CVE-2024-8775 ansible-core : Exposure of sensitive information in Ansible vault files due to improper logging. CVE-2024-45801 automation-controller : XSS vulnerability via prototype pollution. CVE-2024-45296 automation-controller : Backtracking regular expressions causes ReDoS. CVE-2024-52304 automation-controller : aiohttp vulnerable to request smuggling due to wrong parsing of chunk extensions. 8.4.2.2. Ansible Automation Platform The Notification List no longer errors when notifications have a missing or null organization field. 8.4.2.3. Ansible Automation Platform Operator Fixed a parsing issue with the node_selector parameter so it is now correctly evaluated as a dictionary. The /var/log/tower directory is now pre-created by mounting an emptyDir so the directory exists and web logging does not throw a permission error. 8.4.2.4. Automation controller Fixed job schedules running at the wrong time when the rrule interval was set to HOURLY or MINUTELY . Fixed an issue where sensitive data was displayed in the job output. With this update, you can now save a constructed inventory when verbosity is greater than 2. Fixed an issue where unrelated jobs could be marked as a dependency of other jobs. Fixed an issue where Thycotic secret server credentials form fields were mis-matched. 8.4.2.5. Execution environments ansible.utils collection has been updated to 5.1.2. 8.4.2.6. Receptor Fixed an issue that caused a receptor runtime panic error. 8.4.2.7. RPM-based Ansible Automation Platform Fixed an issue where the metrics-utility command failed to run after updating automation controller. Fixed an issue where the dispatcher service went into FATAL status and failed to process new jobs after a database outage. With this update, the receptor data directory can now be configured using the receptor_datadir variable. Fixed an issue that caused wrong IDs for RBAC in the database following a backup restore. 8.5. RPM releases Table 8.1. 
Component versions per errata advisory Errata advisory Component versions RHSA-2024:7312 Sep 27, 2024 ansible-automation-platform-installer 2.4-7.1 ansible-core 2.15.12 Automation controller 4.5.12 Automation hub 4.9.2 Event-Driven Ansible 1.0.7 RHSA-2024:6765 Sep 18, 2024 ansible-automation-platform-installer 2.4-7.1 ansible-core 2.15.12 Automation controller 4.5.11 Automation hub 4.9.2 Event-Driven Ansible 1.0.7 RHSA-2024:6428 Sep 5, 2024 ansible-automation-platform-installer 2.4-7.1 ansible-core 2.15.12 Automation controller 4.5.10 Automation hub 4.9.2 Event-Driven Ansible 1.0.7 RHSA-2024:4522 Jul 12, 2024 ansible-automation-platform-installer 2.4-7.1 ansible-core 2.15.12 Automation controller 4.5.8 Automation hub 4.9.2 Event-Driven Ansible 1.0.7 RHSA-2024:3781 Jun 10, 2024 ansible-automation-platform-installer 2.4-7.1 ansible-core 2.15.11 Automation controller 4.5.7 Automation hub 4.9.2 Event-Driven Ansible 1.0.7 8.5.1. RHSA-2024:7312 - Security Advisory - September 27, 2024 RHSA-2024:7312 8.5.1.1. General With this update, the following CVEs have been addressed: CVE-2024-21520 - Cross-site Scripting (XSS) through break_long_headers . Packages updated: automation-controller: djangorestframework . CVE-2024-37891 - proxy-authorization request header is not stripped during cross-origin redirects. Packages updated: automation-controller: urllib3 . CVE-2024-41810 - Reflected XSS by HTML injection in redirect response. Packages updated: automation-controller . 8.5.1.2. Automation controller Fixed Galaxy credentials to be correctly ordered when assigning them by using 'ansible.controller.organization' (AAP-31398). Fixed gather analytics failure caused by missing '_unpartitioned_main_jobevent' table (AAP-31053). 8.5.2. RHSA-2024:6765 - Security Advisory - September 18, 2024 RHSA-2024:6765 8.5.2.1. General With this update, the following CVEs have been addressed: CVE-2024-7143 - RBAC permissions incorrectly assigned in tasks that create objects. Packages updated: python-pulpcore and python39-pulpcore . CVE-2024-37891 - proxy-authorization request header is not stripped during cross-origin redirects. Packages updated: python-urllib3: urllib3 . CVE-2024-24788 - malformed DNS message can cause an infinite loop. Packages updated: receptor: golang: net . CVE-2024-24790 - unexpected behavior from Is methods for IPv4-mapped IPv6 addresses. Packages updated: receptor: golang: net and receptor: golang: netip . 8.5.2.2. Automation controller Updated the shipping analytics data fallback to use the Red Hat Subscription Manager subscription credentials if analytics gathering is enabled (AAP-30228). Upgraded the 'channels-redis' library to fix Redis connection leaks (AAP-30124). 8.5.3. RHSA-2024:6428 - Security Advisory - September 05, 2024 RHSA-2024:6428 8.5.3.1. General Gunicorn python package will no longer obsolete itself when checking for or applying updates (AAP-28364). With this update, the following CVEs have been addressed: CVE-2024-42005 - potential SQL injection in QuerySet.values() and values_list() . Packages updated: automation-controller: Django , python3-django , and python39-django . CVE-2024-41991 - potential denial of service vulnerability in django.utils.html.urlize() and AdminURLFieldWidget . Packages updated: automation-controller: Django , python3-django , and python39-django . CVE-2024-41990 - potential denial of service vulnerability in django.utils.html.urlize() . Packages updated: automation-controller: Django , python3-django , and python39-django . 
CVE-2024-33663 - algorithm confusion with OpenSSH ECDSA keys and other key formats. Packages updated: automation-controller: python-jose . CVE-2024-32879 - improper handling of case sensitivity in social-auth-app-django . Packages updated: automation-controller: python-social-auth . CVE-2024-6840 - gain access to the Kubernetes API server through job execution with container group. Packages updated: automation-controller . CVE-2024-41989 - memory exhaustion in django.utils.numberformat.floatformat() . Packages updated: python3-django and python39-django . CVE-2024-39614 - Potential denial of service in django.utils.translation.get_supported_language_variant() . Packages updated: python3-django and python39-django . CVE-2024-39330 - Potential directory-traversal in django.core.files.storage.Storage.save() . Packages updated: python3-django and python39-django . CVE-2024-39329 - Username enumeration through timing difference for users with unusable passwords. Packages updated: python3-django and python39-django . CVE-2024-38875 - Potential denial of service in django.utils.html.urlize() . Packages updated: python3-django and python39-django . CVE-2024-7246 - Client communicating with a HTTP/2 proxy can poison the HPACK table between the proxy and the backend. Packages updated: python3-grpcio and python39-grpcio . CVE-2024-5569 - denial of service (infinite loop) through crafted .zip file. Packages updated: python3-zipp and python39-zipp . 8.5.3.2. Automation controller Updated the receptor to not automatically release the receptor work unit when RECEPTOR_KEEP_WORK_ON_ERROR is set to true (AAP-27635). Updated the Help link in the REST API to point to the latest API reference documentation (AAP-27573). Fixed a timeout error in the UI when trying to load the Activity Stream with a large number of activity records (AAP-26772). 8.5.3.3. Automation hub The API browser now correctly escapes JSON values (AAH-3272, AAP-14463). 8.5.4. RHSA-2024:4522 - Security Advisory - July 12, 2024 RHSA-2024:4522 8.5.4.1. General With this update, the following CVEs have been addressed: CVE-2024-34064 - Jinja accepts keys containing non-attribute characters. Packages updated: automation-controller: jinja2 . CVE-2024-28102 - malicious JWE token can cause denial of service. Packages updated: automation-controller: jwcrypto . CVE-2024-35195 - many requests to the same host ignore cert verification. Packages updated: automation-controller: requests . 8.5.4.2. Automation controller Fixed a bug where the controller does not respect DATABASES['OPTIONS'] setting, if specified (AAP-26398). Changed all uses of ImplicitRoleField to perform an on_delete=SET_NULL (AAP-25136). Fixed the HostMetric automated counter to display the correct values (AAP-25115). Added Django logout redirects (AAP-24543). Updated the dispatcher to make the database password optional in order to support PostgreSQL authentication methods that do not require them (AAP-22231). 8.5.5. RHSA-2024:3781 - Security Advisory - June 10, 2024 RHSA-2024:3781 8.5.5.1. General Added the automation-controller-cli package to the ansible-developer RPM repositories (AAP-23368). With this update, the following CVEs have been addressed: CVE-2023-45288 - unlimited number of CONTINUATION frames causes a denial of service (DoS). Packages updated: receptor: golang: net/http, x/net/http2 . CVE-2023-45290 - memory exhaustion in Request.ParseMultipartForm . Packages updated: receptor: golang: net/http . CVE-2023-49083 - null-pointer dereference when loading PKCS7 certificates. 
Packages updated: python3-cryptography and python39-cryptography . CVE-2023-50447 - arbitrary code execution with the environment parameter. Packages updated: python3-pillow and python39-pillow . CVE-2024-1135 - HTTP Request Smuggling due to improper validation of Transfer-Encoding headers. Packages updated: python3-gunicorn and python39-gunicorn . CVE-2024-21503 - regular expression denial of service (ReDoS) with the lines_with_leading_tabs_expanded() function within the strings.py file. Packages updated: python3-black and python39-black . CVE-2024-24783 - verify panics on certificates with an unknown public key algorithm. Packages updated: receptor: golang: crypto/x509 . CVE-2024-26130 - NULL pointer dereference with pkcs12.serialize_key_and_certificates when called with a non-matching certificate and private key and an hmac_hash override. Packages updated: python3-cryptography and python39-cryptography . CVE-2024-27306 - cross-site scripting (XSS) on index pages for static file handling. Packages updated: python3-aiohttp and python39-aiohttp . CVE-2024-27351 - potential ReDoS in django.utils.text.Truncator.words() . Packages updated: automation-controller: Django . CVE-2024-28219 - buffer overflow in _imagingcms.c . Packages updated: python3-pillow and python39-pillow . CVE-2024-28849 - possible credential leak. Packages updated: python3-galaxy-ng: follow-redirects , python39-galaxy-ng: follow-redirects , and automation-hub: follow-redirects . CVE-2024-30251 - DoS when trying to parse malformed POST requests. Packages updated: python3-aiohttp , python39-aiohttp , and automation-controller: aiohttp . CVE-2024-32879 - improper handling of case sensitivity in social-auth-app-django . Packages updated: python3-social-auth-app-django and python39-social-auth-app-django . CVE-2024-34064 - xmlattr filter accepts keys containing non-attribute characters. Packages updated: python3-jinja2 and python39-jinja2 . CVE-2024-35195 - additional requests to the same host ignore cert verification. Packages updated: python3-requests and python39-requests . CVE-2024-3651 - potential DoS with resource consumption through specially crafted inputs to idna.encode() . Packages updated: python3-idna and python39-idna . CVE-2024-3772 - ReDoS with a crafted email string. Packages updated: python3-pydantic , python39-pydantic , and automation-controller: python-pydantic . CVE-2024-4340 - parsing a heavily nested list leads to a DoS. Packages updated: python3-sqlparse and python39-sqlparse . CVE-2023-5752 - Mercurial configuration injection in repository revision when installing with pip . Packages updated: automation-controller: pip . 8.5.5.2. Automation controller Fixed a Redis connection leak on automation controller version 4.5.6 (AAP-24286). Fixed the #! interpreter directive, also known as shebang, for the Python uwsgitop script (AAP-22461). 8.5.5.3. Automation hub With this update, fetching a list of users for a namespace does not include group members (AAH-3121). Fixed an issue that caused a "Calculated digest does not equal passed in digest" error when syncing the community repository (AAH-3111). Fixed an issue where syncing a rh-certified repository after updating automation hub to the latest version failed (AAH-3218). 8.5.5.4. Event-Driven Ansible Added support for the SAFE_PLUGINS_FOR_PORT_FORWARD setting for eda-server to the installation program (AAP-21620). 
With this update, eda-server now opens the ports for a rulebook that has a source plugin that requires inbound connections only if that plugin is allowed in the settings (AAP-17416). Fixed an issue where an activation could not be started after reaching a limit of 2048 pods due to a wrong cleanup of volumes (AAP-21065). Fixed an issue where some activations failed due to a wrong cleanup of volumes (AAP-22132). With this release, activation-worker and worker targets now correctly stop worker services independently of other required Event-Driven Ansible services (AAP-23735). 8.5.6. RHSA-2024:1057 - Security Advisory - March 01, 2024 RHSA-2024:1057 8.5.6.1. Automation hub Displays the download count for each collection in automation hub (AAP-18298). 8.5.6.2. Event-Driven Ansible Added a parameter to control the number of running activations per Event-Driven Ansible worker service (AAP-20672). Added EDA_CSRF_TRUSTED_ORIGINS , which can be set by user input or defined based on the allowed hostnames that are determined by the installer (AAP-20244). Event-Driven Ansible installation now fails when the pre-existing automation controller version is 4.4.0 or older (AAP-20241). Added the podman_containers_conf_logs_max_size variable for containers.conf to control the max log size for Podman installations. The default value is 10 MiB (AAP-19775). Setting the Event-Driven Ansible debug flag to false now correctly disables Django debug mode (AAP-19577). XDG_RUNTIME_DIR is now defined when applying Event-Driven Ansible linger settings for Podman (AAP-19265). Fixed the Event-Driven Ansible nginx config when using a custom https port (AAP-19137). Some features in this release are classified as Developer Preview, including LDAP authentication functionality for Event-Driven Ansible. For more information about these Event-Driven Ansible Developer Preview features, see Event-Driven Ansible - Developer Preview . 8.5.7. RHSA-2024:0733 - Security Advisory - February 07, 2024 RHSA-2024:0733 8.5.7.1. Automation controller Fixed an error that caused rsyslogd to stop sending events to Splunk HTTP Collector (AAP-19069). 8.5.7.2. Automation hub Automation hub now uses system crypto-policies in nginx (AAP-18974). 8.5.7.3. Event-Driven Ansible Fixed an error that caused a manual installation failure when pinning Event-Driven Ansible to an older version (AAP-19399). 8.5.7.4. Related RPM and container releases for bundle installer RHSA-2024:0322 RHBA-2023:7863 8.5.8. RHBA-2024:0104 - Bug Fix Advisory - January 11, 2024 RHBA-2024:0104 8.5.8.1. General Fixed conditional code statements to align with changes from ansible-core issue #82295 (AAP-19099). Fixed an issue which caused the update-ca-trust handler to be skipped for execution nodes in controller (AAP-18911). Improved the error pages for automation controller (AAP-18840). Implemented libffi fix to avoid uWSGI core dumps on failed import (AAP-18196). Fixed an issue with checking the license type following an upgrade caused by an earlier incomplete upgrade (AAP-17615). Postgres certificates are now temporarily copied when checking the Postgres version for SSL mode verify-full (AAP-15374). 8.5.8.2. Related RPM and container releases for bundle installer RHSA-2023:7773 RHBA-2023:7728 RHBA-2023:7863 8.5.9. RHBA-2023:7460 - Bug Fix Advisory - November 21, 2023 RHBA-2023:7460 8.5.9.1. General Fixed an error which caused the incorrect target database to be selected when restoring Event-Driven Ansible from a backup (AAP-18151).
Postgres tasks that create users in FIPS environments now use scram-sha-256 (AAP-17516). All Event-Driven Ansible services are enabled after installation is complete (AAP-17426). Ensure all backup and restore staged files and directories are cleaned up before running a backup or restore. You must also mark the files for deletion after a backup or restore (AAP-16101). Updated nginx to 1.22 (AAP-15962). Added a task to VMs that will run the awx-manage command to pre-create events table partitions before executing pg_dump and added a variable for the default number of hours to pre-create (AAP-15920). 8.5.9.2. Event-Driven Ansible Fixed the automation controller URL check when installing Event-Driven Ansible without controller (AAP-18169). Added a separate worker queue for Event-Driven Ansible activations to not interfere with application tasks such as project updates (AAP-14743). 8.5.9.3. Related RPM and container releases for bundle installer. RHSA-2023:7517 RHBA-2023:7460 RHBA-2023:6853 RHBA-2023:6302 RHBA-2023:7462 8.5.10. RHBA-2023:5347 - Bug Fix Advisory - September 25, 2023 RHBA-2023:5347 8.5.10.1. General The installer now properly generates a new SECRET_KEY for controller when running setup.sh with the -k option (AAP-15565). Added temporary file cleanup for Podman to prevent cannot re-exec process error during job execution (AAP-15248). Added new variables for additional nginx configurations per component (AAP-15124). The installer now correctly enforces only one Event-Driven Ansible host per Ansible Automation Platform installation (AAP-15122). You are now able to sync execution environment images in automation hub to automation controller on upgrade (AAP-15121). awx user configuration now supports rootless Podman (AAP-15072). You can now mount the /var/lib/awx directory as a separate filesystem on execution nodes (AAP-15065). Fixed the linger configuration for an Event-Driven Ansible user (AAP-14745). Fixed the values used for signing installer managed certificates for internal postgres installations (AAP-14236). Subject alt names for component hosts will now only be checked for signing certificates when https is enabled (AAP-14235). Fixed postgres sslmode for verify-full that affected external postgres and postgres signed for 127.0.0.1 for internally managed postgres (AAP-13962). Updated the inventory file to include SSL key and cert parameters for provided SSL web certificates (AAP-13854). Fixed an issue with the awx-rsyslogd process where it starts with the wrong user (AAP-13664). Fixed an issue where the restore process failed to stop pulpcore-worker services on RHEL 9 (AAP-13297). Podman configurations are now correctly aligned to the Event-Driven Ansible home directory (AAP-13289). 8.5.10.2. Related RPM and container releases for bundle installer RHSA-2023:5208 RHBA-2023:5271 RHBA-2023:5316 8.6. Installer releases Table 8.2. 
Component versions per installation bundle Installation bundle Component versions 2.4-7.4 October 01, 2024 ansible-core 2.15.12 Automation controller 4.5.12 Automation hub 4.9.2 Event-Driven Ansible 1.0.7 2.4-7.3 September 19, 2024 ansible-core 2.15.12 Automation controller 4.5.11 Automation hub 4.9.2 Event-Driven Ansible 1.0.7 2.4-7.2 September 06, 2024 ansible-core 2.15.12 Automation controller 4.5.10 Automation hub 4.9.2 Event-Driven Ansible 1.0.7 2.4-7.1 July 15, 2024 ansible-core 2.15.12 Automation controller 4.5.8 Automation hub 4.9.2 Event-Driven Ansible 1.0.7 2.4-7 June 12, 2024 ansible-core 2.15.11 Automation controller 4.5.7 Automation hub 4.9.2 Event-Driven Ansible 1.0.7 8.6.1. RHBA-2024:7454 - bundle installer release 2.4-7.4 - October 01, 2024 RHBA-2024:7454 8.6.1.1. Related RPM releases RHSA-2024:7312 - Security Advisory - September 27, 2024 8.6.1.2. Related container releases RHBA-2024:7282 - Bug Fix Advisory - September 27, 2024 8.6.2. RHBA-2024:6877 - bundle installer release 2.4-7.3 - September 19, 2024 RHBA-2024:6877 8.6.2.1. Related RPM releases RHSA-2024:6765 - Security Advisory - September 18, 2024 8.6.2.2. Related container releases RHBA-2024:6772 - Bug Fix Advisory - September 18, 2024 8.6.3. RHBA-2024:6492 - bundle installer release 2.4-7.2 - September 09, 2024 RHBA-2024:6492 8.6.3.1. Related RPM releases RHSA-2024:6428 - Security Advisory - September 05, 2024 8.6.3.2. Related container releases RHBA-2024:6429 - Bug Fix Advisory - September 05, 2024 8.6.4. RHBA-2024:4555 - bundle installer release 2.4-7.1 - July 15, 2024 RHBA-2024:4555 8.6.4.1. Related RPM releases RHSA-2024:4522 - Security Advisory - July 12, 2024 8.6.4.2. Related container releases RHBA-2024:4523 - Bug Fix Advisory - July 12, 2024 8.6.5. RHBA-2024:3871 - bundle installer release 2.4-7 - June 12, 2024 RHBA-2024:3871 8.6.5.1. Related RPM releases RHSA-2024:3781 - Security Advisory - June 10, 2024 8.6.5.2. Related container releases RHBA-2024:3782 - Bug Fix Advisory - June 10, 2024 8.6.6. RHBA-2024:2074 - bundle installer release 2.4-6.2 - April 25, 2024 RHBA-2024:2074 8.6.6.1. General Resolved a race condition that occurred when there were many nearly simultaneous uploads of the same collection. (AAH-2699) 8.6.6.2. Automation controller Fixed a database connection leak that occurred when the wsrelay main asyncio loop crashes. (AAP-22938) 8.6.7. RHBA-2024:1672 - bundle installer release 2.4-6.1 - April 4, 2024 RHBA-2024:1672 8.6.7.1. General Fixed an issue where worker nodes became unavailable and stuck in a running state (AAP-21828). 
automation-controller: axios: Exposure of confidential data stored in cookies ( CVE-2023-45857 ) python-django: Potential regular expression denial-of-service in django.utils.text.Truncator.words() ( CVE-2024-27351 ) receptor: golang-fips/openssl: Memory leaks in code encrypting and decrypting RSA payloads ( CVE-2024-1394 ) automation-controller: python-aiohttp: HTTP request smuggling ( CVE-2024-23829 ) python-aiohttp: HTTP request smuggling ( CVE-2024-23829 ) automation-controller: aiohttp: follow_symlinks directory traversal vulnerability ( CVE-2024-23334 ) python3x-aiohttp: aiohttp: follow_symlinks directory traversal vulnerability ( CVE-2024-23334 ) python-aiohttp: aiohttp: follow_symlinks directory traversal vulnerability ( CVE-2024-23334 ) automation-controller: Django: denial of service in intcomma template filter ( CVE-2024-24680 ) automation-controller: jinja2: HTML attribute injection when passing user input as keys to xmlattr filter ( CVE-2024-22195 ) automation-controller: python-cryptography: NULL-dereference when loading PKCS7 certificates ( CVE-2023-49083 ) receptor: golang: net/http/internal: Denial of service by resource consumption through HTTP requests ( CVE-2023-39326 ) automation-controller: python-aiohttp: Issues in HTTP parser with header parsing ( CVE-2023-47627 ) automation-controller: GitPython: Blind local file inclusion ( CVE-2023-41040 ) automation-controller: python-twisted: Disordered HTTP pipeline response in twisted.web ( CVE-2023-46137 ) 8.6.7.2. Automation controller The update execution environment image no longer fails with jobs that use the image (AAP-21733). Replaced string validation of English literals with error codes to allow for universal validation and comparison (AAP-21721). The dispatcher now appropriately ends child processes when the dispatcher terminates (AAP-21049). Fixed a bug where schedule prompted variables and survey answers were reset in edit mode when changing one of the basic form fields (AAP-20967). The upgrade from Ansible Tower 3.8.6 to Ansible Automation Platform 2.4 no longer fails after a database schema migration (AAP-19738). Fixed a bug in OpenShift Container Platform deployments that caused the controller task container to restart (AAP-21308). 8.6.8. RHBA-2024:1158 - bundle installer release 2.4-6 - March 6, 2024 RHBA-2024:1158 8.6.8.1. 
General python-django: Django: denial-of-service in intcomma template filter ( CVE-2024-24680 ) pycryptodomex: pycryptodome: Side-channel leakage for OAEP decryption in PyCryptodome and pycryptodomex ( CVE-2023-52323 ) python-pygments: pygments: ReDoS in pygments ( CVE-2022-40896 ) python3x-jinja2: jinja2: HTML attribute injection when passing user input as keys to xmlattr filter ( CVE-2024-22195 ) python-jinja2: jinja2: HTML attribute injection when passing user input as keys to xmlattr filter ( CVE-2024-22195 ) python3x-aiohttp: CRLF injection if user controls the HTTP method using aiohttp client ( CVE-2023-49082 ) python-aiohttp: aiohttp: CRLF injection if user controls the HTTP method using aiohttp client ( CVE-2023-49082 ) python3x-aiohttp: aiohttp: HTTP request modification ( CVE-2023-49081 ) python-aiohttp: aiohttp: HTTP request modification ( CVE-2023-49081 ) python3x-aiohttp: python-aiohttp: Issues in HTTP parser with header parsing ( CVE-2023-47627 ) python-aiohttp: Issues in HTTP parser with header parsing ( CVE-2023-47627 ) python3x-pillow: python-pillow: Uncontrolled resource consumption when text length in an ImageDraw instance operates on a long text argument ( CVE-2023-44271 ) python-pillow: Uncontrolled resource consumption when text length in an ImageDraw instance operates on a long text argument ( CVE-2023-44271 ) 8.6.8.2. Event-Driven Ansible event_driven: Ansible Automation Platform: Insecure WebSocket used when interacting with Event-Driven Ansible server ( CVE-2024-1657 ). 8.6.9. RHBA-2023:6831 - bundle installer release 2.4-2.4 - November 08, 2023 RHBA-2023:6831 8.6.9.1. General python3-urllib3/python39-urllib3: Cookie request header is not stripped during cross-origin redirects ( CVE-2023-43804 ) 8.6.9.2. Automation controller automation-controller: Django: Denial-of-service possibility in django.utils.text.Truncator ( CVE-2023-43665 ) Customers using the infra.controller_configuration collection (which uses ansible.controller collection) to update their Ansible Automation Platform environment no longer receive an HTTP 499 response (AAP-17422). 8.6.10. RHBA-2023:5886 - bundle installer release 2.4-2.3 - October 19, 2023 RHBA-2023:5886 8.6.10.1. General receptor: golang: net/http, x/net/http2: rapid stream resets can cause excessive work (CVE-2023-44487) ( CVE-2023-39325 ) receptor: golang: crypto/tls: slow verification of certificate chains containing large RSA keys ( CVE-2023-29409 ) 8.6.10.2. Automation controller receptor: HTTP/2: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) ( CVE-2023-44487 ) 8.6.11. RHBA-2023:5812 - bundle installer release 2.4-2.2 - October 17, 2023 RHBA-2023:5812 8.6.11.1. General ansible-core: malicious role archive can cause ansible-galaxy to overwrite arbitrary files ( CVE-2023-5115 ) python3-django/python39-django: Denial-of-service possibility in django.utils.text.Truncator ( CVE-2023-43665 ) 8.6.11.2. Automation controller Added a new Subscription Usage page to the controller UI to view historical usage of licenses (AAP-16983). automation-controller: Django: Potential denial of service vulnerability in django.utils.encoding.uri_to_iri() ( CVE-2023-41164 ) 8.6.12. RHBA-2023:5653 - bundle installer release 2.4-2.1 - October 10, 2023 RHBA-2023:5653 8.6.12.1. General Updated ansible-lint to include an offline mode, which is enabled by default, to prevent outbound network calls (AAH-2606). 8.6.12.2. 
Automation controller Fixed settings lookup to no longer leave some services in a supervisord FATAL unresponsive state (AAP-16460). Replaced the SQL commands for creating a partition with the use of ATTACH PARTITION to avoid exclusive table lock on event tables (AAP-16350). Fixed settings to allow simultaneous use of SOCIAL_AUTH_SAML_ORGANIZATION_ATTR and SOCIAL_AUTH_SAML_ORGANIZATION_MAP for a given organization (AAP-16183). Fixed Content Security Policy (CSP) to enable Pendo retrieval (AAP-16057). Updated the Thycotic DevOps Secrets Vault credential plugin to allow for filtering based on secret_field (AAP-15695). 8.6.13. RHBA-2023:5140 - bundle installer release 2.4-1.4 - September 12, 2023 RHBA-2023:5140 8.6.13.1. Automation controller Fixed a bug that caused a deadlock on shutdown when Redis was unavailable (AAP-14203). The login form no longer supports autocomplete on the password field due to security concerns (AAP-15545). automation-controller: cryptography: memory corruption via immutable objects ( CVE-2023-23931 ) automation-controller: GitPython: Insecure non-multi options in clone and clone_from is not blocked ( CVE-2023-40267 ) python3-gitpython/python39-gitpython: Insecure non-multi options in clone and clone_from is not blocked ( CVE-2023-40267 ) 8.6.14. RHBA-2023:4782 - bundle installer release 2.4-1.3 - August 28, 2023 RHBA-2023:4782 8.6.14.1. Automation controller automation-controller: python-django: Potential regular expression denial of service vulnerability in EmailValidator/URLValidator ( CVE-2023-36053 ) automation-controller: python-django: Potential denial-of-service vulnerability in file uploads ( CVE-2023-24580 ) Changing credential types by using the drop-down list in the Launch prompt window no longer causes the screen to disappear (AAP-11444). Upgraded python dependencies which include upgrades from Django 3.2 to 4.2.3, psycopg2 to psycopg3, and additional libraries as needed. Also added a new setting in the UI exposing the CSRF_TRUSTED_ORIGIN settings (AAP-12345). Fixed slow database UPDATE statements on the job events table which could cause a task manager timeout (AAP-12586). Fixed an issue where adding a new label to a job through the Prompt On Launch option would not add the label to the job details (AAP-14204). Added noopener and noreferrer attributes to controller UI links that were missing these attributes (AAP-14345). Fixed the broken User Guide link in the Edit Subscription Details page (AAP-14375). Turned off auto-complete on the remaining controller UI forms that were missing that attribute (AAP-14442). The Add button on the credentials page is now accessible for users with the correct permissions (AAP-14525). Fixed an unexpected error that occurred when adding a new host while using a manifest with size 10 (AAP-14675). Applied environment variables from the AWX_TASK_ENV setting when running credential lookup plugins (AAP-14683). Interrupted jobs (such as canceled jobs) no longer clear facts from hosts if the job ran on an execution node (AAP-14878). Using a license that is missing a usage attribute no longer returns a 400 error (AAP-14880). Fixed sub-keys under data from HashiCorp Vault Secret Lookup responses to check for secrets, if found (AAP-14946). Fixed Ansible facts to retry saving to hosts if there is a database deadlock (AAP-15021). 8.6.14.2. 
Event-Driven Ansible automation-eda-controller: token exposed at importing project ( CVE-2023-4380 ) python3-cryptography/python39-cryptography: memory corruption via immutable objects ( CVE-2023-23931 ) python3-requests/python39-requests: Unintended leak of Proxy-Authorization header ( CVE-2023-32681 ) Contributor and editor roles now have permissions to access users and set the AWX token (AAP-11573). The onboarding wizard now requests controller token creation (AAP-11907). Corrected the filtering capability of the Rule Audit screens so that a search yields results with the starts with function (AAP-11987). Enabling or disabling rulebook activation no longer increases the restarts counter by 1 (AAP-12042). Filtering by a text string now displays all applicable items in the UI, including those that are not visible in the list at that time (AAP-12446). Audit records are no longer missing when running activations with multiple jobs (AAP-12522). The event payload is no longer missing key attributes when a job template fails (AAP-12529). Fixed the Git token leak that occurs when importing a project fails (AAP-12767). The restart policy in Kubernetes (k8s) now restarts a successful activation that is incorrectly marked as failed (AAP-12862). Activation statuses are now reported correctly, whether you are disabling or enabling them (AAP-12896). When the run_job_template action fails, ansible-rulebook prints an error log in the activation output and creates an entry in rule audit so the user is alerted that the rule has failed (AAP-12909). When a user tries to bulk delete rulebook activations from the list, the request now completes successfully and consistently (AAP-13093). The Rulebook Activation link now functions correctly in the Rule Audit Detail UI (AAP-13182). The ansible-rulebook now only connects to the controller if the rulebook being processed has a run_job_template action (AAP-13209). Fixed a bug where some audit rule records had the wrong rulebook link (AAP-13844). Fixed a bug where only the first 10 audit rules had the right link (AAP-13845). Before this update, project credentials could not be updated if there was a change to the credential used in the project. With this update, credentials can be updated in a project with a new or different credential (AAP-13983). The User Access section of the navigation panel no longer disappears after creating a decision environment (AAP-14273). Fixed a bug where filtering for audit rules did not work properly on OpenShift Container Platform (AAP-14512). 8.6.15. RHBA-2023:4621 - bundle installer release 2.4-1.2 - August 10, 2023 RHBA-2023:4621 8.6.15.1. Automation controller automation controller: Html injection in custom login info ( CVE-2023-3971 ) Organization admin users are no longer shown an error on the Instances list (AAP-11195). Fixed the workflow job within the workflow approval to display the correct details (AAP-11433). Credential name search in the ad hoc commands prompt no longer requires case-sensitive input (AAP-11442). The Back to list button in the controller UI now maintains search filters (AAP-11527). Topology view and Instances are only available as sidebar menu options to System Administrators and System Auditors (AAP-11585). Fixed the frequency of the scheduler to run on the correct day of the week as specified by the user (AAP-11776). Fixed an issue with slow database UPDATE statements when using nested tasks (include_tasks) causing task manager timeout (AAP-12586). 
Added the ability to add execution and hop nodes to VM-based controller installations from the UI (AAP-12849). Added the awx-manage command for creating future events table partitions (AAP-12907). Re-enabled Pendo support by providing the correct Pendo API key (AAP-13415). Added the ability to filter teams by using partial names in the dialog for granting teams access to a resource (AAP-13557). Fixed a bug where a weekly rrule string without a BYDAY value would result in the UI throwing a TypeError (AAP-13670). Fixed a server error that happened when deleting workflow jobs ran before event partitioning migration (AAP-13806). Added API reference documentation for the new bulk API endpoint (AAP-13980). Fixed an issue where related items were not visible in some cases. For example, job template instance groups, organization galaxy credentials, and organization instance groups (AAP-14057). 8.6.16. RHBA-2023:4288 - bundle installer release 2.4-1.1 - July 26, 2023 RHBA-2023:4288 8.6.16.1. Automation hub Fixed issue by using gpg key with passphrase for signing services (AAH-2445). 8.7. Ansible plug-ins for Red Hat Developer Hub 8.7.1. 1.0.0 technical preview release (July 2024) The technology preview release of Ansible plug-ins for Red Hat Developer Hub provides links to the following curated content: Learning paths Introduction to Ansible Getting started with the Ansible VS Code extension YAML Essentials for Ansible Getting started with Ansible playbooks Getting started with Content Collections Ansible plug-ins for Red Hat Developer Hub user guide Interactive labs Getting started with Ansible Navigator Getting started with Ansible Builder Writing your first playbook Signing Ansible Content Collections with Private Automation Hub Note Learning paths and interactive labs are hosted on developers.redhat.com for the tech preview. Customers must sign up for a Red Hat Developer account to access them. Software templates Create Ansible Collection Project Create Ansible Playbook Project Documentation updates Installing Ansible plug-ins for Red Hat Developer Hub Using Ansible plug-ins for Red Hat Developer Hub Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_release_notes/asynchronous_updates
|
Chapter 4. Using Jobs and DaemonSets
|
Chapter 4. Using Jobs and DaemonSets 4.1. Running background tasks on nodes automatically with daemon sets As an administrator, you can create and use daemon sets to run replicas of a pod on specific or all nodes in an OpenShift Container Platform cluster. A daemon set ensures that all (or some) nodes run a copy of a pod. As nodes are added to the cluster, pods are added to the cluster. As nodes are removed from the cluster, those pods are removed through garbage collection. Deleting a daemon set will clean up the pods it created. You can use daemon sets to create shared storage, run a logging pod on every node in your cluster, or deploy a monitoring agent on every node. For security reasons, the cluster administrators and the project administrators can create daemon sets. For more information on daemon sets, see the Kubernetes documentation . Important Daemon set scheduling is incompatible with the project's default node selector. If you fail to disable it, the daemon set gets restricted by merging with the default node selector. This results in frequent pod recreates on the nodes that got unselected by the merged node selector, which in turn puts unwanted load on the cluster. 4.1.1. Scheduled by default scheduler A daemon set ensures that all eligible nodes run a copy of a pod. Normally, the node that a pod runs on is selected by the Kubernetes scheduler. However, daemon set pods were previously created and scheduled by the daemon set controller. That introduced the following issues: Inconsistent pod behavior: Normal pods waiting to be scheduled are created and in Pending state, but daemon set pods are not created in Pending state. This is confusing to the user. Pod preemption is handled by the default scheduler. When preemption is enabled, the daemon set controller will make scheduling decisions without considering pod priority and preemption. The ScheduleDaemonSetPods feature, enabled by default in OpenShift Container Platform, lets you schedule daemon sets using the default scheduler instead of the daemon set controller, by adding the NodeAffinity term to the daemon set pods, instead of the spec.nodeName term. The default scheduler is then used to bind the pod to the target host. If node affinity of the daemon set pod already exists, it is replaced. The daemon set controller only performs these operations when creating or modifying daemon set pods, and no changes are made to the spec.template of the daemon set. nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - target-host-name In addition, a node.kubernetes.io/unschedulable:NoSchedule toleration is added automatically to daemon set pods. The default scheduler ignores unschedulable Nodes when scheduling daemon set pods. 4.1.2. Creating daemonsets When creating daemon sets, the nodeSelector field is used to indicate the nodes on which the daemon set should deploy replicas.
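For example, the sample daemon set in the procedure below selects nodes that carry the label role: worker (see the nodeSelector callout in its YAML). If the target nodes do not have that label yet, it can be added with a standard label command; the node name used here is a placeholder rather than a value taken from the procedure:
USD oc label node <node-name> role=worker
A pod replica is then scheduled onto every node whose labels match the selector, which the verification steps that follow confirm.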
Prerequisites Before you start using daemon sets, disable the default project-wide node selector in your namespace, by setting the namespace annotation openshift.io/node-selector to an empty string: USD oc patch namespace myproject -p \ '{"metadata": {"annotations": {"openshift.io/node-selector": ""}}}' Tip You can alternatively apply the following YAML to disable the default project-wide node selector for a namespace: apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: openshift.io/node-selector: '' If you are creating a new project, overwrite the default node selector: USD oc adm new-project <name> --node-selector="" Procedure To create a daemon set: Define the daemon set yaml file: apiVersion: apps/v1 kind: DaemonSet metadata: name: hello-daemonset spec: selector: matchLabels: name: hello-daemonset 1 template: metadata: labels: name: hello-daemonset 2 spec: nodeSelector: 3 role: worker containers: - image: openshift/hello-openshift imagePullPolicy: Always name: registry ports: - containerPort: 80 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log serviceAccount: default terminationGracePeriodSeconds: 10 1 The label selector that determines which pods belong to the daemon set. 2 The pod template's label selector. Must match the label selector above. 3 The node selector that determines on which nodes pod replicas should be deployed. A matching label must be present on the node. Create the daemon set object: USD oc create -f daemonset.yaml To verify that the pods were created, and that each node has a pod replica: Find the daemonset pods: USD oc get pods Example output hello-daemonset-cx6md 1/1 Running 0 2m hello-daemonset-e3md9 1/1 Running 0 2m View the pods to verify the pod has been placed onto the node: USD oc describe pod/hello-daemonset-cx6md|grep Node Example output Node: openshift-node01.hostname.com/10.14.20.134 USD oc describe pod/hello-daemonset-e3md9|grep Node Example output Node: openshift-node02.hostname.com/10.14.20.137 Important If you update a daemon set pod template, the existing pod replicas are not affected. If you delete a daemon set and then create a new daemon set with a different template but the same label selector, it recognizes any existing pod replicas as having matching labels and thus does not update them or create new replicas despite a mismatch in the pod template. If you change node labels, the daemon set adds pods to nodes that match the new labels and deletes pods from nodes that do not match the new labels. To update a daemon set, force new pod replicas to be created by deleting the old replicas or nodes. 4.2. Running tasks in pods using jobs A job executes a task in your OpenShift Container Platform cluster. A job tracks the overall progress of a task and updates its status with information about active, succeeded, and failed pods. Deleting a job will clean up any pod replicas it created. Jobs are part of the Kubernetes API, which can be managed with oc commands like other object types. Sample Job specification apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: OnFailure 6 1 The pod replicas a job should run in parallel. 2 Successful pod completions are needed to mark a job completed. 3 The maximum duration the job can run. 4 The number of retries for a job. 
5 The template for the pod the controller creates. 6 The restart policy of the pod. See the Kubernetes documentation for more information about jobs. 4.2.1. Understanding jobs and cron jobs A job tracks the overall progress of a task and updates its status with information about active, succeeded, and failed pods. Deleting a job cleans up any pods it created. Jobs are part of the Kubernetes API, which can be managed with oc commands like other object types. There are two possible resource types that allow creating run-once objects in OpenShift Container Platform: Job A regular job is a run-once object that creates a task and ensures the job finishes. There are three main types of task suitable to run as a job: Non-parallel jobs: A job that starts only one pod, unless the pod fails. The job is complete as soon as its pod terminates successfully. Parallel jobs with a fixed completion count: a job that starts multiple pods. The job represents the overall task and is complete when there is one successful pod for each value in the range 1 to the completions value. Parallel jobs with a work queue: A job with multiple parallel worker processes in a given pod. OpenShift Container Platform coordinates pods to determine what each should work on or use an external queue service. Each pod is independently capable of determining whether or not all peer pods are complete and that the entire job is done. When any pod from the job terminates with success, no new pods are created. When at least one pod has terminated with success and all pods are terminated, the job is successfully completed. When any pod has exited with success, no other pod should be doing any work for this task or writing any output. Pods should all be in the process of exiting. For more information about how to make use of the different types of job, see Job Patterns in the Kubernetes documentation. Cron job A job can be scheduled to run multiple times, using a cron job. A cron job builds on a regular job by allowing you to specify how the job should be run. Cron jobs are part of the Kubernetes API, which can be managed with oc commands like other object types. Cron jobs are useful for creating periodic and recurring tasks, like running backups or sending emails. Cron jobs can also schedule individual tasks for a specific time, such as if you want to schedule a job for a low activity period. A cron job creates a Job object based on the timezone configured on the control plane node that runs the cronjob controller. Warning A cron job creates a Job object approximately once per execution time of its schedule, but there are circumstances in which it fails to create a job or two jobs might be created. Therefore, jobs must be idempotent and you must configure history limits. 4.2.1.1. Understanding how to create jobs Both resource types require a job configuration that consists of the following key parts: A pod template, which describes the pod that OpenShift Container Platform creates. The parallelism parameter, which specifies how many pods running in parallel at any point in time should execute a job. For non-parallel jobs, leave unset. When unset, defaults to 1 . The completions parameter, specifying how many successful pod completions are needed to finish a job. For non-parallel jobs, leave unset. When unset, defaults to 1 . For parallel jobs with a fixed completion count, specify a value. For parallel jobs with a work queue, leave unset. When unset defaults to the parallelism value. 4.2.1.2. 
Understanding how to set a maximum duration for jobs When defining a job, you can define its maximum duration by setting the activeDeadlineSeconds field. It is specified in seconds and is not set by default. When not set, there is no maximum duration enforced. The maximum duration is counted from the time when a first pod gets scheduled in the system, and defines how long a job can be active. It tracks overall time of an execution. After reaching the specified timeout, the job is terminated by OpenShift Container Platform. 4.2.1.3. Understanding how to set a job back off policy for pod failure A job can be considered failed, after a set amount of retries due to a logical error in configuration or other similar reasons. Failed pods associated with the job are recreated by the controller with an exponential back off delay ( 10s , 20s , 40s ...) capped at six minutes. The limit is reset if no new failed pods appear between controller checks. Use the spec.backoffLimit parameter to set the number of retries for a job. 4.2.1.4. Understanding how to configure a cron job to remove artifacts Cron jobs can leave behind artifact resources such as jobs or pods. As a user it is important to configure history limits so that old jobs and their pods are properly cleaned. There are two fields within cron job's spec responsible for that: .spec.successfulJobsHistoryLimit . The number of successful finished jobs to retain (defaults to 3). .spec.failedJobsHistoryLimit . The number of failed finished jobs to retain (defaults to 1). Tip Delete cron jobs that you no longer need: USD oc delete cronjob/<cron_job_name> Doing this prevents them from generating unnecessary artifacts. You can suspend further executions by setting the spec.suspend to true. All subsequent executions are suspended until you reset to false . 4.2.1.5. Known limitations The job specification restart policy only applies to the pods , and not the job controller . However, the job controller is hard-coded to keep retrying jobs to completion. As such, restartPolicy: Never or --restart=Never results in the same behavior as restartPolicy: OnFailure or --restart=OnFailure . That is, when a job fails it is restarted automatically until it succeeds (or is manually discarded). The policy only sets which subsystem performs the restart. With the Never policy, the job controller performs the restart. With each attempt, the job controller increments the number of failures in the job status and create new pods. This means that with each failed attempt, the number of pods increases. With the OnFailure policy, kubelet performs the restart. Each attempt does not increment the number of failures in the job status. In addition, kubelet will retry failed jobs starting pods on the same nodes. 4.2.2. Creating jobs You create a job in OpenShift Container Platform by creating a job object. Procedure To create a job: Create a YAML file similar to the following: apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: OnFailure 6 1 Optional: Specify how many pod replicas a job should run in parallel; defaults to 1 . For non-parallel jobs, leave unset. When unset, defaults to 1 . 2 Optional: Specify how many successful pod completions are needed to mark a job completed. For non-parallel jobs, leave unset. When unset, defaults to 1 . 
For parallel jobs with a fixed completion count, specify the number of completions. For parallel jobs with a work queue, leave unset. When unset defaults to the parallelism value. 3 Optional: Specify the maximum duration the job can run. 4 Optional: Specify the number of retries for a job. This field defaults to six. 5 Specify the template for the pod the controller creates. 6 Specify the restart policy of the pod: Never . Do not restart the job. OnFailure . Restart the job only if it fails. Always . Always restart the job. For details on how OpenShift Container Platform uses restart policy with failed containers, see the Example States in the Kubernetes documentation. Create the job: USD oc create -f <file-name>.yaml Note You can also create and launch a job from a single command using oc create job . The following command creates and launches a job similar to the one specified in the example: USD oc create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)' 4.2.3. Creating cron jobs You create a cron job in OpenShift Container Platform by creating a CronJob object. Procedure To create a cron job: Create a YAML file similar to the following: apiVersion: batch/v1 kind: CronJob metadata: name: pi spec: schedule: "*/1 * * * *" 1 concurrencyPolicy: "Replace" 2 startingDeadlineSeconds: 200 3 suspend: true 4 successfulJobsHistoryLimit: 3 5 failedJobsHistoryLimit: 1 6 jobTemplate: 7 spec: template: metadata: labels: 8 parent: "cronjobpi" spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] restartPolicy: OnFailure 9 1 Schedule for the job specified in cron format . In this example, the job will run every minute. 2 An optional concurrency policy, specifying how to treat concurrent jobs within a cron job. Only one of the following concurrent policies may be specified. If not specified, this defaults to allowing concurrent executions. Allow allows cron jobs to run concurrently. Forbid forbids concurrent runs, skipping the next run if the previous run has not finished yet. Replace cancels the currently running job and replaces it with a new one. 3 An optional deadline (in seconds) for starting the job if it misses its scheduled time for any reason. Missed job executions will be counted as failed ones. If not specified, there is no deadline. 4 An optional flag allowing the suspension of a cron job. If set to true , all subsequent executions will be suspended. 5 The number of successful finished jobs to retain (defaults to 3). 6 The number of failed finished jobs to retain (defaults to 1). 7 Job template. This is similar to the job example. 8 Sets a label for jobs spawned by this cron job. 9 The restart policy of the pod. This does not apply to the job controller. Note The .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit fields are optional. These fields specify how many completed and failed jobs should be kept. By default, they are set to 3 and 1 respectively. Setting a limit to 0 corresponds to keeping none of the corresponding kind of jobs after they finish. Create the cron job: USD oc create -f <file-name>.yaml Note You can also create and launch a cron job from a single command using oc create cronjob . The following command creates and launches a cron job similar to the one specified in the example: USD oc create cronjob pi --image=perl --schedule='*/1 * * * *' -- perl -Mbignum=bpi -wle 'print bpi(2000)' With oc create cronjob , the --schedule option accepts schedules in cron format .
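To see these settings in action, a quick check can be run against the pi cron job from the example; this is a supplementary sketch rather than part of the original procedure, and it assumes the cron job above has already been created:
USD oc get cronjob pi
USD oc patch cronjob/pi -p '{"spec":{"suspend":false}}'
USD oc get jobs
Because the example sets suspend: true, the cron job only begins spawning jobs once the patch sets the flag to false, and the number of finished jobs left in the oc get jobs listing stays within the successfulJobsHistoryLimit and failedJobsHistoryLimit values described above.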
|
[
"nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - target-host-name",
"oc patch namespace myproject -p '{\"metadata\": {\"annotations\": {\"openshift.io/node-selector\": \"\"}}}'",
"apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: openshift.io/node-selector: ''",
"oc adm new-project <name> --node-selector=\"\"",
"apiVersion: apps/v1 kind: DaemonSet metadata: name: hello-daemonset spec: selector: matchLabels: name: hello-daemonset 1 template: metadata: labels: name: hello-daemonset 2 spec: nodeSelector: 3 role: worker containers: - image: openshift/hello-openshift imagePullPolicy: Always name: registry ports: - containerPort: 80 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log serviceAccount: default terminationGracePeriodSeconds: 10",
"oc create -f daemonset.yaml",
"oc get pods",
"hello-daemonset-cx6md 1/1 Running 0 2m hello-daemonset-e3md9 1/1 Running 0 2m",
"oc describe pod/hello-daemonset-cx6md|grep Node",
"Node: openshift-node01.hostname.com/10.14.20.134",
"oc describe pod/hello-daemonset-e3md9|grep Node",
"Node: openshift-node02.hostname.com/10.14.20.137",
"apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6",
"oc delete cronjob/<cron_job_name>",
"apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6",
"oc create -f <file-name>.yaml",
"oc create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'",
"apiVersion: batch/v1 kind: CronJob metadata: name: pi spec: schedule: \"*/1 * * * *\" 1 concurrencyPolicy: \"Replace\" 2 startingDeadlineSeconds: 200 3 suspend: true 4 successfulJobsHistoryLimit: 3 5 failedJobsHistoryLimit: 1 6 jobTemplate: 7 spec: template: metadata: labels: 8 parent: \"cronjobpi\" spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 9",
"oc create -f <file-name>.yaml",
"oc create cronjob pi --image=perl --schedule='*/1 * * * *' -- perl -Mbignum=bpi -wle 'print bpi(2000)'"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/nodes/using-jobs-and-daemonsets
|
Chapter 141. Webhook
|
Chapter 141. Webhook Only consumer is supported The Webhook meta component allows other Camel components to configure webhooks on a remote webhook provider and listening for them. The following components currently provide webhook endpoints: Telegram Typically, other components that support webhook will bring this dependency transitively. 141.1. Dependencies When using webhook with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-webhook-starter</artifactId> </dependency> 141.2. URI Format 141.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 141.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 141.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 141.4. Component Options The Webhook component supports 8 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean webhookAutoRegister (consumer) Automatically register the webhook at startup and unregister it on shutdown. true boolean webhookBasePath (consumer) The first (base) path element where the webhook will be exposed. It's a good practice to set it to a random string, so that it cannot be guessed by unauthorized parties. String webhookComponentName (consumer) The Camel Rest component to use for the REST transport, such as netty-http. String webhookExternalUrl (consumer) The URL of the current service as seen by the webhook provider. String webhookPath (consumer) The path where the webhook endpoint will be exposed (relative to basePath, if any). String autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean configuration (advanced) Set the default configuration for the webhook meta-component. WebhookConfiguration 141.5. Endpoint Options The Webhook endpoint is configured using URI syntax: with the following path and query parameters: 141.5.1. Path Parameters (1 parameters) Name Description Default Type endpointUri (consumer) Required The delegate uri. Must belong to a component that supports webhooks. String 141.5.2. Query Parameters (8 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean webhookAutoRegister (consumer) Automatically register the webhook at startup and unregister it on shutdown. true boolean webhookBasePath (consumer) The first (base) path element where the webhook will be exposed. It's a good practice to set it to a random string, so that it cannot be guessed by unauthorized parties. String webhookComponentName (consumer) The Camel Rest component to use for the REST transport, such as netty-http. String webhookExternalUrl (consumer) The URL of the current service as seen by the webhook provider. String webhookPath (consumer) The path where the webhook endpoint will be exposed (relative to basePath, if any). String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern 141.6. Examples Examples of webhook component are provided in the documentation of the delegate components that support it. 141.7. Spring Boot Auto-Configuration The component supports 9 options, which are listed below. Name Description Default Type camel.component.webhook.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.webhook.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.webhook.configuration Set the default configuration for the webhook meta-component. The option is a org.apache.camel.component.webhook.WebhookConfiguration type. 
WebhookConfiguration camel.component.webhook.enabled Whether to enable auto configuration of the webhook component. This is enabled by default. Boolean camel.component.webhook.webhook-auto-register Automatically register the webhook at startup and unregister it on shutdown. true Boolean camel.component.webhook.webhook-base-path The first (base) path element where the webhook will be exposed. It's a good practice to set it to a random string, so that it cannot be guessed by unauthorized parties. String camel.component.webhook.webhook-component-name The Camel Rest component to use for the REST transport, such as netty-http. String camel.component.webhook.webhook-external-url The URL of the current service as seen by the webhook provider. String camel.component.webhook.webhook-path The path where the webhook endpoint will be exposed (relative to basePath, if any). String
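As a minimal sketch of how these options combine (the host name, base path value, and the Telegram endpoint URI below are illustrative assumptions, not values taken from this reference), the meta component can be configured from application.properties and then consumed by prefixing the delegate endpoint URI with the webhook scheme:
camel.component.webhook.webhook-component-name=netty-http
camel.component.webhook.webhook-external-url=https://my.public.hostname
camel.component.webhook.webhook-base-path=3e7f2a91-random-token
With webhookAutoRegister left at its default of true, a consumer route such as from("webhook:telegram:bots?authorizationToken=<token>") then registers the callback with the remote provider at startup and unregisters it on shutdown.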
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-webhook-starter</artifactId> </dependency>",
"webhook:endpoint[?options]",
"webhook:endpointUri"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-webhook-component-starter
|
Chapter 17. Networking (neutron) Parameters
|
Chapter 17. Networking (neutron) Parameters You can modify the neutron service with networking parameters. Parameter Description CertificateKeySize Specifies the private key size used when creating the certificate. The default value is 2048 . ContainerOvnCertificateKeySize Override the private key size used when creating the certificate for this service. DerivePciWhitelistEnabled Whether to enable or not the pci passthrough whitelist automation. The default value is true . DhcpAgentNotification Enables DHCP agent notifications. The default value is false . DockerAdditionalSockets Additional domain sockets for the docker daemon to bind to (useful for mounting into containers that launch other containers). The default value is ['/var/lib/openstack/docker.sock'] . DockerNeutronDHCPAgentUlimit Ulimit for OpenStack Networking (neutron) DHCP Agent Container. The default value is ['nofile=16384'] . DockerNeutronL3AgentUlimit Ulimit for OpenStack Networking (neutron) L3 Agent Container. The default value is ['nofile=16384'] . DockerOpenvswitchUlimit Ulimit for Openvswitch Container. The default value is ['nofile=16384'] . DockerPuppetMountHostPuppet Whether containerized puppet executions use modules from the baremetal host. Defaults to true. Can be set to false to consume puppet modules from containers directly. The default value is true . DockerSRIOVUlimit Ulimit for SR-IOV Container. The default value is ['nofile=16384'] . EnableSQLAlchemyCollectd Set to true to enable the SQLAlchemy-collectd server plugin. The default value is false . EnableVLANTransparency If True, then allow plugins that support it to create VLAN transparent networks. The default value is false . MemcacheUseAdvancedPool Use the advanced (eventlet safe) memcached client pool. The default value is true . NeutronAgentDownTime Seconds to regard the agent as down; should be at least twice NeutronGlobalReportInterval, to be sure the agent is down for good. The default value is 600 . NeutronAgentExtensions Comma-separated list of extensions enabled for the OpenStack Networking (neutron) agents. The default value is qos . NeutronAllowL3AgentFailover Allow automatic l3-agent failover. The default value is True . NeutronApiOptEnvVars Hash of optional environment variables. NeutronApiOptVolumes List of optional volumes to be mounted. NeutronBridgeMappings The logical to physical bridge mappings to use. The default ( datacentre:br-ex ) maps br-ex (the external bridge on hosts) to a physical name datacentre , which provider networks can use (for example, the default floating network). If changing this, either use different post-install network scripts or be sure to keep datacentre as a mapping network name. The default value is datacentre:br-ex . NeutronCertificateKeySize Override the private key size used when creating the certificate for this service. NeutronCorePlugin The core plugin for networking. The value should be the entrypoint to be loaded from neutron.core_plugins namespace. The default value is ml2 . NeutronDBSyncExtraParams String of extra command line parameters to append to the neutron-db-manage upgrade head command. NeutronDefaultAvailabilityZones Comma-separated list of default network availability zones to be used by OpenStack Networking (neutron) if its resource is created without availability zone hints. If not set, no AZs will be configured for OpenStack Networking (neutron) network services. NeutronDhcpAgentAvailabilityZone Availability zone for OpenStack Networking (neutron) DHCP agent. 
If not set, no AZs will be configured for OpenStack Networking (neutron) network services. NeutronDhcpAgentDnsmasqDnsServers List of servers to use as dnsmasq forwarders. NeutronDhcpAgentDnsmasqEnableAddr6List Enable dhcp-host entry with list of addresses when port has multiple IPv6 addresses in the same subnet. The default value is true . NeutronDhcpAgentsPerNetwork The number of DHCP agents to schedule per network. The default value is 0 . NeutronDhcpCertificateKeySize Override the private key size used when creating the certificate for this service. NeutronDhcpLoadType Additional to the availability zones aware network scheduler. The default value is networks . NeutronDhcpOvsIntegrationBridge Name of Open vSwitch bridge to use. NeutronDhcpServerBroadcastReply OpenStack Networking (neutron) DHCP agent to use broadcast in DHCP replies. The default value is false . NeutronDnsDomain Domain to use for building the hostnames. The default value is openstacklocal . NeutronEnableARPResponder Enable ARP responder feature in the OVS Agent. The default value is false . NeutronEnableDnsmasqDockerWrapper Generate a dnsmasq wrapper script so that OpenStack Networking (neutron) launches dnsmasq in a separate container. The default value is true . NeutronEnableDVR Enable Distributed Virtual Router. NeutronEnableForceMetadata If True, DHCP always provides metadata route to VM. The default value is false . NeutronEnableHaproxyDockerWrapper Generate a wrapper script so OpenStack Networking (neutron) launches haproxy in a separate container. The default value is true . NeutronEnableIgmpSnooping Enable IGMP Snooping. The default value is false . NeutronEnableInternalDNS If True, enable the internal OpenStack Networking (neutron) DNS server that provides name resolution between VMs. This parameter has no effect if NeutronDhcpAgentDnsmasqDnsServers is set. The default value is false . NeutronEnableIsolatedMetadata If True, DHCP allows metadata support on isolated networks. The default value is false . NeutronEnableKeepalivedWrapper Generate a wrapper script so OpenStack Networking (neutron) launches keepalived processes in a separate container. The default value is true . NeutronEnableL2Pop Enable/disable the L2 population feature in the OpenStack Networking (neutron) agents. The default value is False . NeutronEnableMetadataNetwork If True, DHCP provides metadata network. Requires either NeutronEnableIsolatedMetadata or NeutronEnableForceMetadata parameters to also be True. The default value is false . NeutronExcludeDevices List of <network_device>:<excluded_devices> mapping network_device to the agent's node-specific list of virtual functions that should not be used for virtual networking. excluded_devices is a semicolon separated list of virtual functions to exclude from network_device. The network_device in the mapping should appear in the physical_device_mappings list. NeutronFirewallDriver Firewall driver for realizing OpenStack Networking (neutron) security group function. NeutronFlatNetworks Sets the flat network name to configure in plugins. The default value is datacentre . NeutronGeneveMaxHeaderSize Geneve encapsulation header size. The default value is 38 . NeutronGlobalPhysnetMtu MTU of the underlying physical network. OpenStack Networking (neutron) uses this value to calculate MTU for all virtual network components. For flat and VLAN networks, OpenStack Networking uses this value without modification. 
For overlay networks such as VXLAN, OpenStack Networking automatically subtracts the overlay protocol overhead from this value. The default value is 0 . NeutronGlobalReportInterval Seconds between nodes reporting state to server; should be less than NeutronAgentDownTime, best if it is half or less than NeutronAgentDownTime. The default value is 300 . NeutronInterfaceDriver OpenStack Networking (neutron) DHCP Agent interface driver. The default value is neutron.agent.linux.interface.OVSInterfaceDriver . NeutronL3AgentAvailabilityZone Availability zone for OpenStack Networking (neutron) L3 agent. If not set, no AZs will be configured for OpenStack Networking (neutron) network services. NeutronL3AgentExtensions Comma-separated list of extensions enabled for the OpenStack Networking (neutron) L3 agent. NeutronL3AgentLoggingBurstLimit Maximum number of packets per rate_limit. The default value is 25 . NeutronL3AgentLoggingLocalOutputLogBase Output logfile path on agent side, default syslog file. NeutronL3AgentLoggingRateLimit Maximum number of packets logging per second. The default value is 100 . NeutronL3AgentMode Agent mode for L3 agent. Must be legacy or dvr_snat . The default value is legacy . NeutronL3AgentRadvdUser The username passed to radvd, used to drop root privileges and change user ID to username and group ID to the primary group of username. If no user specified, the user executing the L3 agent will be passed. If "root" specified, because radvd is spawned as root, no "username" parameter will be passed. The default value is root . NeutronMechanismDrivers The mechanism drivers for the OpenStack Networking (neutron) tenant network. The default value is ovn . NeutronMetadataProxySharedSecret Shared secret to prevent spoofing. NeutronMetadataWorkers Sets the number of worker processes for the OpenStack Networking (neutron) OVN metadata agent. The default value results in the configuration being left unset and a system-dependent default will be chosen (usually the number of processors). Please note that this can result in a large number of processes and memory consumption on systems with a large core count. On such systems it is recommended that a non-default value be selected that matches the load requirements. NeutronML2PhysicalNetworkMtus A list of mappings of physical networks to MTU values. The format of the mapping is <physnet>:<mtu val> . This mapping allows you to specify a physical network MTU value that differs from the default segment_mtu value in ML2 plugin and overwrites values from global_physnet_mtu for the selected network. NeutronNetworkSchedulerDriver The network schedule driver to use for availability zones. The default value is neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler . NeutronNetworkType The tenant network type for OpenStack Networking (neutron). The default value is geneve . If you change this value, make sure the new value matches the parameter OVNEncapType . For example, if you want to use VXLAN instead of Geneve in an ML2/OVN environment, ensure that both NeutronNetworkType and OVNEncapType are set to vxlan . The default value is geneve . NeutronNetworkVLANRanges The OpenStack Networking (neutron) ML2 and Open vSwitch VLAN mapping range to support. Defaults to permitting any VLAN on the datacentre physical network (See NeutronBridgeMappings ). The default value is datacentre:1:1000 . NeutronOverlayIPVersion IP version used for all overlay network endpoints. The default value is 4 . 
NeutronOVNLoggingBurstLimit Maximum number of packets per rate_limit. The default value is 25 . NeutronOVNLoggingLocalOutputLogBase Output logfile path on agent side, default syslog file. NeutronOVNLoggingRateLimit Maximum number of packets logging per second. The default value is 100 . NeutronOVSAgentLoggingBurstLimit Maximum number of packets per rate_limit. The default value is 25 . NeutronOVSAgentLoggingLocalOutputLogBase Output logfile path on agent side, default syslog file. NeutronOVSAgentLoggingRateLimit Maximum number of packets logging per second. The default value is 100 . NeutronOVSFirewallDriver Configure the classname of the firewall driver to use for implementing security groups. Possible values depend on system configuration. Some examples are: noop , openvswitch , iptables_hybrid . The default value of an empty string results in a default supported configuration. NeutronOvsIntegrationBridge Name of Open vSwitch bridge to use. NeutronOvsResourceProviderBandwidths Comma-separated list of <bridge>:<egress_bw>:<ingress_bw> tuples, showing the available bandwidth for the given bridge in the given direction. The direction is meant from VM perspective. Bandwidth is measured in kilobits per second (kbps). The bridge must appear in bridge_mappings as the value. NeutronOVSTunnelCsum Set or un-set the tunnel header checksum on outgoing IP packet carrying GRE/VXLAN tunnel. The default value is false . NeutronOvsVnicTypeBlacklist Comma-separated list of VNIC types for which support in OpenStack Networking (neutron) is administratively prohibited by the OVS mechanism driver. NeutronPassword The password for the OpenStack Networking (neutron) service and database account. NeutronPermittedEthertypes Set additional ethertypes to be configured on OpenStack Networking (neutron) firewalls. NeutronPhysicalDevMappings List of <physical_network>:<physical device>. All physical networks listed in network_vlan_ranges on the server should have mappings to appropriate interfaces on each agent. Example "tenant0:ens2f0,tenant1:ens2f1". NeutronPluginExtensions Comma-separated list of enabled extension plugins. The default value is qos,port_security,dns_domain_ports . NeutronPluginMl2PuppetTags Puppet resource tag names that are used to generate configuration files with puppet. The default value is neutron_plugin_ml2 . NeutronPortQuota Number of ports allowed per tenant. A negative value means unlimited. The default value is 500 . NeutronRouterSchedulerDriver The router scheduler driver to use for availability zones. The default value is neutron.scheduler.l3_agent_scheduler.AZLeastRoutersScheduler . NeutronRpcWorkers Sets the number of RPC workers for the OpenStack Networking (neutron) service. If not specified, it takes the value of NeutronWorkers, and if this is not specified either, the default value results in the configuration being left unset and a system-dependent default will be chosen (usually 1). NeutronSecurityGroupQuota Number of security groups allowed per tenant. A negative value means unlimited. The default value is 10 . NeutronServicePlugins Comma-separated list of service plugin entrypoints. The default value is qos,ovn-router,trunk,segments,port_forwarding,log . NeutronSriovAgentExtensions Comma-separated list of extensions enabled for the OpenStack Networking (neutron) SR-IOV agents. NeutronSriovResourceProviderBandwidths Comma-separated list of <network_device>:<egress_bw>:<ingress_bw> tuples, showing the available bandwidth for the given device in the given direction.
The direction is meant from VM perspective. Bandwidth is measured in kilobits per second (kbps). The device must appear in physical_device_mappings as the value. NeutronSriovVnicTypeBlacklist Comma-separated list of VNIC types for which support in OpenStack Networking (neutron) is administratively prohibited by the SR-IOV mechanism driver. NeutronTunnelIdRanges Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation. The default value is ['1:4094'] . NeutronTunnelTypes The tunnel types for the OpenStack Networking (neutron) tenant network. The default value is vxlan . NeutronTypeDrivers Comma-separated list of network type driver entrypoints to be loaded. The default value is geneve,vxlan,vlan,flat . NeutronVhostuserSocketDir The vhost-user socket directory for OVS. NeutronVniRanges Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of VXLAN VNI IDs that are available for tenant network allocation. The default value is ['1:65536'] . NeutronWorkers Sets the number of API and RPC workers for the OpenStack Networking service. Note that more workers create a larger number of processes on systems, which results in excess memory consumption. It is recommended to choose a suitable non-default value on systems with high CPU core counts. 0 sets to the OpenStack internal default, which is equal to the number of CPU cores on the node. NotificationDriver Driver or drivers to handle sending notifications. The default value is noop . OVNAvailabilityZone The az options to configure in ovs db, e.g. [ az-0 , az-1 , az-2 ]. OVNCMSOptions The CMS options to configure in ovs db. OVNContainerCpusetCpus Limit the specific CPUs or cores a container can use. It can be specified as a single core (ex. 0), as a comma-separated list (ex. 0,1), as a range (ex. 0-3) or a combination of methods (ex. 0-3,7,11-15). The selected cores should be isolated from guests and hypervisor in order to obtain best possible performance. OVNControllerGarpMaxTimeout When used, this configuration value specifies the maximum timeout (in seconds) between two consecutive GARP packets sent by ovn-controller. The default value is 0 . OVNControllerImageUpdateTimeout During update, how long we wait for the container image to be updated, in seconds. The default value is 600 . OVNControllerUpdateTimeout During update, how long we wait for the container to be updated, in seconds. The default value is 600 . OVNDbConnectionTimeout Timeout in seconds for the OVSDB connection transaction. The default value is 180 . OvnDBSCertificateKeySize Override the private key size used when creating the certificate for this service. OVNDnsServers List of servers to use as DNS forwarders. OVNEmitNeedToFrag Configure OVN to emit "need to frag" packets in case of MTU mismatch. The default value is false . OVNEnableHaproxyDockerWrapper Generate a wrapper script so that haproxy is launched in a separate container. The default value is true . OVNEncapTos The value to be applied to OVN tunnel interface's option:tos as specified in the Open_vSwitch database Interface table. This feature is supported from OVN v21.12.0. The default value is 0 . OVNEncapType Type of encapsulation used in OVN. It can be geneve or vxlan . The default value is geneve . If you change this value, make sure the new value is also listed in the parameter NeutronNetworkType . For example, if you change OVNEncapType to vxlan , ensure that the list in NeutronNetworkType includes vxlan .
The default value is geneve . OvnHardwareOffloadedQos Enable the QoS support for hardware offloaded ports. The default value is false . OVNIntegrationBridge Name of the OVS bridge to use as integration bridge by OVN Controller. The default value is br-int . OvnMetadataCertificateKeySize Override the private key size used when creating the certificate for this service. OVNMetadataEnabled Whether Metadata Service has to be enabled. The default value is true . OVNNeutronSyncMode The synchronization mode of OVN with OpenStack Networking (neutron) DB. The default value is log . OVNNorthboundClusterPort Cluster port of the OVN Northbound DB server. The default value is 6643 . OVNNorthboundServerPort Port of the OVN Northbound DB server. The default value is 6641 . OVNOfctrlWaitBeforeClear Sets the time ovn-controller will wait on startup before clearing all openflow rules and installing the new ones, in ms. The default value is 8000 . OVNOpenflowProbeInterval The inactivity probe interval of the OpenFlow connection to the OpenvSwitch integration bridge, in seconds. The default value is 60 . OVNOvsdbProbeInterval Probe interval in ms for the OVSDB session. The default value is 60000 . OVNQosDriver OVN notification driver for OpenStack Networking (neutron) QOS service plugin. The default value is ovn-qos . OVNRemoteProbeInterval Probe interval in ms. The default value is 60000 . OVNSouthboundClusterPort Cluster port of the OVN Southbound DB server. The default value is 6644 . OVNSouthboundServerPort Port of the OVN Southbound DB server. The default value is 6642 . OVNStaticBridgeMacMappings Static OVN Bridge MAC address mappings. Unique OVN bridge MAC addresses are dynamically allocated by creating OpenStack Networking (neutron) ports. When OpenStack Networking (neutron) isn't available, for instance in the standalone deployment, use this parameter to provide static OVN bridge MAC addresses. For example: controller-0: datacenter: 00:00:5E:00:53:00 provider: 00:00:5E:00:53:01 compute-0: datacenter: 00:00:5E:00:54:00 provider: 00:00:5E:00:54:01. OvsDisableEMC Disable OVS Exact Match Cache. The default value is false . OvsHwOffload Enable OVS Hardware Offload. This feature is supported from OVS 2.8.0. The default value is false . PythonInterpreter The python interpreter to use for python and ansible actions. The default value is `USD(command -v python3 || command -v python)`. TenantNetPhysnetMtu MTU of the underlying physical network. OpenStack Networking (neutron) uses this value to calculate MTU for all virtual network components. For flat and VLAN networks, OpenStack Networking (neutron) uses this value without modification. For overlay networks such as VXLAN, OpenStack Networking (neutron) automatically subtracts the overlay protocol overhead from this value. (The mtu setting of the Tenant network in network_data.yaml controls this parameter.) The default value is 1500 .
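The parameters above are normally set in a custom environment file that you pass to the overcloud deployment command. The following snippet is a minimal sketch only: the file name is arbitrary and the values are illustrative, not recommendations; adjust them to match the parameter descriptions above.

parameter_defaults:
  # Keep the tenant network type and the OVN encapsulation type in agreement.
  NeutronNetworkType: vxlan
  OVNEncapType: vxlan
  # Map the datacentre physical network to the external bridge on the hosts.
  NeutronBridgeMappings: datacentre:br-ex
  NeutronNetworkVLANRanges: datacentre:1:1000
  # MTU of the underlying physical network.
  NeutronGlobalPhysnetMtu: 1500

For example, if the snippet is saved as neutron-overrides.yaml (a hypothetical name), it could be included in the deployment with openstack overcloud deploy ... -e neutron-overrides.yaml.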
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/overcloud_parameters/ref_networking-neutron-parameters_overcloud_parameters
|
Appendix B. Restoring manual changes overwritten by a Puppet run
|
Appendix B. Restoring manual changes overwritten by a Puppet run If your manual configuration has been overwritten by a Puppet run, you can restore the files to their previous state. The following example shows you how to restore a DHCP configuration file overwritten by a Puppet run. Procedure Copy the file you intend to restore. This allows you to compare the files to check for any mandatory changes required by the upgrade. This is not common for DNS or DHCP services. Check the log files to note down the md5sum of the overwritten file. For example: Restore the overwritten file: Compare the backup file and the restored file, and edit the restored file to include any mandatory changes required by the upgrade.
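A sketch of the comparison step: diff shows what differs between the backed-up copy and the restored file so you can re-apply only the manual changes you need. The paths follow the DHCP example above; adjust them for the file you restored.

# Compare the backup with the restored file before editing it
diff -u /etc/dhcp/dhcpd.backup /etc/dhcp/dhcpd.conf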
|
[
"cp /etc/dhcp/dhcpd.conf /etc/dhcp/dhcpd.backup",
"journalctl -xe /Stage[main]/Dhcp/File[/etc/dhcp/dhcpd.conf]: Filebucketed /etc/dhcp/dhcpd.conf to puppet with sum 622d9820b8e764ab124367c68f5fa3a1",
"puppet filebucket restore --local --bucket /var/lib/puppet/clientbucket /etc/dhcp/dhcpd.conf \\ 622d9820b8e764ab124367c68f5fa3a1"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_satellite_server_in_a_disconnected_network_environment/restoring-manual-changes-overwritten-by-a-puppet-run_satellite
|
Chapter 13. PodMetrics [metrics.k8s.io/v1beta1]
|
Chapter 13. PodMetrics [metrics.k8s.io/v1beta1] Description PodMetrics sets resource usage metrics of a pod. Type object Required timestamp window containers 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources containers array Metrics for all containers are collected within the same time window. containers[] object ContainerMetrics sets resource usage metrics of a container. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata timestamp Time The following fields define the time interval from which metrics were collected from the interval [Timestamp-Window, Timestamp]. window Duration 13.1.1. .containers Description Metrics for all containers are collected within the same time window. Type array 13.1.2. .containers[] Description ContainerMetrics sets resource usage metrics of a container. Type object Required name usage Property Type Description name string Container name corresponding to the one from pod.spec.containers. usage object (Quantity) The memory usage is the memory working set. 13.2. API endpoints The following API endpoints are available: /apis/metrics.k8s.io/v1beta1/pods GET : list objects of kind PodMetrics /apis/metrics.k8s.io/v1beta1/namespaces/{namespace}/pods GET : list objects of kind PodMetrics /apis/metrics.k8s.io/v1beta1/namespaces/{namespace}/pods/{name} GET : read the specified PodMetrics 13.2.1. /apis/metrics.k8s.io/v1beta1/pods HTTP method GET Description list objects of kind PodMetrics Table 13.1. HTTP responses HTTP code Response body 200 - OK PodMetricsList schema 13.2.2. /apis/metrics.k8s.io/v1beta1/namespaces/{namespace}/pods HTTP method GET Description list objects of kind PodMetrics Table 13.2. HTTP responses HTTP code Response body 200 - OK PodMetricsList schema 13.2.3. /apis/metrics.k8s.io/v1beta1/namespaces/{namespace}/pods/{name} Table 13.3. Global path parameters Parameter Type Description name string name of the PodMetrics HTTP method GET Description read the specified PodMetrics Table 13.4. HTTP responses HTTP code Response body 200 - OK PodMetrics schema
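As an illustration of how these endpoints are typically consumed, the following oc calls read the metrics API paths listed above directly. This is a sketch only; <namespace> and <pod_name> are placeholders, and the metrics API must be available in the cluster.

# Read the PodMetrics object for a single pod through the raw API path
oc get --raw /apis/metrics.k8s.io/v1beta1/namespaces/<namespace>/pods/<pod_name>

# List PodMetrics objects in a namespace
oc get --raw /apis/metrics.k8s.io/v1beta1/namespaces/<namespace>/pods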
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/monitoring_apis/podmetrics-metrics-k8s-io-v1beta1
|
Chapter 17. Applying security context to Streams for Apache Kafka pods and containers
|
Chapter 17. Applying security context to Streams for Apache Kafka pods and containers Security context defines constraints on pods and containers. By specifying a security context, pods and containers only have the permissions they need. For example, permissions can control runtime operations or access to resources. 17.1. Handling of security context by OpenShift platform Handling of security context depends on the tooling of the OpenShift platform you are using. For example, OpenShift uses built-in security context constraints (SCCs) to control permissions. SCCs are the settings and strategies that control the security features a pod has access to. By default, OpenShift injects security context configuration automatically. In most cases, this means you don't need to configure security context for the pods and containers created by the Cluster Operator. However, you can still create and manage your own SCCs. For more information, see the OpenShift documentation .
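For illustration only, the following generic Kubernetes manifest shows the kind of constraints a security context expresses at the pod and container level. It is not a Streams for Apache Kafka resource, and on OpenShift the SCC admission machinery normally validates or injects equivalent settings for you; the names and values are examples, not recommendations.

apiVersion: v1
kind: Pod
metadata:
  name: security-context-example
spec:
  securityContext:            # pod-level constraints
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      securityContext:        # container-level constraints
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true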
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/assembly-security-providers-str
|
Chapter 6. Resolved issues
|
Chapter 6. Resolved issues The following issue is resolved for this release: Issue Description JWS-3336 JWS 6.0.2 is missing redhat version suffix in ServerInfo For details of any security fixes in this release, see the errata links in Advisories related to this release .
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_3_release_notes/resolved_issues
|
Chapter 10. Observability in JBoss EAP
|
Chapter 10. Observability in JBoss EAP If you're a developer or system administrator, observability is a set of practices and technologies you can use to determine, based on certain signals from your application, the location and source of a problem in your application. The most common signals are metrics, events, and tracing. JBoss EAP uses OpenTelemetry for observability . 10.1. OpenTelemetry in JBoss EAP OpenTelemetry is a set of tools, application programming interfaces (APIs), and software development kits (SDKs) you can use to instrument, generate, collect, and export telemetry data for your applications. Telemetry data includes metrics, logs, and traces. Analyzing an application's telemetry data helps you to improve your application's performance. JBoss EAP provides OpenTelemetry capability through the opentelemetry subsystem. Note Red Hat JBoss Enterprise Application Platform 7.4 provides only OpenTelemetry tracing capabilities. Important OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . Additional resources OpenTelemetry Documentation 10.2. OpenTelemetry configuration in JBoss EAP You configure a number of aspects of OpenTelemetry in JBoss EAP using the opentelemetry subsystem. These include exporter, span processor, and sampler. exporter To analyze and visualize traces and metrics, you export them to a collector such as Jaeger. You can configure JBoss EAP to use either Jaeger or any collector that supports the OpenTelemetry protocol (OTLP). span processor You can configure the span processor to export spans either as they are produced or in batches. You can also configure the number of traces to export. sampler You can configure the number of traces to record by configuring the sampler. Example configuration The following XML is an example of the full OpenTelemetry configuration, including default values. JBoss EAP does not persist the default values when you make changes, so your configuration might look different. <subsystem xmlns="urn:wildfly:opentelemetry:1.0" service-name="example"> <exporter type="jaeger" endpoint="http://localhost:14250"/> <span-processor type="batch" batch-delay="4500" max-queue-size="128" max-export-batch-size="512" export-timeout="45"/> <sampler type="on"/> </subsystem> Note You cannot use an OpenShift route object to connect with a Jaeger endpoint. Instead, use http:// <ip_address> : <port> or http:// <service_name> : <port> . Additional resources OpenTelemetry subsystem attributes 10.3. OpenTelemetry tracing in JBoss EAP JBoss EAP provides OpenTelemetry tracing to help you track the progress of user requests as they pass through different parts of your application. By analyzing traces, you can improve your application's performance and debug availability issues. OpenTelemetry tracing consists of the following components: Trace A collection of operations that a request goes through in an application. Span A single operation within a trace. It provides request, error, and duration (RED) metrics and contains a span context. 
Span context A set of unique identifiers that represents a request that the containing span is a part of. JBoss EAP automatically traces REST calls to your Jakarta RESTful Web Services applications and container-managed Jakarta RESTful Web Services client invocations. JBoss EAP traces REST calls implicitly as follows: For each incoming request: JBoss EAP extracts the span context from the request. JBoss EAP starts a new span, then closes it when the request is completed. For each outgoing request: JBoss EAP injects span context into the request. JBoss EAP starts a new span, then closes it when the request is completed. In addition to implicit tracing, you can create custom spans by injecting a Tracer instance into your application for granular tracing. Important If you see duplicate traces exported for REST calls, disable the microprofile-opentracing-smallrye subsystem. For information about disabling the microprofile-opentracing-smallrye , see Removing the microprofile-opentracing-smallrye subsystem . Additional resources Using Jaeger to observe the OpenTelemetry traces for an application OpenTelemetry application development in JBoss EAP 10.4. Enabling OpenTelemetry tracing in JBoss EAP To use OpenTelemetry tracing in JBoss EAP you must first enable the opentelemetry subsystem. Prerequisites You have installed JBoss EAP XP. Procedure Add the OpenTelemetry extension using the management CLI. Enable the opentelemetry subsystem using the management CLI. Reload JBoss EAP. Additional resources Configuring the opentelemetry subsystem 10.5. Configuring the opentelemetry subsystem You can configure the opentelemetry subsystem to set different aspects of tracing. Configure these based on the collector you use for observing the traces. Prerequisites You have enabled the opentelemetry subsystem. For more information, see Enabling OpenTelemetry tracing in JBoss EAP . Procedure Set the exporter type for the traces. Syntax Example Set the endpoint at which to export the traces. Syntax Example Set the service name under which the traces are exported. Syntax Example Additional resources Using Jaeger to observe the OpenTelemetry traces for an application 10.6. Using Jaeger to observe the OpenTelemetry traces for an application JBoss EAP automatically and implicitly traces REST calls to Jakarta RESTful Web Services applications. You do not need to add any configuration to your Jakarta RESTful Web Services application or configure the opentelemetry subsystem. The following procedure demonstrates how to observe traces for the helloworld-rs quickstart in the Jaeger console. Prerequisites You have installed Docker. For more information, see Get Docker . You have downloaded the helloworld-rs quickstart. The quickstart is available at helloworld-rs . You have configured the opentelemetry subsystem. For more information, see Configuring the opentelemetry subsystem . Procedure Start the Jaeger console using its Docker image. Use Maven to deploy the helloworld-rs quickstart from its root directory. In a web browser, access the quickstart at http://localhost:8080/helloworld-rs/ , then click any link. In a web browser, open the Jaeger console at http://localhost:16686/search . hello-world.rs is listed under Service . Select hello-world.rs and click Find Traces . The details of the trace for hello-world.rs are listed. Additional resources OpenTelemetry application development in JBoss EAP 10.7. 
OpenTelemetry tracing application development Although JBoss EAP automatically and implicitly traces REST calls to Jakarta RESTful Web Services applications, you can create custom spans from your application for granular tracing. A span is a single operation within a trace. You can create a span when, for example, a resource is defined, a method is called, and so on, in your application. You create custom traces in your application by injecting a Tracer instance. 10.7.1. Configuring a Maven project for OpenTelemetry tracing To create an OpenTelemetry tracing application, create a Maven project with the required dependencies and directory structure. Prerequisites You have installed Maven. For more information, see Downloading Apache Maven . You have configured your Maven repository for the latest release. For information about installing the latest Maven repository patch, see Maven and the JBoss EAP microprofile maven repository . Procedure In the CLI, use the mvn command to set up a Maven project. This command creates the directory structure for the project and the pom.xml configuration file. Syntax Example Navigate to the application root directory. Syntax Example Update the generated pom.xml file. Set the following properties: <properties> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <failOnMissingWebXml>false</failOnMissingWebXml> <version.server.bom>4.0.0.GA</version.server.bom> <version.wildfly-jar.maven.plugin>6.1.1.Final</version.wildfly-jar.maven.plugin> </properties> Set the following dependencies: <dependencies> <dependency> <groupId>jakarta.enterprise</groupId> <artifactId>jakarta.enterprise.cdi-api</artifactId> <version>2.0.2</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.jboss.spec.javax.ws.rs</groupId> <artifactId>jboss-jaxrs-api_2.1_spec</artifactId> <version>2.0.2.Final</version> <scope>provided</scope> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-api</artifactId> <version>1.5.0</version> <scope>provided</scope> </dependency> </dependencies> Set the following build configuration to use mvn wildfly:deploy to deploy the application: <build> <!-- Set the name of the archive --> <finalName>USD{project.artifactId}</finalName> <plugins> <!-- Allows to use mvn wildfly:deploy --> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> </plugin> </plugins> </build> Verification In the application root directory, enter the following command: You get an output similar to the following: You can now create an OpenTelemetry tracing application. Additional resources Creating applications that create custom spans 10.7.2. Creating applications that create custom spans The following procedure demonstrates how to create an application that can create two custom spans like these: prepare-hello - When the method getHello() in the application is called. process-hello - When the value hello is assigned to a new String object hello . This procedure also demonstrates how to view these spans in a Jaeger console. <application_root> in the procedure denotes the directory that contains the pom.xml file, which contains the Maven configuration for your application. Prerequisites You have installed Docker. For more information, see Get Docker . You have created a Maven project. For more information, see Configuring a Maven project for OpenTelemetry tracing . You have configured the opentelemetry subsystem. 
For more information, see Configuring the opentelemetry subsystem . Procedure In the <application_root> , create a directory to store the Java files. Syntax Example Navigate to the new directory. Syntax Example Create a JakartaRestApplication.java file with the following content. This JakartaRestApplication class declares the application as a Jakarta RESTful Web Services application. package com.example.opentelemetry; import javax.ws.rs.ApplicationPath; import javax.ws.rs.core.Application; @ApplicationPath("/") public class JakartaRestApplication extends Application { } Create an ExplicitlyTracedBean.java file with the following content for the class ExplicitlyTracedBean . This class creates custom spans by injecting a Tracer class. package com.example.opentelemetry; import javax.enterprise.context.RequestScoped; import javax.inject.Inject; import io.opentelemetry.api.trace.Span; import io.opentelemetry.api.trace.Tracer; @RequestScoped public class ExplicitlyTracedBean { @Inject private Tracer tracer; 1 public String getHello() { Span prepareHelloSpan = tracer.spanBuilder("prepare-hello").startSpan(); 2 prepareHelloSpan.makeCurrent(); String hello = "hello"; Span processHelloSpan = tracer.spanBuilder("process-hello").startSpan(); 3 processHelloSpan.makeCurrent(); hello = hello.toUpperCase(); processHelloSpan.end(); prepareHelloSpan.end(); return hello; } } 1 Inject a Tracer class to create custom spans. 2 Create a span called prepare-hello to indicate that the method getHello() was called. 3 Create a span called process-hello to indicate that the value hello was assigned to a new String object called hello . Create a TracedResource.java file with the following content for the TracedResource class. This file injects the ExplicitlyTracedBean class and declares two endpoints: traced and cdi-trace . package com.example.opentelemetry; import javax.enterprise.context.RequestScoped; import javax.inject.Inject; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.Produces; import javax.ws.rs.core.MediaType; @Path("/hello") @RequestScoped public class TracedResource { @Inject private ExplicitlyTracedBean tracedBean; @GET @Path("/traced") @Produces(MediaType.TEXT_PLAIN) public String hello() { return "hello"; } @GET @Path("/cdi-trace") @Produces(MediaType.TEXT_PLAIN) public String cdiHello() { return tracedBean.getHello(); } } Navigate to the application root directory. Syntax Example Compile and deploy the application with the following command: Start the Jaeger console. In a browser, navigate to http://localhost:8080/simple-tracing-example/hello/cdi-trace . In a browser, open the Jaeger console at http://localhost:16686/search . In the Jaeger console, select JBoss EAP XP and click Find Traces . Click 3 Spans . The Jaeger console displays the following traces: 1 This is the span for the automatic implicit trace. 2 The custom span prepare-hello indicates that the method getHello() was called. 3 The custom span process-hello indicates that the value hello was assigned to a new String object hello . Whenever you access the application endpoint at http://localhost:8080/simple-tracing-example/hello/cdi-trace , a new trace is created with all the child spans.
|
[
"<subsystem xmlns=\"urn:wildfly:opentelemetry:1.0\" service-name=\"example\"> <exporter type=\"jaeger\" endpoint=\"http://localhost:14250\"/> <span-processor type=\"batch\" batch-delay=\"4500\" max-queue-size=\"128\" max-export-batch-size=\"512\" export-timeout=\"45\"/> <sampler type=\"on\"/> </subsystem>",
"/extension=org.wildfly.extension.opentelemetry:add",
"/subsystem=opentelemetry:add",
"reload",
"/subsystem=opentelemetry:write-attribute(name=exporter-type, value= <exporter_type> )",
"/subsystem=opentelemetry:write-attribute(name=exporter-type, value=jaeger)",
"/subsystem=opentelemetry:write-attribute(name=endpoint, value= <URL:port> )",
"/subsystem=opentelemetry:write-attribute(name=endpoint, value=http:localhost:14250)",
"/subsystem=opentelemetry:write-attribute(name=service-name, value= <service_name> )",
"/subsystem=opentelemetry:write-attribute(name=service-name, value=exampleOpenTelemetryService)",
"docker run -d --name jaeger -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 -p 5775:5775/udp -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 -p 16686:16686 -p 14268:14268 -p 14250:14250 -p 9411:9411 jaegertracing/all-in-one:1.29",
"mvn clean install wildfly:deploy",
"mvn archetype:generate -DgroupId= <group-to-which-your-application-belongs> -DartifactId= <name-of-your-application> -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false",
"mvn archetype:generate -DgroupId=com.example.opentelemetry -DartifactId=simple-tracing-example -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false",
"cd <name-of-your-application>",
"cd simple-tracing-example",
"<properties> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <failOnMissingWebXml>false</failOnMissingWebXml> <version.server.bom>4.0.0.GA</version.server.bom> <version.wildfly-jar.maven.plugin>6.1.1.Final</version.wildfly-jar.maven.plugin> </properties>",
"<dependencies> <dependency> <groupId>jakarta.enterprise</groupId> <artifactId>jakarta.enterprise.cdi-api</artifactId> <version>2.0.2</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.jboss.spec.javax.ws.rs</groupId> <artifactId>jboss-jaxrs-api_2.1_spec</artifactId> <version>2.0.2.Final</version> <scope>provided</scope> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-api</artifactId> <version>1.5.0</version> <scope>provided</scope> </dependency> </dependencies>",
"<build> <!-- Set the name of the archive --> <finalName>USD{project.artifactId}</finalName> <plugins> <!-- Allows to use mvn wildfly:deploy --> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> </plugin> </plugins> </build>",
"mvn install",
"[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.440 s [INFO] Finished at: 2021-12-27T14:45:12+05:30 [INFO] ------------------------------------------------------------------------",
"mkdir -p src/main/java/com/example/opentelemetry",
"mkdir -p src/main/java/com/example/opentelemetry",
"cd src/main/java/com/example/opentelemetry",
"cd src/main/java/com/example/opentelemetry",
"package com.example.opentelemetry; import javax.ws.rs.ApplicationPath; import javax.ws.rs.core.Application; @ApplicationPath(\"/\") public class JakartaRestApplication extends Application { }",
"package com.example.opentelemetry; import javax.enterprise.context.RequestScoped; import javax.inject.Inject; import io.opentelemetry.api.trace.Span; import io.opentelemetry.api.trace.Tracer; @RequestScoped public class ExplicitlyTracedBean { @Inject private Tracer tracer; 1 public String getHello() { Span prepareHelloSpan = tracer.spanBuilder(\"prepare-hello\").startSpan(); 2 prepareHelloSpan.makeCurrent(); String hello = \"hello\"; Span processHelloSpan = tracer.spanBuilder(\"process-hello\").startSpan(); 3 processHelloSpan.makeCurrent(); hello = hello.toUpperCase(); processHelloSpan.end(); prepareHelloSpan.end(); return hello; } }",
"package com.example.opentelemetry; import javax.enterprise.context.RequestScoped; import javax.inject.Inject; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.Produces; import javax.ws.rs.core.MediaType; @Path(\"/hello\") @RequestScoped public class TracedResource { @Inject private ExplicitlyTracedBean tracedBean; @GET @Path(\"/traced\") @Produces(MediaType.TEXT_PLAIN) public String hello() { return \"hello\"; } @GET @Path(\"/cdi-trace\") @Produces(MediaType.TEXT_PLAIN) public String cdiHello() { return tracedBean.getHello(); } }",
"cd <path_to_application_root> / <application_root>",
"cd ~/applications/simple-tracing-example",
"mvn clean package wildfly:deploy",
"docker run -d --name jaeger -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 -p 5775:5775/udp -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 -p 16686:16686 -p 14268:14268 -p 14250:14250 -p 9411:9411 jaegertracing/all-in-one:1.29",
"|GET /hello/cdi-trace 1 - | prepare-hello 2 - | process-hello 3"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_4.0.0/assembly-observability-in-jboss-eap_openid-connect-in-jboss-eap
|
2.8.9.2.3. IPTables Parameter Options
|
2.8.9.2.3. IPTables Parameter Options Certain iptables commands, including those used to add, append, delete, insert, or replace rules within a particular chain, require various parameters to construct a packet filtering rule. -c - Resets the counters for a particular rule. This parameter accepts the PKTS and BYTES options to specify which counter to reset. -d - Sets the destination hostname, IP address, or network of a packet that matches the rule. When matching a network, the following IP address/netmask formats are supported: N.N.N.N / M.M.M.M - Where N.N.N.N is the IP address range and M.M.M.M is the netmask. N.N.N.N / M - Where N.N.N.N is the IP address range and M is the bitmask. -f - Applies this rule only to fragmented packets. You can use the exclamation point character ( ! ) option before this parameter to specify that only unfragmented packets are matched. Note Distinguishing between fragmented and unfragmented packets is desirable, despite fragmented packets being a standard part of the IP protocol. Originally designed to allow IP packets to travel over networks with differing frame sizes, these days fragmentation is more commonly used to generate DoS attacks using malformed packets. It's also worth noting that IPv6 does not allow routers to fragment packets; in IPv6, fragmentation is performed only by the sending host. -i - Sets the incoming network interface, such as eth0 or ppp0 . With iptables , this optional parameter may only be used with the INPUT and FORWARD chains when used with the filter table and the PREROUTING chain with the nat and mangle tables. This parameter also supports the following special options: Exclamation point character ( ! ) - Reverses the directive, meaning any specified interfaces are excluded from this rule. Plus character ( + ) - A wildcard character used to match all interfaces that match the specified string. For example, the parameter -i eth+ would apply this rule to any Ethernet interfaces but exclude any other interfaces, such as ppp0 . If the -i parameter is used but no interface is specified, then every interface is affected by the rule. -j - Jumps to the specified target when a packet matches a particular rule. The standard targets are ACCEPT , DROP , QUEUE , and RETURN . Extended options are also available through modules loaded by default with the Red Hat Enterprise Linux iptables RPM package. Valid targets in these modules include LOG , MARK , and REJECT , among others. Refer to the iptables man page for more information about these and other targets. This option can also be used to direct a packet matching a particular rule to a user-defined chain outside of the current chain so that other rules can be applied to the packet. If no target is specified, the packet moves past the rule with no action taken. The counter for this rule, however, increases by one. -o - Sets the outgoing network interface for a rule. This option is only valid for the OUTPUT and FORWARD chains in the filter table, and the POSTROUTING chain in the nat and mangle tables. This parameter accepts the same options as the incoming network interface parameter ( -i ). -p <protocol> - Sets the IP protocol affected by the rule. This can be either icmp , tcp , udp , or all , or it can be a numeric value, representing one of these or a different protocol. You can also use any protocols listed in the /etc/protocols file. The " all " protocol means the rule applies to every supported protocol. If no protocol is listed with this rule, it defaults to " all ". 
-s - Sets the source for a particular packet using the same syntax as the destination ( -d ) parameter.
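To tie several of these parameters together, here is a hedged example; the interface names and addresses are placeholders, and the rules are illustrative rather than a recommended policy.

# Accept TCP traffic arriving on eth0 from 192.0.2.0/24 that is destined for 10.0.0.1
iptables -A INPUT -i eth0 -p tcp -s 192.0.2.0/24 -d 10.0.0.1 -j ACCEPT

# Drop fragmented packets arriving on ppp0 (the -f parameter described above)
iptables -A INPUT -i ppp0 -f -j DROP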
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-command_options_for_iptables-iptables_parameter_options
|
Chapter 3. Installing a cluster quickly on Alibaba Cloud
|
Chapter 3. Installing a cluster quickly on Alibaba Cloud In OpenShift Container Platform version 4.15, you can install a cluster on Alibaba Cloud that uses the default configuration options. Important Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You registered your domain . If you use a firewall, you configured it to allow the sites that your cluster requires access to. You have created the required Alibaba Cloud resources . If the cloud Resource Access Management (RAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain Resource Access Management (RAM) credentials . 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. 
Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. 
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Alibaba Cloud. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select alibabacloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Provide a descriptive name for your cluster. Installing the cluster into Alibaba Cloud requires that the Cloud Credential Operator (CCO) operate in manual mode. Modify the install-config.yaml file to set the credentialsMode parameter to Manual : Example install-config.yaml configuration file with credentialsMode set to Manual apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled ... 1 Add this line to set the credentialsMode to Manual . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 3.6. Generating the required installation manifests You must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. 
Procedure Generate the manifests by running the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the directory in which the installation program creates files. 3.7. Creating credentials for OpenShift Container Platform components with the ccoctl tool You can use the OpenShift Container Platform Cloud Credential Operator (CCO) utility to automate the creation of Alibaba Cloud RAM users and policies for each in-cluster component. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Created a RAM user with sufficient permission to create the OpenShift Container Platform cluster. Added the AccessKeyID ( access_key_id ) and AccessKeySecret ( access_key_secret ) of that RAM user into the ~/.alibabacloud/credentials file on your local computer. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: Run the following command to use the tool: USD ccoctl alibabacloud create-ram-users \ --name <name> \ 1 --region=<alibaba_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> 4 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the Alibaba Cloud region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Specify the directory where the generated component credentials secrets will be placed. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. 
Example output 2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml ... Note A RAM user can have up to two AccessKeys at the same time. If you run ccoctl alibabacloud create-ram-users more than twice, the previously generated manifests secret becomes stale and you must reapply the newly generated secrets. Verify that the OpenShift Container Platform secrets are created: USD ls <path_to_ccoctl_output_dir>/manifests Example output openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml You can verify that the RAM users and policies are created by querying Alibaba Cloud. For more information, refer to Alibaba Cloud documentation on listing RAM users and policies. Copy the generated credential files to the target manifests directory: USD cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation_dir>/manifests/ where: <path_to_ccoctl_output_dir> Specifies the directory created by the ccoctl alibabacloud create-ram-users command. <path_to_installation_dir> Specifies the directory in which the installation program creates files. 3.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... 
INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. 
Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.11. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. 3.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. See About remote health monitoring for more information about the Telemetry service 3.13. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting .
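As a quick recap of the login and verification steps above, the following shell sketch is one possible way to confirm cluster access after the installation completes. It assumes the same <installation_directory> used earlier; the oc get clusteroperators check is an additional, commonly used health check and is not part of the procedure above.
USD export KUBECONFIG=<installation_directory>/auth/kubeconfig
USD oc whoami                 # expected output: system:admin
USD oc get clusteroperators   # all cluster Operators should report as Available
USD oc get routes -n openshift-console | grep 'console-openshift'   # route for the kubeadmin web console login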
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl alibabacloud create-ram-users --name <name> \\ 1 --region=<alibaba_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> 4",
"2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"ls <path_to_ccoctl_output_dir>/manifests",
"openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_alibaba/installing-alibaba-default
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_external_red_hat_utilities_with_identity_management/proc_providing-feedback-on-red-hat-documentation_using-external-red-hat-utilities-with-idm
|
Chapter 6. Clustering in Web Applications
|
Chapter 6. Clustering in Web Applications 6.1. Session Replication 6.1.1. About HTTP Session Replication Session replication ensures that client sessions of distributable applications are not disrupted by failovers of nodes in a cluster. Each node in the cluster shares information about ongoing sessions, and can take over sessions if a node disappears. Session replication is the mechanism by which mod_cluster, mod_jk, mod_proxy, ISAPI, and NSAPI clusters provide high availability. 6.1.2. Enable Session Replication in Your Application To take advantage of JBoss EAP High Availability (HA) features and enable clustering of your web application, you must configure your application to be distributable. If your application is not marked as distributable, its sessions will never be distributed. Make your Application Distributable Add the <distributable/> element inside the <web-app> tag of your application's web.xml descriptor file: Example: Minimum Configuration for a Distributable Application <?xml version="1.0"?> <web-app xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_3_0.xsd" version="3.0"> <distributable/> </web-app> , if desired, modify the default replication behavior. If you want to change any of the values affecting session replication, you can override them inside a <replication-config> element inside <jboss-web> in an application's WEB-INF/jboss-web.xml file. For a given element, only include it if you want to override the defaults. Example: <replication-config> Values <jboss-web xmlns="http://www.jboss.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-web_10_0.xsd"> <replication-config> <replication-granularity>SESSION</replication-granularity> </replication-config> </jboss-web> The <replication-granularity> parameter determines the granularity of data that is replicated. It defaults to SESSION , but can be set to ATTRIBUTE to increase performance on sessions where most attributes remain unchanged. Valid values for <replication-granularity> can be : SESSION : The default value. The entire session object is replicated if any attribute is dirty. This policy is required if an object reference is shared by multiple session attributes. The shared object references are maintained on remote nodes since the entire session is serialized in one unit. ATTRIBUTE : This is only for dirty attributes in the session and for some session data, such as the last-accessed timestamp. Immutable Session Attributes For JBoss EAP 7, session replication is triggered when the session is mutated or when any mutable attribute of the session is accessed. 
Session attributes are assumed to be mutable unless one of the following is true: The value is a known immutable value: null java.util.Collections.EMPTY_LIST , EMPTY_MAP , EMPTY_SET The value type is or implements a known immutable type: java.lang.Boolean , Character , Byte , Short , Integer , Long , Float , Double java.lang.Class , Enum , StackTraceElement , String java.io.File , java.nio.file.Path java.math.BigDecimal , BigInteger , MathContext java.net.Inet4Address , Inet6Address , InetSocketAddress , URI , URL java.security.Permission java.util.Currency , Locale , TimeZone , UUID java.time.Clock , Duration , Instant , LocalDate , LocalDateTime , LocalTime , MonthDay , Period , Year , YearMonth , ZoneId , ZoneOffset , ZonedDateTime java.time.chrono.ChronoLocalDate , Chronology , Era java.time.format.DateTimeFormatter , DecimalStyle java.time.temporal.TemporalField , TemporalUnit , ValueRange , WeekFields java.time.zone.ZoneOffsetTransition , ZoneOffsetTransitionRule , ZoneRules The value type is annotated with: @org.wildfly.clustering.web.annotation.Immutable @net.jcip.annotations.Immutable 6.1.3. Session attribute marshalling Minimizing the replication or persistence payload for individual session attributes can directly improve performance by reducing the number of bytes that is sent over the network or persisted to storage. By using a web application, you can optimize the marshalling of a session attribute in the following ways: You can customize Java Development Kit (JDK) serialization logic. You can implement a custom externalizer. An externalizer implements the org.wildfly.clustering.marshalling.Externalizer interface, which dictates the marshalling of a class. An externalizer not only reads or writes the state of an object directly from or to an input/output stream, but also performs the following actions: Allows an application to store an object in the session that does not implement java.io.Serializable Eliminates the need to serialize the class descriptor of an object along with its state Example public class MyObjectExternalizer implements org.wildfly.clustering.marshalling.Externalizer<MyObject> { @Override public Class<MyObject> getTargetClass() { return MyObject.class; } @Override public void writeObject(ObjectOutput output, MyObject object) throws IOException { // Write object state to stream } @Override public MyObject readObject(ObjectInput input) throws IOException, ClassNotFoundException { // Construct and read object state from stream return ...; } } Note The service loader mechanism dynamically loads the externalizers during deployment. Implementations must be enumerated within a file named /META-INF/services/org.wildfly.clustering.marshalling.Externalizer . 6.2. HTTP Session Passivation and Activation 6.2.1. About HTTP Session Passivation and Activation Passivation is the process of controlling memory usage by removing relatively unused sessions from memory while storing them in persistent storage. Activation is when passivated data is retrieved from persisted storage and put back into memory. Passivation occurs at different times in an HTTP session's lifetime: When the container requests the creation of a new session, if the number of currently active sessions exceeds a configurable limit, the server attempts to passivate some sessions to make room for the new one. When a web application is deployed and a backup copy of sessions active on other servers is acquired by the newly deploying web application's session manager, sessions might be passivated. 
A session is passivated if the number of active sessions exceeds a configurable maximum. Sessions are always passivated using a Least Recently Used (LRU) algorithm. 6.2.2. Configure HTTP Session Passivation in Your Application HTTP session passivation is configured in your application's WEB-INF/jboss-web.xml and META-INF/jboss-web.xml file. Example: jboss-web.xml File <jboss-web xmlns="http://www.jboss.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-web_10_0.xsd"> <max-active-sessions>20</max-active-sessions> </jboss-web> The <max-active-sessions> element dictates the maximum number of active sessions allowed, and is used to enable session passivation. If session creation would cause the number of active sessions to exceed <max-active-sessions> , then the oldest session known to the session manager will passivate to make room for the new session. Note The total number of sessions in memory includes sessions replicated from other cluster nodes that are not being accessed on this node. Take this into account when setting <max-active-sessions> . The number of sessions replicated from other nodes also depends on whether REPL or DIST cache mode is enabled. In REPL cache mode, each session is replicated to each node. In DIST cache mode, each session is replicated only to the number of nodes specified by the owners parameter. See Configure the Cache Mode in the JBoss EAP Configuration Guide for information on configuring session cache modes. For example, consider an eight node cluster, where each node handles requests from 100 users. With REPL cache mode, each node would store 800 sessions in memory. With DIST cache mode enabled, and the default owners setting of 2 , each node stores 200 sessions in memory. 6.3. Public API for Clustering Services JBoss EAP 7 introduced a refined public clustering API for use by applications. The new services are designed to be lightweight, easily injectable, with no external dependencies. org.wildfly.clustering.group.Group The group service provides a mechanism to view the cluster topology for a JGroups channel, and to be notified when the topology changes. @Resource(lookup = "java:jboss/clustering/group/channel-name") private Group channelGroup; org.wildfly.clustering.dispatcher.CommandDispatcher The CommandDispatcherFactory service provides a mechanism to create a dispatcher for executing commands on nodes in the cluster. The resulting CommandDispatcher is a command-pattern analog to the reflection-based GroupRpcDispatcher from JBoss EAP releases. @Resource(lookup = "java:jboss/clustering/dispatcher/channel-name") private CommandDispatcherFactory factory; public void foo() { String context = "Hello world!"; // Exclude node1 and node3 from the executeOnCluster try (CommandDispatcher<String> dispatcher = this.factory.createCommandDispatcher(context)) { dispatcher.executeOnGroup(new StdOutCommand(), node1, node3); } } public static class StdOutCommand implements Command<Void, String> { @Override public Void execute(String context) { System.out.println(context); return null; } } 6.4. HA Singleton Service A clustered singleton service, also known as a high-availability (HA) singleton, is a service deployed on multiple nodes in a cluster. The service is provided on only one of the nodes. The node running the singleton service is usually called the master node. 
When the master node either fails or shuts down, another master is selected from the remaining nodes and the service is restarted on the new master. Other than a brief interval when one master has stopped and another has yet to take over, the service is provided by one, and only one, node. HA Singleton ServiceBuilder API JBoss EAP 7 introduced a new public API for building singleton services that simplifies the process significantly. The SingletonServiceConfigurator implementation installs its services so they will start asynchronously, preventing deadlocking of the Modular Service Container (MSC). HA Singleton Service Election Policies If there is a preference for which node should start the HA singleton, you can set the election policy in the ServiceActivator class. JBoss EAP provides two election policies: Simple election policy The simple election policy selects a master node based on the relative age. The required age is configured in the position property, which is the index in the list of available nodes, where: position = 0 - refers to the oldest node. This is the default. position = 1 - refers to the 2nd oldest, and so on. Position can also be negative to indicate the youngest nodes. position = -1 - refers to the youngest node. position = -2 - refers to the 2nd youngest node, and so on. Random election policy The random election policy elects a random member to be the provider of a singleton service. HA Singleton Service Preferences An HA singleton service election policy may optionally specify one or more preferred servers. This preferred server, when available, will be the master for all singleton applications under that policy. You can define the preferences either through the node name or through the outbound socket binding name. Note Node preferences always take precedence over the results of an election policy. By default, JBoss EAP high availability configurations provide a simple election policy named default with no preferred server. You can set the preference by creating a custom policy and defining the preferred server. Quorum A potential issue with a singleton service arises when there is a network partition. In this situation, also known as the split-brain scenario, subsets of nodes cannot communicate with each other. Each set of servers consider all servers from the other set failed and continue to work as the surviving cluster. This might result in data consistency issues. JBoss EAP allows you to specify a quorum in the election policy to prevent the split-brain scenario. The quorum specifies a minimum number of nodes to be present before a singleton provider election can take place. A typical deployment scenario uses a quorum of N/2 + 1, where N is the anticipated cluster size. This value can be updated at runtime, and will immediately affect any active singleton services. HA Singleton Service Election Listener After electing a new primary singleton service provider, any registered SingletonElectionListener is triggered, notifying every member of the cluster about the new primary provider. The following example illustrates the usage of SingletonElectionListener : public class MySingletonElectionListener implements SingletonElectionListener { @Override public void elected(List<Node> candidates, Node primary) { // ... 
} } public class MyServiceActivator implements ServiceActivator { @Override public void activate(ServiceActivatorContext context) { String containerName = "foo"; SingletonElectionPolicy policy = new MySingletonElectionPolicy(); SingletonElectionListener listener = new MySingletonElectionListener(); int quorum = 3; ServiceName name = ServiceName.parse("my.service.name"); // Use a SingletonServiceConfiguratorFactory backed by default cache of "foo" container Supplier<SingletonServiceConfiguratorFactory> factory = new ActiveServiceSupplier<SingletonServiceConfiguratorFactory>(context.getServiceRegistry(), ServiceName.parse(SingletonDefaultCacheRequirement.SINGLETON_SERVICE_CONFIGURATOR_FACTORY.resolve(containerName))); ServiceBuilder<?> builder = factory.get().createSingletonServiceConfigurator(name) .electionListener(listener) .electionPolicy(policy) .requireQuorum(quorum) .build(context.getServiceTarget()); Service service = new MyService(); builder.setInstance(service).install(); } } Create an HA Singleton Service Application The following is an abbreviated example of the steps required to create and deploy an application as a cluster-wide singleton service. This example demonstrates a querying service that regularly queries a singleton service to get the name of the node on which it is running. To see the singleton behavior, you must deploy the application to at least two servers. It is transparent whether the singleton service is running on the same node or whether the value is obtained remotely. Create the SingletonService class. The getValue() method, which is called by the querying service, provides information about the node on which it is running. class SingletonService implements Service { private Logger LOG = Logger.getLogger(this.getClass()); private Node node; private Supplier<Group> groupSupplier; private Consumer<Node> nodeConsumer; SingletonService(Supplier<Group> groupSupplier, Consumer<Node> nodeConsumer) { this.groupSupplier = groupSupplier; this.nodeConsumer = nodeConsumer; } @Override public void start(StartContext context) { this.node = this.groupSupplier.get().getLocalMember(); this.nodeConsumer.accept(this.node); LOG.infof("Singleton service is started on node '%s'.", this.node); } @Override public void stop(StopContext context) { LOG.infof("Singleton service is stopping on node '%s'.", this.node); this.node = null; } } Create the querying service. It calls the getValue() method of the singleton service to get the name of the node on which it is running, and then writes the result to the server log. class QueryingService implements Service { private Logger LOG = Logger.getLogger(this.getClass()); private ScheduledExecutorService executor; @Override public void start(StartContext context) throws { LOG.info("Querying service is starting."); executor = Executors.newSingleThreadScheduledExecutor(); executor.scheduleAtFixedRate(() -> { Supplier<Node> node = new PassiveServiceSupplier<>(context.getController().getServiceContainer(), SingletonServiceActivator.SINGLETON_SERVICE_NAME); if (node.get() != null) { LOG.infof("Singleton service is running on this (%s) node.", node.get()); } else { LOG.infof("Singleton service is not running on this node."); } }, 5, 5, TimeUnit.SECONDS); } @Override public void stop(StopContext context) { LOG.info("Querying service is stopping."); executor.shutdown(); } } Implement the SingletonServiceActivator class to build and install both the singleton service and the querying service. 
public class SingletonServiceActivator implements ServiceActivator { private final Logger LOG = Logger.getLogger(SingletonServiceActivator.class); static final ServiceName SINGLETON_SERVICE_NAME = ServiceName.parse("org.jboss.as.quickstarts.ha.singleton.service"); private static final ServiceName QUERYING_SERVICE_NAME = ServiceName.parse("org.jboss.as.quickstarts.ha.singleton.service.querying"); @Override public void activate(ServiceActivatorContext serviceActivatorContext) { SingletonPolicy policy = new ActiveServiceSupplier<SingletonPolicy>( serviceActivatorContext.getServiceRegistry(), ServiceName.parse(SingletonDefaultRequirement.POLICY.getName())).get(); ServiceTarget target = serviceActivatorContext.getServiceTarget(); ServiceBuilder<?> builder = policy.createSingletonServiceConfigurator(SINGLETON_SERVICE_NAME).build(target); Consumer<Node> member = builder.provides(SINGLETON_SERVICE_NAME); Supplier<Group> group = builder.requires(ServiceName.parse("org.wildfly.clustering.default-group")); builder.setInstance(new SingletonService(group, member)).install(); serviceActivatorContext.getServiceTarget() .addService(QUERYING_SERVICE_NAME, new QueryingService()) .setInitialMode(ServiceController.Mode.ACTIVE) .install(); serviceActivatorContext.getServiceTarget().addService(QUERYING_SERVICE_NAME).setInstance(new QueryingService()).install(); LOG.info("Singleton and querying services activated."); } } Create a file in the META-INF/services/ directory named org.jboss.msc.service.ServiceActivator that contains the name of the ServiceActivator class, for example, org.jboss.as.quickstarts.ha.singleton.service.SingletonServiceActivator . See the ha-singleton-service quickstart that ships with JBoss EAP for the complete working example. This quickstart also provides a second example that demonstrates a singleton service that is installed with a backup service. The backup service is running on all nodes that are not elected to be running the singleton service. Finally, this quickstart also demonstrates how to configure a few different election policies. 6.5. HA singleton deployments You can deploy your application as a singleton deployment. When deployed to a group of clustered servers, a singleton deployment only deploys on a single node at any given time. If the node on which the deployment is active stops or fails, the deployment automatically starts on another node. A singleton deployment can be deployed on multiple nodes in the following situations: A group of clustered servers on a given node cannot establish a connection due to a configuration issue or a network issue. A non-HA configuration is used, such as the following configuration files: A standalone.xml configuration, which supports the Jakarta EE 8 Web Profile, or a standalone-full.xml configuration, which supports the Jakarta EE 8 Full Platform profile. A domain.xml configuration, which consists of either default domain profiles or full-domain profiles. Important Non-HA configurations do not have the singleton subsystem enabled by default. If you use this default configuration, the singleton-deployment.xml file is ignored to promote a successful deployment of an application. However, using a non-HA configuration can cause errors for the jboss-all.xml descriptor file. To avoid these errors, add the single deployment to the singleton-deployment.xml descriptor. You can then deploy the application using any profile type. The policies for controlling HA singleton behavior are managed by a new singleton subsystem. 
A deployment can either specify a specific singleton policy or use the default subsystem policy. A deployment identifies itself as a singleton deployment by using a META-INF/singleton-deployment.xml deployment descriptor, which is applied to an existing deployment as a deployment overlay. Alternatively, the requisite singleton configuration can be embedded within an existing jboss-all.xml file. Defining or choosing a singleton deployment To define a deployment as a singleton deployment, include a META-INF/singleton-deployment.xml descriptor in your application archive. If a Maven WAR plug-in already exists, you can migrate the plug-in to the META-INF directory: **/src/main/webapp/META-INF . Procedure If an application is deployed in an EAR file, move the singleton-deployment.xml descriptor or the singleton-deployment element, which is located within the jboss-all.xml file, to the top-level of the META-INF directory. Example: Singleton deployment descriptor <?xml version="1.0" encoding="UTF-8"?> <singleton-deployment xmlns="urn:jboss:singleton-deployment:1.0"/> To add an application deployment as a WAR file or a JAR file, move the singleton-deployment.xml descriptor to the top-level of the /META-INF directory in the application archive. Example: Singleton deployment descriptor with a specific singleton policy <?xml version="1.0" encoding="UTF-8"?> <singleton-deployment policy="my-new-policy" xmlns="urn:jboss:singleton-deployment:1.0"/> Optional: To define the singleton-deployment in a jboss-all.xml file, move the jboss-all.xml descriptor to the top-level of the /META-INF directory in the application archive. Example: Defining singleton-deployment in jboss-all.xml <?xml version="1.0" encoding="UTF-8"?> <jboss xmlns="urn:jboss:1.0"> <singleton-deployment xmlns="urn:jboss:singleton-deployment:1.0"/> </jboss> Optional: Use a singleton policy to define the singleton-deployment in the jboss-all.xml file. Move the jboss-all.xml descriptor to the top-level of the /META-INF directory in the application archive. Example: Defining singleton-deployment in jboss-all.xml with a specific singleton policy <?xml version="1.0" encoding="UTF-8"?> <jboss xmlns="urn:jboss:1.0"> <singleton-deployment policy="my-new-policy" xmlns="urn:jboss:singleton-deployment:1.0"/> </jboss> Creating a Singleton Deployment JBoss EAP provides two election policies: Simple election policy The simple-election-policy chooses a specific member, indicated by the position attribute, on which a given application will be deployed. The position attribute determines the index of the node to be elected from a list of candidates sorted by descending age, where 0 indicates the oldest node, 1 indicates the second oldest node, -1 indicates the youngest node, -2 indicates the second youngest node, and so on. If the specified position exceeds the number of candidates, a modulus operation is applied. 
Example: Create a New Singleton Policy with a simple-election-policy and Position Set to -1 , Using the Management CLI Note To set the newly created policy my-new-policy as the default, run this command: Example: Configure a simple-election-policy with Position Set to -1 Using standalone-ha.xml <subsystem xmlns="urn:jboss:domain:singleton:1.0"> <singleton-policies default="my-new-policy"> <singleton-policy name="my-new-policy" cache-container="server"> <simple-election-policy position="-1"/> </singleton-policy> </singleton-policies> </subsystem> Random election policy The random-election-policy chooses a random member on which a given application will be deployed. Example: Creating a New Singleton Policy with a random-election-policy , Using the Management CLI Example: Configure a random-election-policy Using standalone-ha.xml <subsystem xmlns="urn:jboss:domain:singleton:1.0"> <singleton-policies default="my-other-new-policy"> <singleton-policy name="my-other-new-policy" cache-container="server"> <random-election-policy/> </singleton-policy> </singleton-policies> </subsystem> Note The default-cache attribute of the cache-container needs to be defined before trying to add the policy. Without this, if you are using a custom cache container, you might end up getting error messages. Preferences Additionally, any singleton election policy can indicate a preference for one or more members of a cluster. Preferences can be defined either by using the node name or by using the outbound socket binding name. Node preferences always take precedent over the results of an election policy. Example: Indicate Preference in the Existing Singleton Policy Using the Management CLI Example: Create a New Singleton Policy with a simple-election-policy and name-preferences , Using the Management CLI Note To set the newly created policy my-new-policy as the default, run this command: Example: Configure a random-election-policy with socket-binding-preferences Using standalone-ha.xml <subsystem xmlns="urn:jboss:domain:singleton:1.0"> <singleton-policies default="my-other-new-policy"> <singleton-policy name="my-other-new-policy" cache-container="server"> <random-election-policy> <socket-binding-preferences>binding1 binding2 binding3 binding4</socket-binding-preferences> </random-election-policy> </singleton-policy> </singleton-policies> </subsystem> Define a Quorum Network partitions are particularly problematic for singleton deployments, since they can trigger multiple singleton providers for the same deployment to run at the same time. To defend against this scenario, a singleton policy can define a quorum that requires a minimum number of nodes to be present before a singleton provider election can take place. A typical deployment scenario uses a quorum of N/2 + 1, where N is the anticipated cluster size. This value can be updated at runtime, and will immediately affect any singleton deployments using the respective singleton policy. Example: Quorum Declaration in the standalone-ha.xml File <subsystem xmlns="urn:jboss:domain:singleton:1.0"> <singleton-policies default="default"> <singleton-policy name="default" cache-container="server" quorum="4"> <simple-election-policy/> </singleton-policy> </singleton-policies> </subsystem> Example: Quorum Declaration Using the Management CLI See the ha-singleton-deployment quickstart that ships with JBoss EAP for a complete working example of a service packaged in an application as a cluster-wide singleton using singleton deployments. 
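If you prefer to script the policy changes shown above rather than run them in an interactive management CLI session, the operations can be wrapped in a shell call to jboss-cli.sh. This is a minimal sketch, not part of the original procedure: EAP_HOME, the connection details, and the policy name are placeholders that must match your installation.
USD EAP_HOME/bin/jboss-cli.sh --connect --command="/subsystem=singleton/singleton-policy=default:write-attribute(name=quorum, value=4)"   # update the quorum at runtime
USD EAP_HOME/bin/jboss-cli.sh --connect --command="/subsystem=singleton/singleton-policy=default:read-attribute(name=quorum)"   # read the value back to verify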
Determine the Primary Singleton Service Provider Using the CLI The singleton subsystem exposes a runtime resource for each singleton deployment or service created from a particular singleton policy. This helps you determine the primary singleton provider using the CLI. You can view the name of the cluster member currently acting as the singleton provider. For example: You can also view the names of the nodes on which the singleton deployment or service is installed. For example: 6.6. Apache mod_cluster-manager Application 6.6.1. About mod_cluster-manager Application The mod_cluster-manager application is an administration web page, which is available on Apache HTTP Server. It is used for monitoring the connected worker nodes and performing various administration tasks, such as enabling or disabling contexts, and configuring the load-balancing properties of worker nodes in a cluster. Exploring mod_cluster-manager Application The mod_cluster-manager application can be used for performing various administration tasks on worker nodes. Figure - mod_cluster Administration Web Page [1] mod_cluster/1.3.1.Final : The version of the mod_cluster native library. [2] ajp://192.168.122.204:8099 : The protocol used (either AJP, HTTP, or HTTPS), hostname or IP address of the worker node, and the port. [3] jboss-eap-7.0-2 : The worker node's JVMRoute. [4] Virtual Host 1 : The virtual host(s) configured on the worker node. [5] Disable : An administration option that can be used to disable the creation of new sessions on the particular context. However, the ongoing sessions do not get disabled and remain intact. [6] Stop : An administration option that can be used to stop the routing of session requests to the context. The remaining sessions will fail over to another node unless the sticky-session-force property is set to true . [7] Enable Contexts Disable Contexts Stop Contexts : The operations that can be performed on the whole node. Selecting one of these options affects all the contexts of a node in all its virtual hosts. [8] Load balancing group (LBGroup) : The load-balancing-group property is set in the modcluster subsystem in JBoss EAP configuration to group all worker nodes into custom load balancing groups. Load balancing group (LBGroup) is an informational field that gives information about all set load balancing groups. If this field is not set, then all worker nodes are grouped into a single default load balancing group. Note This is only an informational field and thus cannot be used to set load-balancing-group property. The property has to be set in modcluster subsystem in JBoss EAP configuration. [9] Load (value) : The load factor on the worker node. The load factors are evaluated as below: Note For JBoss EAP 7.4, it is also possible to use Undertow as load balancer. 6.7. The distributable-web subsystem for Distributable Web Session Configurations The distributable-web subsystem facilitates flexible and distributable web session configurations. The subsystem defines a set of distributable web session management profiles. One of these profiles is designated as the default profile. It defines the default behavior of a distributable web application. For example: The default session management stores web session data within an Infinispan cache as the following example illustrates: The attributes used in this example and the possible values are: cache : A cache within the associated cache-container. The web application's cache is based on the configuration of this cache. 
If undefined, the default cache of the associated cache container is used. cache-container : A cache-container defined in the Infinispan subsystem into which session data is stored. granularity : Defines how the session manager maps a session into individual cache entries. The possible values are: SESSION : Stores all session attributes within a single cache entry. More expensive than the ATTRIBUTE granularity, but preserves any cross-attribute object references. ATTRIBUTE : Stores each session attribute within a separate cache entry. More efficient than the SESSION granularity, but does not preserve any cross-attribute object references. affinity : Defines the affinity that a web request must have for a server. The affinity of the associated web session determines the algorithm for generating the route to be appended onto the session ID. The possible values are: affinity=none : Web requests do not have any affinity to any node. Use this if web session state is not maintained within the application server. affinity=local : Web requests have an affinity to the server that last handled a request for a session. This option corresponds to the sticky session behavior. affinity=primary-owner : Web requests have an affinity to the primary owner of a session. This is the default affinity for this distributed session manager. Behaves the same as affinity=local if the backing cache is not distributed or replicated. affinity=ranked : Web requests have an affinity for the first available member in a list that include the primary and the backup owners, and for the member that last handled a session. affinity=ranked delimiter : The delimiter used to separate the individual routes within the encoded session identifier. Affinity=ranked max routes : The maximum number of routes to encode into the session identifier. You must enable ranked session affinity in your load balancer to have session affinity with multiple, ordered routes. For more information, see Enabling Ranked Session Affinity in Your Load Balancer in the Configuration Guide for JBoss EAP. You can override the default distributable session management behavior by referencing a session management profile by name or by providing a deployment-specific session management configuration. For more information, see Overide Default Distributable Session Management Behavior . 6.7.1. Storing Web Session Data In a Remote Red Hat Data Grid The distributable-web subsystem can be configured to store web session data in a remote Red Hat Data Grid cluster using the HotRod protocol. Storing web session data in a remote cluster allows the cache layer to scale independently of the application servers. Example configuration: The attributes used in this example and the possible values are: remote-cache-container : The remote cache container defined in the Infinispan subsystem to store the web session data. cache-configuration : Name of the cache configuration in Red Hat Data Grid cluster. The newly created deployment-specific caches are based on this configuration. If a remote cache configuration matching the name is not found, a new cache configuration is created in the remote container. granularity : Defines how the session manager maps a session into individual cache entries. The possible values are: SESSION : Stores all session attributes within a single cache entry. More expensive than the ATTRIBUTE granularity, but preserves any cross-attribute object references. ATTRIBUTE : Stores each session attribute within a separate cache entry. 
More efficient than the SESSION granularity, but does not preserve any cross-attribute object references.
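As a rough illustration only, the configuration described in this section could also be created from a shell by wrapping the management CLI add operation in jboss-cli.sh, as in the sketch below. EAP_HOME, the resource name, the remote cache container, and the cache configuration name are placeholders and must match the resources defined in your Infinispan subsystem and your Red Hat Data Grid cluster.
USD EAP_HOME/bin/jboss-cli.sh --connect --command="/subsystem=distributable-web/hotrod-session-management=ExampleRemoteSessionStore:add(remote-cache-container=datagrid, cache-configuration=<remote_cache_config_name>, granularity=ATTRIBUTE)"   # store web session data in the remote Data Grid cluster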
|
[
"<?xml version=\"1.0\"?> <web-app xmlns=\"http://java.sun.com/xml/ns/j2ee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_3_0.xsd\" version=\"3.0\"> <distributable/> </web-app>",
"<jboss-web xmlns=\"http://www.jboss.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-web_10_0.xsd\"> <replication-config> <replication-granularity>SESSION</replication-granularity> </replication-config> </jboss-web>",
"public class MyObjectExternalizer implements org.wildfly.clustering.marshalling.Externalizer<MyObject> { @Override public Class<MyObject> getTargetClass() { return MyObject.class; } @Override public void writeObject(ObjectOutput output, MyObject object) throws IOException { // Write object state to stream } @Override public MyObject readObject(ObjectInput input) throws IOException, ClassNotFoundException { // Construct and read object state from stream return ...; } }",
"<jboss-web xmlns=\"http://www.jboss.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-web_10_0.xsd\"> <max-active-sessions>20</max-active-sessions> </jboss-web>",
"@Resource(lookup = \"java:jboss/clustering/group/channel-name\") private Group channelGroup;",
"@Resource(lookup = \"java:jboss/clustering/dispatcher/channel-name\") private CommandDispatcherFactory factory; public void foo() { String context = \"Hello world!\"; // Exclude node1 and node3 from the executeOnCluster try (CommandDispatcher<String> dispatcher = this.factory.createCommandDispatcher(context)) { dispatcher.executeOnGroup(new StdOutCommand(), node1, node3); } } public static class StdOutCommand implements Command<Void, String> { @Override public Void execute(String context) { System.out.println(context); return null; } }",
"public class MySingletonElectionListener implements SingletonElectionListener { @Override public void elected(List<Node> candidates, Node primary) { // } } public class MyServiceActivator implements ServiceActivator { @Override public void activate(ServiceActivatorContext context) { String containerName = \"foo\"; SingletonElectionPolicy policy = new MySingletonElectionPolicy(); SingletonElectionListener listener = new MySingletonElectionListener(); int quorum = 3; ServiceName name = ServiceName.parse(\"my.service.name\"); // Use a SingletonServiceConfiguratorFactory backed by default cache of \"foo\" container Supplier<SingletonServiceConfiguratorFactory> factory = new ActiveServiceSupplier<SingletonServiceConfiguratorFactory>(context.getServiceRegistry(), ServiceName.parse(SingletonDefaultCacheRequirement.SINGLETON_SERVICE_CONFIGURATOR_FACTORY.resolve(containerName))); ServiceBuilder<?> builder = factory.get().createSingletonServiceConfigurator(name) .electionListener(listener) .electionPolicy(policy) .requireQuorum(quorum) .build(context.getServiceTarget()); Service service = new MyService(); builder.setInstance(service).install(); } }",
"class SingletonService implements Service { private Logger LOG = Logger.getLogger(this.getClass()); private Node node; private Supplier<Group> groupSupplier; private Consumer<Node> nodeConsumer; SingletonService(Supplier<Group> groupSupplier, Consumer<Node> nodeConsumer) { this.groupSupplier = groupSupplier; this.nodeConsumer = nodeConsumer; } @Override public void start(StartContext context) { this.node = this.groupSupplier.get().getLocalMember(); this.nodeConsumer.accept(this.node); LOG.infof(\"Singleton service is started on node '%s'.\", this.node); } @Override public void stop(StopContext context) { LOG.infof(\"Singleton service is stopping on node '%s'.\", this.node); this.node = null; } }",
"class QueryingService implements Service { private Logger LOG = Logger.getLogger(this.getClass()); private ScheduledExecutorService executor; @Override public void start(StartContext context) throws { LOG.info(\"Querying service is starting.\"); executor = Executors.newSingleThreadScheduledExecutor(); executor.scheduleAtFixedRate(() -> { Supplier<Node> node = new PassiveServiceSupplier<>(context.getController().getServiceContainer(), SingletonServiceActivator.SINGLETON_SERVICE_NAME); if (node.get() != null) { LOG.infof(\"Singleton service is running on this (%s) node.\", node.get()); } else { LOG.infof(\"Singleton service is not running on this node.\"); } }, 5, 5, TimeUnit.SECONDS); } @Override public void stop(StopContext context) { LOG.info(\"Querying service is stopping.\"); executor.shutdown(); } }",
"public class SingletonServiceActivator implements ServiceActivator { private final Logger LOG = Logger.getLogger(SingletonServiceActivator.class); static final ServiceName SINGLETON_SERVICE_NAME = ServiceName.parse(\"org.jboss.as.quickstarts.ha.singleton.service\"); private static final ServiceName QUERYING_SERVICE_NAME = ServiceName.parse(\"org.jboss.as.quickstarts.ha.singleton.service.querying\"); @Override public void activate(ServiceActivatorContext serviceActivatorContext) { SingletonPolicy policy = new ActiveServiceSupplier<SingletonPolicy>( serviceActivatorContext.getServiceRegistry(), ServiceName.parse(SingletonDefaultRequirement.POLICY.getName())).get(); ServiceTarget target = serviceActivatorContext.getServiceTarget(); ServiceBuilder<?> builder = policy.createSingletonServiceConfigurator(SINGLETON_SERVICE_NAME).build(target); Consumer<Node> member = builder.provides(SINGLETON_SERVICE_NAME); Supplier<Group> group = builder.requires(ServiceName.parse(\"org.wildfly.clustering.default-group\")); builder.setInstance(new SingletonService(group, member)).install(); serviceActivatorContext.getServiceTarget() .addService(QUERYING_SERVICE_NAME, new QueryingService()) .setInitialMode(ServiceController.Mode.ACTIVE) .install(); serviceActivatorContext.getServiceTarget().addService(QUERYING_SERVICE_NAME).setInstance(new QueryingService()).install(); LOG.info(\"Singleton and querying services activated.\"); } }",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <singleton-deployment xmlns=\"urn:jboss:singleton-deployment:1.0\"/>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <singleton-deployment policy=\"my-new-policy\" xmlns=\"urn:jboss:singleton-deployment:1.0\"/>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <jboss xmlns=\"urn:jboss:1.0\"> <singleton-deployment xmlns=\"urn:jboss:singleton-deployment:1.0\"/> </jboss>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <jboss xmlns=\"urn:jboss:1.0\"> <singleton-deployment policy=\"my-new-policy\" xmlns=\"urn:jboss:singleton-deployment:1.0\"/> </jboss>",
"batch /subsystem=singleton/singleton-policy=my-new-policy:add(cache-container=server) /subsystem=singleton/singleton-policy=my-new-policy/election- policy=simple:add(position=-1) run-batch",
"/subsystem=singleton:write-attribute(name=default, value=my-new-policy)",
"<subsystem xmlns=\"urn:jboss:domain:singleton:1.0\"> <singleton-policies default=\"my-new-policy\"> <singleton-policy name=\"my-new-policy\" cache-container=\"server\"> <simple-election-policy position=\"-1\"/> </singleton-policy> </singleton-policies> </subsystem>",
"batch /subsystem=singleton/singleton-policy=my-other-new-policy:add(cache-container=server) /subsystem=singleton/singleton-policy=my-other-new-policy/election-policy=random:add() run-batch",
"<subsystem xmlns=\"urn:jboss:domain:singleton:1.0\"> <singleton-policies default=\"my-other-new-policy\"> <singleton-policy name=\"my-other-new-policy\" cache-container=\"server\"> <random-election-policy/> </singleton-policy> </singleton-policies> </subsystem>",
"/subsystem=singleton/singleton-policy=foo/election-policy=simple:list-add(name=name-preferences, value=nodeA) /subsystem=singleton/singleton-policy=bar/election-policy=random:list-add(name=socket-binding-preferences, value=binding1)",
"batch /subsystem=singleton/singleton-policy=my-new-policy:add(cache-container=server) /subsystem=singleton/singleton-policy=my-new-policy/election-policy=simple:add(name-preferences=[node1, node2, node3, node4]) run-batch",
"/subsystem=singleton:write-attribute(name=default, value=my-new-policy)",
"<subsystem xmlns=\"urn:jboss:domain:singleton:1.0\"> <singleton-policies default=\"my-other-new-policy\"> <singleton-policy name=\"my-other-new-policy\" cache-container=\"server\"> <random-election-policy> <socket-binding-preferences>binding1 binding2 binding3 binding4</socket-binding-preferences> </random-election-policy> </singleton-policy> </singleton-policies> </subsystem>",
"<subsystem xmlns=\"urn:jboss:domain:singleton:1.0\"> <singleton-policies default=\"default\"> <singleton-policy name=\"default\" cache-container=\"server\" quorum=\"4\"> <simple-election-policy/> </singleton-policy> </singleton-policies> </subsystem>",
"/subsystem=singleton/singleton-policy=foo:write-attribute(name=quorum, value=3)",
"/subsystem=singleton/singleton-policy=default/deployment=singleton.jar:read-attribute(name=primary-provider) { \"outcome\" => \"success\", \"result\" => \"node1\" }",
"/subsystem=singleton/singleton-policy=default/deployment=singleton.jar:read-attribute(name=providers) { \"outcome\" => \"success\", \"result\" => [ \"node1\", \"node2\" ] }",
"-load > 0 : A load factor with value 1 indicates that the worker node is overloaded. A load factor of 100 denotes a free and not-loaded node. -load = 0 : A load factor of value 0 indicates that the worker node is in standby mode. This means that no session requests will be routed to this node until and unless the other worker nodes are unavailable. -load = -1 : A load factor of value -1 indicates that the worker node is in an error state. -load = -2 : A load factor of value -2 indicates that the worker node is undergoing CPing/CPong and is in a transition state.",
"[standalone@embedded /] /subsystem=distributable-web:read-attribute(name=default-session-management) { \"outcome\" => \"success\", \"result\" => \"default\" }",
"[standalone@embedded /] /subsystem=distributable-web/infinispan-session-management=default:read-resource { \"outcome\" => \"success\", \"result\" => { \"cache\" => undefined, \"cache-container\" => \"web\", \"granularity\" => \"SESSION\", \"affinity\" => {\"primary-owner\" => undefined} } }",
"[standalone@embedded /]/subsystem=distributable-web/hotrod-session-management=ExampleRemoteSessionStore:add(remote-cache-container=datagrid, cache-configuration=__REMOTE_CACHE_CONFIG_NAME__, granularity=ATTRIBUTE) { \"outcome\" => \"success\" }"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/development_guide/clustering_in_web_applications
|
Chapter 2. Using the roxctl CLI
|
Chapter 2. Using the roxctl CLI 2.1. Prerequisites You have configured the ROX_ENDPOINT environment variable using the following command: USD export ROX_ENDPOINT= <host:port> 1 1 The host and port information that you want to store in the ROX_ENDPOINT environment variable. 2.2. Getting authentication information The following procedure describes how to use the roxctl central whoami command to retrieve information about your authentication status and user profile in Central. The example output illustrates the data you can expect to see, including user roles, access permissions, and various administrative functions. This step allows you to review your access and roles within Central. Procedure Run the following command to get information about your current authentication status and user information in Central: USD roxctl central whoami Example output UserID: <redacted> User name: <redacted> Roles: APIToken creator, Admin, Analyst, Continuous Integration, Network Graph Viewer, None, Sensor Creator, Vulnerability Management Approver, Vulnerability Management Requester, Vulnerability Manager, Vulnerability Report Creator Access: rw Access rw Administration rw Alert rw CVE rw Cluster rw Compliance rw Deployment rw DeploymentExtension rw Detection rw Image rw Integration rw K8sRole rw K8sRoleBinding rw K8sSubject rw Namespace rw NetworkGraph rw NetworkPolicy rw Node rw Secret rw ServiceAccount rw VulnerabilityManagementApprovals rw VulnerabilityManagementRequests rw WatchedImage rw WorkflowAdministration Review the output to ensure that the authentication and user details are as expected. 2.3. Authenticating by using the roxctl CLI For authentication, you can use an API token, your administrator password, or the roxctl central login command. Follow these guidelines for the effective use of API tokens: Use an API token in a production environment with continuous integration (CI). Each token is assigned specific access permissions, providing control over the actions it can perform. In addition, API tokens do not require interactive processes, such as browser-based logins, making them ideal for automated processes. These tokens have a time-to-live (TTL) value of 1 year, providing a longer validity period for seamless integration and operational efficiency. Use your administrator password only for testing purposes. Do not use it in the production environment. Use the roxctl central login command only for interactive, local uses. 2.3.1. Creating an API token Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll to the Authentication Tokens category, and then click API Token . Click Generate Token . Enter a name for the token and select a role that provides the required level of access (for example, Continuous Integration or Sensor Creator ). Click Generate . Important Copy the generated token and securely store it. You will not be able to view it again. 2.3.2. Exporting and saving the API token Procedure After you have generated the authentication token, export it as the ROX_API_TOKEN variable by entering the following command: USD export ROX_API_TOKEN=<api_token> (Optional): You can also save the token in a file and use it with the --token-file option by entering the following command: USD roxctl central debug dump --token-file <token_file> Note the following guidelines: You cannot use both the -password ( -p ) and the --token-file options simultaneously. 
If you have already set the ROX_API_TOKEN variable, and specify the --token-file option, the roxctl CLI uses the specified token file for authentication. If you have already set the ROX_API_TOKEN variable, and specify the --password option, the roxctl CLI uses the specified password for authentication. 2.3.3. Using an authentication provider to authenticate with roxctl You can configure an authentication provider in Central and initiate the login process with the roxctl CLI. Set the ROX_ENDPOINT variable, initiate the login process with the roxctl central login command, select the authentication provider in a browser window, and retrieve the token information from the roxctl CLI as described in the following procedure. Prerequisite You selected an authentication provider of your choice, such as OpenID Connect (OIDC) with fragment or query mode. Procedure Run the following command to set the ROX_ENDPOINT variable to Central hostname and port: export ROX_ENDPOINT= <central_hostname:port> Run the following command to initiate the login process to Central: USD roxctl central login Within the roxctl CLI, a URL is printed as output and you are redirected to a browser window where you can select the authentication provider you want to use. Log in with your authentication provider. After you have successfully logged in, the browser window indicates that authentication was successful and you can close the browser window. The roxctl CLI displays your token information including details such as the access token, the expiration time of the access token, the refresh token if one has been issued, and notification that these values are stored locally. Example output Please complete the authorization flow in the browser with an auth provider of your choice. If no browser window opens, please click on the following URL: http://127.0.0.1:xxxxx/login INFO: Received the following after the authorization flow from Central: INFO: Access token: <redacted> 1 INFO: Access token expiration: 2023-04-19 13:58:43 +0000 UTC 2 INFO: Refresh token: <redacted> 3 INFO: Storing these values under USDHOME/.roxctl/login... 4 1 The access token. 2 The expiration time of the access token. 3 The refresh token. 4 The directory where values of the access token, the access token expiration time, and the refresh token are stored locally. Important Ensure that you set the environment to determine the directory where the configuration is stored. By default, the configuration is stored in the USDHOME/.roxctl/roxctl-config directory. If you set the USDROX_CONFIG_DIR environment variable, the configuration is stored in the USDROX_CONFIG_DIR/roxctl-config directory. This option has the highest priority. If you set the USDXDG_RUNTIME_DIR environment variable and the USDROX_CONFIG_DIR variable is not set, the configuration is stored in the USDXDG_RUNTIME_DIR /roxctl-config directory. If you do not set the USDROX_CONFIG_DIR or USDXDG_RUNTIME_DIR environment variable, the configuration is stored in the USDHOME/.roxctl/roxctl-config directory. 2.4. Configuring and using the roxctl CLI in RHACS Cloud Service Procedure Export the ROX_API_TOKEN by running the following command: USD export ROX_API_TOKEN=<api_token> Export the ROX_ENDPOINT by running the following command: USD export ROX_ENDPOINT=<address>:<port_number> You can use the --help option to get more information about the commands. 
In Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service), when using roxctl commands that require the Central address, use the Central instance address as displayed in the Instance Details section of the Red Hat Hybrid Cloud Console. For example, use acs-ABCD12345.acs.rhcloud.com instead of acs-data-ABCD12345.acs.rhcloud.com .
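The following minimal shell sketch shows how the pieces above typically fit together in an automated CI job; the endpoint value, the secret path, and the use of whoami as a smoke test are illustrative assumptions rather than values from this documentation.
# Illustrative CI snippet: point roxctl at Central and verify the API token before running other commands.
export ROX_ENDPOINT=acs-ABCD12345.acs.rhcloud.com:443     # hypothetical Central instance address and port
export ROX_API_TOKEN="$(cat /run/secrets/rox-api-token)"  # token injected by the CI system (assumed path)
roxctl central whoami                                     # fails fast if the token or endpoint is wrong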
|
[
"export ROX_ENDPOINT= <host:port> 1",
"roxctl central whoami",
"UserID: <redacted> User name: <redacted> Roles: APIToken creator, Admin, Analyst, Continuous Integration, Network Graph Viewer, None, Sensor Creator, Vulnerability Management Approver, Vulnerability Management Requester, Vulnerability Manager, Vulnerability Report Creator Access: rw Access rw Administration rw Alert rw CVE rw Cluster rw Compliance rw Deployment rw DeploymentExtension rw Detection rw Image rw Integration rw K8sRole rw K8sRoleBinding rw K8sSubject rw Namespace rw NetworkGraph rw NetworkPolicy rw Node rw Secret rw ServiceAccount rw VulnerabilityManagementApprovals rw VulnerabilityManagementRequests rw WatchedImage rw WorkflowAdministration",
"export ROX_API_TOKEN=<api_token>",
"roxctl central debug dump --token-file <token_file>",
"export ROX_ENDPOINT= <central_hostname:port>",
"roxctl central login",
"Please complete the authorization flow in the browser with an auth provider of your choice. If no browser window opens, please click on the following URL: http://127.0.0.1:xxxxx/login INFO: Received the following after the authorization flow from Central: INFO: Access token: <redacted> 1 INFO: Access token expiration: 2023-04-19 13:58:43 +0000 UTC 2 INFO: Refresh token: <redacted> 3 INFO: Storing these values under USDHOME/.roxctl/login... 4",
"export ROX_API_TOKEN=<api_token>",
"export ROX_ENDPOINT=<address>:<port_number>"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/roxctl_cli/using-the-roxctl-cli-1
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Click Submit Bug .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/providing-feedback-on-red-hat-documentation_common
|
Chapter 119. KafkaMirrorMakerSpec schema reference
|
Chapter 119. KafkaMirrorMakerSpec schema reference Used in: KafkaMirrorMaker Full list of KafkaMirrorMakerSpec schema properties Configures Kafka MirrorMaker. 119.1. include Use the include property to configure a list of topics that Kafka MirrorMaker mirrors from the source to the target Kafka cluster. The property allows any regular expression from the simplest case with a single topic name to complex patterns. For example, you can mirror topics A and B using A|B or all topics using * . You can also pass multiple regular expressions separated by commas to the Kafka MirrorMaker. 119.2. KafkaMirrorMakerConsumerSpec and KafkaMirrorMakerProducerSpec Use the KafkaMirrorMakerConsumerSpec and KafkaMirrorMakerProducerSpec to configure source (consumer) and target (producer) clusters. Kafka MirrorMaker always works together with two Kafka clusters (source and target). To establish a connection, the bootstrap servers for the source and the target Kafka clusters are specified as comma-separated lists of HOSTNAME:PORT pairs. Each comma-separated list contains one or more Kafka brokers or a Service pointing to Kafka brokers specified as a HOSTNAME:PORT pair. 119.3. logging Kafka MirrorMaker has its own configurable logger: mirrormaker.root.logger MirrorMaker uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # ... logging: type: inline loggers: mirrormaker.root.logger: INFO log4j.logger.org.apache.kafka.clients.NetworkClient: TRACE log4j.logger.org.apache.kafka.common.network.Selector: DEBUG # ... Note Setting a log level to DEBUG may result in a large amount of log output and may have performance implications. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: mirror-maker-log4j.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 119.4. KafkaMirrorMakerSpec schema properties Property Property type Description version string The Kafka MirrorMaker version. Defaults to the latest version. Consult the documentation to understand the process required to upgrade or downgrade the version. replicas integer The number of pods in the Deployment . image string The container image used for Kafka MirrorMaker pods. 
If no image name is explicitly specified, it is determined based on the spec.version configuration. The image names are specifically mapped to corresponding versions in the Cluster Operator configuration. consumer KafkaMirrorMakerConsumerSpec Configuration of source cluster. producer KafkaMirrorMakerProducerSpec Configuration of target cluster. resources ResourceRequirements CPU and memory resources to reserve. whitelist string The whitelist property has been deprecated, and should now be configured using spec.include . List of topics which are included for mirroring. This option allows any regular expression using Java-style regular expressions. Mirroring two topics named A and B is achieved by using the expression A|B . Or, as a special case, you can mirror all topics using the regular expression * . You can also specify multiple regular expressions separated by commas. include string List of topics which are included for mirroring. This option allows any regular expression using Java-style regular expressions. Mirroring two topics named A and B is achieved by using the expression A|B . Or, as a special case, you can mirror all topics using the regular expression * . You can also specify multiple regular expressions separated by commas. jvmOptions JvmOptions JVM Options for pods. logging InlineLogging , ExternalLogging Logging configuration for MirrorMaker. metricsConfig JmxPrometheusExporterMetrics Metrics configuration. tracing JaegerTracing , OpenTelemetryTracing The configuration of tracing in Kafka MirrorMaker. template KafkaMirrorMakerTemplate Template to specify how Kafka MirrorMaker resources, Deployments and Pods , are generated. livenessProbe Probe Pod liveness checking. readinessProbe Probe Pod readiness checking.
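As a minimal sketch of how the include property and the consumer and producer specifications described above fit together (the resource name, consumer group ID, and bootstrap addresses are assumptions, not values taken from this reference):
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker                                    # hypothetical name
spec:
  replicas: 1
  include: "topic-a|topic-b"                               # mirror only these two topics
  consumer:
    bootstrapServers: source-cluster-kafka-bootstrap:9092  # assumed source cluster bootstrap address
    groupId: my-mirror-maker-group                         # assumed consumer group ID
  producer:
    bootstrapServers: target-cluster-kafka-bootstrap:9092  # assumed target cluster bootstrap address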
|
[
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # logging: type: inline loggers: mirrormaker.root.logger: INFO log4j.logger.org.apache.kafka.clients.NetworkClient: TRACE log4j.logger.org.apache.kafka.common.network.Selector: DEBUG #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: mirror-maker-log4j.properties #"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaMirrorMakerSpec-reference
|
3.2. Red Hat High Availability Add-On Resource Classes
|
3.2. Red Hat High Availability Add-On Resource Classes There are several classes of resource agents supported by Red Hat High Availability Add-On: LSB - The Linux Standards Base agent abstracts the compliant services supported by the LSB, namely those services in /etc/init.d and the associated return codes for successful and failed service states (started, stopped, running status). OCF - The Open Cluster Framework is a superset of the LSB (Linux Standards Base) that sets standards for the creation and execution of server initialization scripts, input parameters for the scripts using environment variables, and more. systemd - The newest system services manager for Linux-based systems, systemd uses sets of unit files rather than initialization scripts as LSB and OCF do. These units can be manually created by administrators or can even be created and managed by services themselves. Pacemaker manages these units in a similar way to how it manages OCF or LSB init scripts. Upstart - Much like systemd, Upstart is an alternative system initialization manager for Linux. Upstart uses jobs, as opposed to units in systemd or init scripts. STONITH - A resource agent used exclusively for fencing services and fence agents using STONITH. Nagios - Agents that abstract plug-ins for the Nagios system and infrastructure monitoring tool.
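For a concrete flavor of how these classes appear in practice, the following is a minimal, illustrative command that creates an OCF-class resource with the pcs cluster administration tool; the agent, resource name, and IP address are assumptions for the sketch rather than values taken from this overview.
# Create a floating IP address as an OCF-class resource (ocf:heartbeat:IPaddr2); all values are placeholders.
pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s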
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_overview/s1-resourceoptions-haao
|
Chapter 17. DNS Servers
|
Chapter 17. DNS Servers DNS (Domain Name System), also known as a nameserver , is a network system that associates host names with their respective IP addresses. For users, this has the advantage that they can refer to machines on the network by names that are usually easier to remember than the numerical network addresses. For system administrators, using the nameserver allows them to change the IP address for a host without ever affecting the name-based queries, or to decide which machines handle these queries. 17.1. Introduction to DNS DNS is usually implemented using one or more centralized servers that are authoritative for certain domains. When a client host requests information from a nameserver, it usually connects to port 53. The nameserver then attempts to resolve the name requested. If it does not have an authoritative answer, or does not already have the answer cached from an earlier query, it queries other nameservers, called root nameservers , to determine which nameservers are authoritative for the name in question, and then queries them to get the requested name. 17.1.1. Nameserver Zones In a DNS server such as BIND (Berkeley Internet Name Domain), all information is stored in basic data elements called resource records (RR). The resource record is usually a fully qualified domain name (FQDN) of a host, and is broken down into multiple sections organized into a tree-like hierarchy. This hierarchy consists of a main trunk, primary branches, secondary branches, and so on. Example 17.1. A simple resource record Each level of the hierarchy is divided by a period (that is, . ). In Example 17.1, "A simple resource record" , com defines the top-level domain , example its subdomain, and sales the subdomain of example . In this case, bob identifies a resource record that is part of the sales.example.com domain. With the exception of the part furthest to the left (that is, bob ), each of these sections is called a zone and defines a specific namespace . Zones are defined on authoritative nameservers through the use of zone files , which contain definitions of the resource records in each zone. Zone files are stored on primary nameservers (also called master nameservers ), where changes are made to the files, and secondary nameservers (also called slave nameservers ), which receive zone definitions from the primary nameservers. Both primary and secondary nameservers are authoritative for the zone and look the same to clients. Depending on the configuration, any nameserver can also serve as a primary or secondary server for multiple zones at the same time. 17.1.2. Nameserver Types There are two nameserver configuration types: authoritative Authoritative nameservers answer to resource records that are part of their zones only. This category includes both primary (master) and secondary (slave) nameservers. recursive Recursive nameservers offer resolution services, but they are not authoritative for any zone. Answers for all resolutions are cached in a memory for a fixed period of time, which is specified by the retrieved resource record. Although a nameserver can be both authoritative and recursive at the same time, it is recommended not to combine the configuration types. To be able to perform their work, authoritative servers should be available to all clients all the time. 
On the other hand, because a recursive lookup takes far more time than an authoritative response, recursive servers should be available only to a restricted number of clients; otherwise, they are prone to distributed denial of service (DDoS) attacks. 17.1.3. BIND as a Nameserver BIND consists of a set of DNS-related programs. It contains a nameserver called named , an administration utility called rndc , and a debugging tool called dig . See Chapter 12, Services and Daemons for more information on how to run a service in Red Hat Enterprise Linux.
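To make the zone file concept from Section 17.1.1 concrete, a minimal sketch of a zone file for the sales.example.com zone might look like the following; the nameserver name, serial number, and IP address are illustrative assumptions only.
$ORIGIN sales.example.com.
$TTL 86400
@    IN SOA ns1.example.com. hostmaster.example.com. (
         2024010100 ; serial
         3600       ; refresh
         900        ; retry
         604800     ; expire
         86400 )    ; minimum TTL
     IN NS  ns1.example.com.     ; authoritative nameserver for the zone (assumed)
bob  IN A   192.0.2.10           ; resolves bob.sales.example.com to a placeholder address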
|
[
"bob.sales.example.com"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-dns_servers
|
probe::netdev.hard_transmit
|
probe::netdev.hard_transmit Name probe::netdev.hard_transmit - Called when the device is going to TX (hard) Synopsis netdev.hard_transmit Values truesize The size of the data to be transmitted. dev_name The device scheduled to transmit. protocol The protocol used in the transmission. length The length of the transmit buffer.
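A minimal SystemTap sketch that uses this probe and prints the values listed above; the output format is an assumption for illustration.
#!/usr/bin/stap
# Print one line per hard transmit, using the probe's dev_name, length, truesize, and protocol values.
probe netdev.hard_transmit {
  printf("%s: len=%d truesize=%d proto=0x%04x\n", dev_name, length, truesize, protocol)
}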
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-netdev-hard-transmit
|
Installing on OpenStack
|
Installing on OpenStack OpenShift Container Platform 4.18 Installing OpenShift Container Platform on OpenStack Red Hat OpenShift Documentation Team
|
[
"#!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog=\"USD(mktemp)\" san=\"USD(mktemp)\" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints | jq -r '.[] | .Name as USDname | .Endpoints[] | select(.interface==\"public\") | [USDname, .interface, .url] | join(\" \")' | sort > \"USDcatalog\" while read -r name interface url; do # Ignore HTTP if [[ USD{url#\"http://\"} != \"USDurl\" ]]; then continue fi # Remove the schema from the URL noschema=USD{url#\"https://\"} # If the schema was not HTTPS, error if [[ \"USDnoschema\" == \"USDurl\" ]]; then echo \"ERROR (unknown schema): USDname USDinterface USDurl\" exit 2 fi # Remove the path and only keep host and port noschema=\"USD{noschema%%/*}\" host=\"USD{noschema%%:*}\" port=\"USD{noschema##*:}\" # Add the port if was implicit if [[ \"USDport\" == \"USDhost\" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName > \"USDsan\" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ \"USD(grep -c \"Subject Alternative Name\" \"USDsan\" || true)\" -gt 0 ]]; then echo \"PASS: USDname USDinterface USDurl\" else invalid=USD((invalid+1)) echo \"INVALID: USDname USDinterface USDurl\" fi done < \"USDcatalog\" clean up temporary files rm \"USDcatalog\" \"USDsan\" if [[ USDinvalid -gt 0 ]]; then echo \"USD{invalid} legacy certificates were detected. Update your certificates to include a SAN field.\" exit 1 else echo \"All HTTPS certificates for this cloud are valid.\" fi",
"x509: certificate relies on legacy Common Name field, use SANs instead",
"openstack catalog list",
"host=<host_name>",
"port=<port_number>",
"openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName",
"X509v3 Subject Alternative Name: DNS:your.host.example.net",
"x509: certificate relies on legacy Common Name field, use SANs instead",
"openstack network create radio --provider-physical-network radio --provider-network-type flat --external",
"openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external",
"openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio",
"openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"openstack role add --user <user> --project <project> swiftoperator",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>",
"oc apply -f <storage_class_file_name>",
"storageclass.storage.k8s.io/custom-csi-storageclass created",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3",
"oc apply -f <pvc_file_name>",
"persistentvolumeclaim/csi-pvc-imageregistry created",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'",
"config.imageregistry.operator.openshift.io/cluster patched",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"status: managementState: Managed pvc: claim: csi-pvc-imageregistry",
"oc get pvc -n openshift-image-registry csi-pvc-imageregistry",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3",
"./openshift-install wait-for install-complete --log-level debug",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: \"192.168.25.0/24\" - cidr: \"fd2e:6f44:5dd8:c956::/64\" clusterNetwork: 2 - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: 3 - 172.30.0.0/16 - fd02::/112 platform: openstack: ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad'] 4 apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v4 id: subnet-v4-id - subnet: 9 name: subnet-v6 id: subnet-v6-id network: 10 name: dualstack id: network-id",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: \"fd2e:6f44:5dd8:c956::/64\" - cidr: \"192.168.25.0/24\" clusterNetwork: 2 - cidr: fd01::/48 hostPrefix: 64 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 3 - fd02::/112 - 172.30.0.0/16 platform: openstack: ingressVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad', '192.168.25.79'] 4 apiVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36', '192.168.25.199'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v6 id: subnet-v6-id - subnet: 9 name: subnet-v4 id: subnet-v4-id network: 10 name: dualstack id: network-id",
"[connection] type=ethernet [ipv6] addr-gen-mode=eui64 method=auto",
"[connection] ipv6.addr-gen-mode=0",
"openstack port create api --network <v6_machine_network>",
"openstack port create ingress --network <v6_machine_network>",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: - cidr: \"fd2e:6f44:5dd8:c956::/64\" 1 clusterNetwork: - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: - fd02::/112 platform: openstack: ingressVIPs: ['fd2e:6f44:5dd8:c956::383'] 2 apiVIPs: ['fd2e:6f44:5dd8:c956::9a'] 3 controlPlanePort: fixedIPs: 4 - subnet: name: subnet-v6 network: 5 name: v6-network imageContentSources: 6 - mirrors: - <mirror> source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - <mirror> source: registry.ci.openshift.org/ocp/release additionalTrustBundle: | <certificate_of_the_mirror>",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/down-containers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.18/upi/openstack/update-network-resources.yaml'",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"export OS_NET_ID=\"openshift-USD(dd if=/dev/urandom count=4 bs=1 2>/dev/null |hexdump -e '\"%02x\"')\"",
"echo USDOS_NET_ID",
"echo \"{\\\"os_net_id\\\": \\\"USDOS_NET_ID\\\"}\" | tee netid.json",
"ansible-playbook -i inventory.yaml network.yaml",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"python -c 'import os import sys import yaml import re re_os_net_id = re.compile(r\"{{\\s*os_net_id\\s*}}\") os_net_id = os.getenv(\"OS_NET_ID\") path = \"common.yaml\" facts = None for _dict in yaml.safe_load(open(path))[0][\"tasks\"]: if \"os_network\" in _dict.get(\"set_fact\", {}): facts = _dict[\"set_fact\"] break if not facts: print(\"Cannot find `os_network` in common.yaml file. Make sure OpenStack resource names are defined in one of the tasks.\") sys.exit(1) os_network = re_os_net_id.sub(os_net_id, facts[\"os_network\"]) os_subnet = re_os_net_id.sub(os_net_id, facts[\"os_subnet\"]) path = \"install-config.yaml\" data = yaml.safe_load(open(path)) inventory = yaml.safe_load(open(\"inventory.yaml\"))[\"all\"][\"hosts\"][\"localhost\"] machine_net = [{\"cidr\": inventory[\"os_subnet_range\"]}] api_vips = [inventory[\"os_apiVIP\"]] ingress_vips = [inventory[\"os_ingressVIP\"]] ctrl_plane_port = {\"network\": {\"name\": os_network}, \"fixedIPs\": [{\"subnet\": {\"name\": os_subnet}}]} if inventory.get(\"os_subnet6_range\"): 1 os_subnet6 = re_os_net_id.sub(os_net_id, facts[\"os_subnet6\"]) machine_net.append({\"cidr\": inventory[\"os_subnet6_range\"]}) api_vips.append(inventory[\"os_apiVIP6\"]) ingress_vips.append(inventory[\"os_ingressVIP6\"]) data[\"networking\"][\"networkType\"] = \"OVNKubernetes\" data[\"networking\"][\"clusterNetwork\"].append({\"cidr\": inventory[\"cluster_network6_cidr\"], \"hostPrefix\": inventory[\"cluster_network6_prefix\"]}) data[\"networking\"][\"serviceNetwork\"].append(inventory[\"service_subnet6_range\"]) ctrl_plane_port[\"fixedIPs\"].append({\"subnet\": {\"name\": os_subnet6}}) data[\"networking\"][\"machineNetwork\"] = machine_net data[\"platform\"][\"openstack\"][\"apiVIPs\"] = api_vips data[\"platform\"][\"openstack\"][\"ingressVIPs\"] = ingress_vips data[\"platform\"][\"openstack\"][\"controlPlanePort\"] = ctrl_plane_port del data[\"platform\"][\"openstack\"][\"externalDNS\"] open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml update-network-resources.yaml 1",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'",
"./openshift-install wait-for install-complete --log-level debug",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"openshift-install --log-level debug wait-for install-complete",
"openstack role add --user <user> --project <project> swiftoperator",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"file <name_of_downloaded_file>",
"openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-USD{RHCOS_VERSION}",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"openstack port show <cluster_name>-<cluster_ID>-ingress-port",
"openstack floating ip set --port <ingress_port_ID> <apps_FIP>",
"*.apps.<cluster_name>.<base_domain> IN A <apps_FIP>",
"<apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain>",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: \"hwoffload9\" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens6 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: \"hwoffload9\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: \"hwoffload10\" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens5 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: \"hwoffload10\"",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 name: hwoffload9 namespace: default spec: config: '{ \"cniVersion\":\"0.3.1\", \"name\":\"hwoffload9\",\"type\":\"host-device\",\"device\":\"ens6\" }'",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 name: hwoffload10 namespace: default spec: config: '{ \"cniVersion\":\"0.3.1\", \"name\":\"hwoffload10\",\"type\":\"host-device\",\"device\":\"ens5\" }'",
"apiVersion: v1 kind: Pod metadata: name: dpdk-testpmd namespace: default annotations: irq-load-balancing.crio.io: disable cpu-quota.crio.io: disable k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 spec: restartPolicy: Never containers: - name: dpdk-testpmd image: quay.io/krister/centos8_nfv-container-dpdk-testpmd:latest",
"spec: additionalNetworks: - name: hwoffload1 namespace: cnf rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"hwoffload1\", \"type\": \"host-device\",\"pciBusId\": \"0000:00:05.0\", \"ipam\": {}}' 1 type: Raw",
"oc describe SriovNetworkNodeState -n openshift-sriov-network-operator",
"oc apply -f network.yaml",
"openstack port set --no-security-group --disable-port-security <compute_ipv6_port> 1",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift namespace: ipv6 spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - hello-openshift replicas: 2 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift annotations: k8s.v1.cni.cncf.io/networks: ipv6 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: hello-openshift securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL image: quay.io/openshift/origin-hello-openshift ports: - containerPort: 8080",
"oc create -f <ipv6_enabled_resource> 1",
"oc edit networks.operator.openshift.io cluster",
"spec: additionalNetworks: - name: ipv6 namespace: ipv6 1 rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"ipv6\", \"type\": \"macvlan\", \"master\": \"ens4\"}' 2 type: Raw",
"oc get network-attachment-definitions -A",
"NAMESPACE NAME AGE ipv6 ipv6 21h",
"[Global] use-clouds = true clouds-file = /etc/openstack/secret/clouds.yaml cloud = openstack [LoadBalancer] enabled = true",
"apiVersion: v1 data: cloud.conf: | [Global] 1 secret-name = openstack-credentials secret-namespace = kube-system region = regionOne [LoadBalancer] enabled = True kind: ConfigMap metadata: creationTimestamp: \"2022-12-20T17:01:08Z\" name: cloud-conf namespace: openshift-cloud-controller-manager resourceVersion: \"2519\" uid: cbbeedaf-41ed-41c2-9f37-4885732d3677",
"openstack flavor create --<ram 16384> --<disk 0> --ephemeral 10 --vcpus 4 <flavor_name>",
"controlPlane: name: master platform: openstack: type: USD{CONTROL_PLANE_FLAVOR} rootVolume: size: 25 types: - USD{CINDER_TYPE} replicas: 3",
"openshift-install create cluster --dir <installation_directory> 1",
"oc wait clusteroperators --all --for=condition=Progressing=false 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 systemd: units: - contents: | [Unit] Description=Mount local-etcd to /var/lib/etcd [Mount] What=/dev/disk/by-label/local-etcd 1 Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount - contents: | [Unit] Description=Create local-etcd filesystem DefaultDependencies=no After=local-fs-pre.target ConditionPathIsSymbolicLink=!/dev/disk/by-label/local-etcd 2 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c \"[ -L /dev/disk/by-label/ephemeral0 ] || ( >&2 echo Ephemeral disk does not exist; /usr/bin/false )\" ExecStart=/usr/sbin/mkfs.xfs -f -L local-etcd /dev/disk/by-label/ephemeral0 3 [Install] RequiredBy=dev-disk-by\\x2dlabel-local\\x2detcd.device enabled: true name: create-local-etcd.service - contents: | [Unit] Description=Migrate existing data to local etcd After=var-lib-etcd.mount Before=crio.service 4 Requisite=var-lib-etcd.mount ConditionPathExists=!/var/lib/etcd/member ConditionPathIsDirectory=/sysroot/ostree/deploy/rhcos/var/lib/etcd/member 5 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c \"if [ -d /var/lib/etcd/member.migrate ]; then rm -rf /var/lib/etcd/member.migrate; fi\" 6 ExecStart=/usr/bin/cp -aZ /sysroot/ostree/deploy/rhcos/var/lib/etcd/member/ /var/lib/etcd/member.migrate ExecStart=/usr/bin/mv /var/lib/etcd/member.migrate /var/lib/etcd/member 7 [Install] RequiredBy=var-lib-etcd.mount enabled: true name: migrate-to-local-etcd.service - contents: | [Unit] Description=Relabel /var/lib/etcd After=migrate-to-local-etcd.service Before=crio.service Requisite=var-lib-etcd.mount [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/bin/bash -c \"[ -n \\\"USD(restorecon -nv /var/lib/etcd)\\\" ]\" 8 ExecStart=/usr/sbin/restorecon -R /var/lib/etcd [Install] RequiredBy=var-lib-etcd.mount enabled: true name: relabel-var-lib-etcd.service",
"oc create -f 98-var-lib-etcd.yaml",
"oc wait --timeout=45m --for=condition=Updating=false machineconfigpool/master",
"oc wait node --selector='node-role.kubernetes.io/master' --for condition=Ready --timeout=30s",
"oc wait clusteroperators --timeout=30m --all --for=condition=Progressing=false",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk",
"sudo alternatives --set python /usr/bin/python3",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml down-control-plane.yaml down-compute-nodes.yaml down-load-balancers.yaml down-network.yaml down-security-groups.yaml",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"networking: ovnKubernetesConfig: ipv4: internalJoinSubnet:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"platform: aws: lbType:",
"publish:",
"sshKey:",
"compute: platform: aws: amiID:",
"compute: platform: aws: iamProfile:",
"compute: platform: aws: iamRole:",
"compute: platform: aws: rootVolume: iops:",
"compute: platform: aws: rootVolume: size:",
"compute: platform: aws: rootVolume: type:",
"compute: platform: aws: rootVolume: kmsKeyARN:",
"compute: platform: aws: type:",
"compute: platform: aws: zones:",
"compute: aws: region:",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"controlPlane: platform: aws: amiID:",
"controlPlane: platform: aws: iamProfile:",
"controlPlane: platform: aws: iamRole:",
"controlPlane: platform: aws: rootVolume: iops:",
"controlPlane: platform: aws: rootVolume: size:",
"controlPlane: platform: aws: rootVolume: type:",
"controlPlane: platform: aws: rootVolume: kmsKeyARN:",
"controlPlane: platform: aws: type:",
"controlPlane: platform: aws: zones:",
"controlPlane: aws: region:",
"platform: aws: amiID:",
"platform: aws: hostedZone:",
"platform: aws: hostedZoneRole:",
"platform: aws: serviceEndpoints: - name: url:",
"platform: aws: userTags:",
"platform: aws: propagateUserTags:",
"platform: aws: subnets:",
"platform: aws: publicIpv4Pool:",
"platform: aws: preserveBootstrapIgnition:",
"compute: platform: openstack: rootVolume: size:",
"compute: platform: openstack: rootVolume: types:",
"compute: platform: openstack: rootVolume: type:",
"compute: platform: openstack: rootVolume: zones:",
"controlPlane: platform: openstack: rootVolume: size:",
"controlPlane: platform: openstack: rootVolume: types:",
"controlPlane: platform: openstack: rootVolume: type:",
"controlPlane: platform: openstack: rootVolume: zones:",
"platform: openstack: cloud:",
"platform: openstack: externalNetwork:",
"platform: openstack: computeFlavor:",
"compute: platform: openstack: additionalNetworkIDs:",
"compute: platform: openstack: additionalSecurityGroupIDs:",
"compute: platform: openstack: zones:",
"compute: platform: openstack: serverGroupPolicy:",
"controlPlane: platform: openstack: additionalNetworkIDs:",
"controlPlane: platform: openstack: additionalSecurityGroupIDs:",
"controlPlane: platform: openstack: zones:",
"controlPlane: platform: openstack: serverGroupPolicy:",
"platform: openstack: clusterOSImage:",
"platform: openstack: clusterOSImageProperties:",
"platform: openstack: controlPlanePort: fixedIPs:",
"platform: openstack: controlPlanePort: network:",
"platform: openstack: defaultMachinePlatform:",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"platform: openstack: ingressFloatingIP:",
"platform: openstack: apiFloatingIP:",
"platform: openstack: externalDNS:",
"platform: openstack: loadbalancer:",
"platform: openstack: machinesSubnet:",
"controlPlane: platform: gcp: osImage: project:",
"controlPlane: platform: gcp: osImage: name:",
"compute: platform: gcp: osImage: project:",
"compute: platform: gcp: osImage: name:",
"compute: platform: gcp: serviceAccount:",
"platform: gcp: network:",
"platform: gcp: networkProjectID:",
"platform: gcp: projectID:",
"platform: gcp: region:",
"platform: gcp: controlPlaneSubnet:",
"platform: gcp: computeSubnet:",
"platform: gcp: defaultMachinePlatform: zones:",
"platform: gcp: defaultMachinePlatform: osDisk: diskSizeGB:",
"platform: gcp: defaultMachinePlatform: osDisk: diskType:",
"platform: gcp: defaultMachinePlatform: osImage: project:",
"platform: gcp: defaultMachinePlatform: osImage: name:",
"platform: gcp: defaultMachinePlatform: tags:",
"platform: gcp: defaultMachinePlatform: type:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: name:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: keyRing:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: location:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: projectID:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKeyServiceAccount:",
"platform: gcp: defaultMachinePlatform: secureBoot:",
"platform: gcp: defaultMachinePlatform: confidentialCompute:",
"platform: gcp: defaultMachinePlatform: onHostMaintenance:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: name:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: keyRing:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: location:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:",
"controlPlane: platform: gcp: osDisk: diskSizeGB:",
"controlPlane: platform: gcp: osDisk: diskType:",
"controlPlane: platform: gcp: tags:",
"controlPlane: platform: gcp: type:",
"controlPlane: platform: gcp: zones:",
"controlPlane: platform: gcp: secureBoot:",
"controlPlane: platform: gcp: confidentialCompute:",
"controlPlane: platform: gcp: onHostMaintenance:",
"controlPlane: platform: gcp: serviceAccount:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: name:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: keyRing:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: location:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:",
"compute: platform: gcp: osDisk: diskSizeGB:",
"compute: platform: gcp: osDisk: diskType:",
"compute: platform: gcp: tags:",
"compute: platform: gcp: type:",
"compute: platform: gcp: zones:",
"compute: platform: gcp: secureBoot:",
"compute: platform: gcp: confidentialCompute:",
"compute: platform: gcp: onHostMaintenance:"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/installing_on_openstack/index
|
probe::signal.pending.return
|
probe::signal.pending.return Name probe::signal.pending.return - Examination of pending signal completed Synopsis signal.pending.return Values name Name of the probe point retstr Return value as a string
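A quick way to watch this probe fire, shown as a minimal sketch rather than part of the tapset reference, assuming the systemtap package and matching kernel debuginfo are installed:

stap -e 'probe signal.pending.return { printf("%s -> %s\n", name, retstr) }'

This prints the probe point name and the string form of the return value each time an examination of pending signals completes; stop it with Ctrl-C.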
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-signal-pending-return
|
Chapter 2. Differences from upstream OpenJDK 17
|
Chapter 2. Differences from upstream OpenJDK 17 Red Hat build of OpenJDK in Red Hat Enterprise Linux contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow Red Hat Enterprise Linux updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 17 changes: FIPS support. Red Hat build of OpenJDK 17 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 17 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 17 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all of the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources See, Improve system FIPS detection (RHEL Planning Jira) See, Using system-wide cryptographic policies (RHEL documentation)
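The FIPS and cryptographic policy behavior described above is driven by the RHEL host configuration rather than by JDK flags. As a brief, hedged illustration, an administrator could inspect or change that host-side configuration with standard RHEL utilities such as the following (these are RHEL commands, not part of Red Hat build of OpenJDK):

fips-mode-setup --check
update-crypto-policies --show
sudo update-crypto-policies --set DEFAULT

On RHEL, Red Hat build of OpenJDK 17 picks up the resulting policy automatically at startup.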
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.6/rn-openjdk-diff-from-upstream
|
Chapter 8. Performing a canary rollout update
|
Chapter 8. Performing a canary rollout update There might be some scenarios where you want a more controlled rollout of an update to the worker nodes in order to ensure that mission-critical applications stay available during the whole update, even if the update process causes your applications to fail. Depending on your organizational needs, you might want to update a small subset of worker nodes, evaluate cluster and workload health over a period of time, then update the remaining nodes. This is commonly referred to as a canary update. Or, you might also want to fit worker node updates, which often require a host reboot, into smaller defined maintenance windows when it is not possible to take a large maintenance window to update the entire cluster at one time. In these scenarios, you can create multiple custom machine config pools (MCPs) to prevent certain worker nodes from updating when you update the cluster. After the rest of the cluster is updated, you can update those worker nodes in batches at appropriate times. For example, if you have a cluster with 100 nodes with 10% excess capacity, maintenance windows that must not exceed 4 hours, and you know that it takes no longer than 8 minutes to drain and reboot a worker node, you can leverage MCPs to meet your goals. For example, you could define four MCPs, named workerpool-canary , workerpool-A , workerpool-B , and workerpool-C , with 10, 30, 30, and 30 nodes respectively. During your first maintenance window, you would pause the MCP for workerpool-A , workerpool-B , and workerpool-C , then initiate the cluster update. This updates components that run on top of OpenShift Container Platform and the 10 nodes which are members of the workerpool-canary MCP, because that pool was not paused. The other three MCPs are not updated, because they were paused. If, for some reason, you determine that your cluster or workload health was negatively affected by the workerpool-canary update, you would then cordon and drain all nodes in that pool while still maintaining sufficient capacity until you have diagnosed the problem. When everything is working as expected, you would then evaluate the cluster and workload health before deciding to unpause, and thus update, workerpool-A , workerpool-B , and workerpool-C in succession during each additional maintenance window. While managing worker node updates using custom MCPs provides flexibility, it can be a time-consuming process that requires you to execute multiple commands. This complexity can result in errors that can affect the entire cluster. It is recommended that you carefully consider your organizational needs and plan the implementation of the process before you start. Note It is not recommended to update the MCPs to different OpenShift Container Platform versions. For example, do not update one MCP from 4.y.10 to 4.y.11 and another to 4.y.12. This scenario has not been tested and might result in an undefined cluster state. Important Pausing a machine config pool prevents the Machine Config Operator from applying any configuration changes on the associated nodes. Pausing an MCP also prevents any automatically-rotated certificates from being pushed to the associated nodes, including the automatic CA rotation of the kube-apiserver-to-kubelet-signer CA certificate.
If the MCP is paused when the kube-apiserver-to-kubelet-signer CA certificate expires and the MCO attempts to automatically renew the certificate, the new certificate is created but not applied across the nodes in the respective machine config pool. This causes failure in multiple oc commands, including but not limited to oc debug , oc logs , oc exec , and oc attach . Pausing an MCP should be done with careful consideration about the kube-apiserver-to-kubelet-signer CA certificate expiration and for short periods of time only. 8.1. About the canary rollout update process and MCPs In OpenShift Container Platform, nodes are not considered individually. Nodes are grouped into machine config pools (MCP). There are two MCPs in a default OpenShift Container Platform cluster: one for the control plane nodes and one for the worker nodes. An OpenShift Container Platform update affects all MCPs concurrently. During the update, the Machine Config Operator (MCO) drains and cordons all nodes within an MCP up to the specified maxUnavailable number of nodes (if specified), by default 1 . Draining and cordoning a node deschedules all pods on the node and marks the node as unschedulable. After the node is drained, the Machine Config Daemon applies a new machine configuration, which can include updating the operating system (OS). Updating the OS requires the host to reboot. To prevent specific nodes from being updated, and therefore from being drained and cordoned, you can create custom MCPs. Then, pause those MCPs to ensure that the nodes associated with those MCPs are not updated. The MCO does not update any paused MCPs. You can create one or more custom MCPs, which can give you more control over the sequence in which you update those nodes. After you update the nodes in the first MCP, you can verify the application compatibility, and then update the rest of the nodes gradually to the new version. Note To ensure the stability of the control plane, creating a custom MCP from the control plane nodes is not supported. The Machine Config Operator (MCO) ignores any custom MCP created for the control plane nodes. You should give careful consideration to the number of MCPs you create and the number of nodes in each MCP, based on your workload deployment topology. For example, if you need to fit updates into specific maintenance windows, you need to know how many nodes OpenShift Container Platform can update within a window. This number is dependent on your unique cluster and workload characteristics. Also, you need to consider how much extra capacity you have available in your cluster. For example, in the case where your applications fail to work as expected on the updated nodes, you can cordon and drain those nodes in the pool, which moves the application pods to other nodes. You need to consider how much extra capacity you have available in order to determine the number of custom MCPs you need and how many nodes are in each MCP. For example, if you use two custom MCPs and 50% of your nodes are in each pool, you need to determine if running 50% of your nodes would provide sufficient quality-of-service (QoS) for your applications. You can use this update process with all documented OpenShift Container Platform update processes. However, the process does not work with Red Hat Enterprise Linux (RHEL) machines, which are updated using Ansible playbooks. 8.2. About performing a canary rollout update This topic describes the general workflow of this canary rollout update process.
The steps to perform each task in the workflow are described in the following sections. Create MCPs based on the worker pool. The number of nodes in each MCP depends on a few factors, such as your maintenance window duration for each MCP, and the amount of reserve capacity, meaning extra worker nodes, available in your cluster. Note You can change the maxUnavailable setting in an MCP to specify the percentage or the number of machines that can be updating at any given time. The default is 1. Add a node selector to the custom MCPs. For each node that you do not want to update simultaneously with the rest of the cluster, add a matching label to the nodes. This label associates the node to the MCP. Note Do not remove the default worker label from the nodes. The nodes must have a role label to function properly in the cluster. Pause the MCPs you do not want to update as part of the update process. Note Pausing the MCP also pauses the kube-apiserver-to-kubelet-signer automatic CA certificates rotation. New CA certificates are generated at 292 days from the installation date and old certificates are removed 365 days from the installation date. See the Understand CA cert auto renewal in Red Hat OpenShift 4 to find out how much time you have before the automatic CA certificate rotation. Make sure the pools are unpaused when the CA cert rotation happens. If the MCPs are paused, the cert rotation does not happen, which causes the cluster to become degraded and causes failure in multiple oc commands, including but not limited to oc debug , oc logs , oc exec , and oc attach . Perform the cluster update. The update process updates the MCPs that are not paused, including the control plane nodes. Test the applications on the updated nodes to ensure they are working as expected. Unpause the remaining MCPs one-by-one and test the applications on those nodes until all worker nodes are updated. Unpausing an MCP starts the update process for the nodes associated with that MCP. You can check the progress of the update from the web console by clicking Administration Cluster settings . Or, use the oc get machineconfigpools CLI command. Optionally, remove the custom label from updated nodes and delete the custom MCPs. 8.3. Creating machine config pools to perform a canary rollout update The first task in performing this canary rollout update is to create one or more machine config pools (MCP). Create an MCP from a worker node. List the worker nodes in your cluster. USD oc get -l 'node-role.kubernetes.io/master!=' -o 'jsonpath={range .items[*]}{.metadata.name}{"\n"}{end}' nodes Example output ci-ln-pwnll6b-f76d1-s8t9n-worker-a-s75z4 ci-ln-pwnll6b-f76d1-s8t9n-worker-b-dglj2 ci-ln-pwnll6b-f76d1-s8t9n-worker-c-lldbm For the nodes you want to delay, add a custom label to the node: USD oc label node <node name> node-role.kubernetes.io/<custom-label>= For example: USD oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary= Example output node/ci-ln-gtrwm8t-f76d1-spbl7-worker-a-xk76k labeled Create the new MCP: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: workerpool-canary 1 spec: machineConfigSelector: matchExpressions: 2 - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,workerpool-canary] } nodeSelector: matchLabels: node-role.kubernetes.io/workerpool-canary: "" 3 1 Specify a name for the MCP. 2 Specify the worker and custom MCP name. 
3 Specify the custom label you added to the nodes that you want in this pool. USD oc create -f <file_name> Example output machineconfigpool.machineconfiguration.openshift.io/workerpool-canary created View the list of MCPs in the cluster and their current state: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-b0bb90c4921860f2a5d8a2f8137c1867 True False False 3 3 3 0 97m workerpool-canary rendered-workerpool-canary-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 1 1 1 0 2m42s worker rendered-worker-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 2 2 2 0 97m The new machine config pool, workerpool-canary , is created and the number of nodes to which you added the custom label is shown in the machine counts. The worker MCP machine counts are reduced by the same number. It can take several minutes to update the machine counts. In this example, one node was moved from the worker MCP to the workerpool-canary MCP. 8.4. Pausing the machine config pools In this canary rollout update process, after you label the nodes that you do not want to update with the rest of your OpenShift Container Platform cluster and create the machine config pools (MCPs), you pause those MCPs. Pausing an MCP prevents the Machine Config Operator (MCO) from updating the nodes associated with that MCP. Note Pausing the MCP also pauses the kube-apiserver-to-kubelet-signer automatic CA certificates rotation. New CA certificates are generated at 292 days from the installation date and old certificates are removed 365 days from the installation date. See the Understand CA cert auto renewal in Red Hat OpenShift 4 to find out how much time you have before the automatic CA certificate rotation. Make sure the pools are unpaused when the CA cert rotation happens. If the MCPs are paused, the cert rotation does not happen, which causes the cluster to become degraded and causes failure in multiple oc commands, including but not limited to oc debug , oc logs , oc exec , and oc attach . To pause an MCP: Patch the MCP that you want to pause: USD oc patch mcp/<mcp_name> --patch '{"spec":{"paused":true}}' --type=merge For example: USD oc patch mcp/workerpool-canary --patch '{"spec":{"paused":true}}' --type=merge Example output machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched 8.5. Performing the cluster update When the MCPs enter the ready state, you can perform the cluster update. See one of the following update methods, as appropriate for your cluster: Updating a cluster using the web console Updating a cluster using the CLI After the update is complete, you can start to unpause the MCPs one-by-one. 8.6. Unpausing the machine config pools In this canary rollout update process, after the OpenShift Container Platform update is complete, unpause your custom MCPs one-by-one. Unpausing an MCP allows the Machine Config Operator (MCO) to update the nodes associated with that MCP. To unpause an MCP: Patch the MCP that you want to unpause: USD oc patch mcp/<mcp_name> --patch '{"spec":{"paused":false}}' --type=merge For example: USD oc patch mcp/workerpool-canary --patch '{"spec":{"paused":false}}' --type=merge Example output machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched You can check the progress of the update by using the oc get machineconfigpools command. Test your applications on the updated nodes to ensure that they are working as expected.
Unpause any other paused MCPs one-by-one and verify that your applications work. 8.6.1. In case of application failure In case of a failure, such as your applications not working on the updated nodes, you can cordon and drain the nodes in the pool, which moves the application pods to other nodes to help maintain the quality-of-service for the applications. This first MCP should be no larger than the excess capacity. 8.7. Moving a node to the original machine config pool In this canary rollout update process, after you have unpaused a custom machine config pool (MCP) and verified that the applications on the nodes associated with that MCP are working as expected, you should move the node back to its original MCP by removing the custom label you added to the node. Important A node must have a role to be properly functioning in the cluster. To move a node to its original MCP: Remove the custom label from the node. USD oc label node <node_name> node-role.kubernetes.io/<custom-label>- For example: USD oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary- Example output node/ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz labeled The MCO moves the nodes back to the original MCP and reconciles the node to the MCP configuration. View the list of MCPs in the cluster and their current state: USDoc get mcp NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-1203f157d053fd987c7cbd91e3fbc0ed True False False 3 3 3 0 61m workerpool-canary rendered-mcp-noupdate-5ad4791166c468f3a35cd16e734c9028 True False False 0 0 0 0 21m worker rendered-worker-5ad4791166c468f3a35cd16e734c9028 True False False 3 3 3 0 61m The node is removed from the custom MCP and moved back to the original MCP. It can take several minutes to update the machine counts. In this example, one node was moved from the removed workerpool-canary MCP to the `worker`MCP. Optional: Delete the custom MCP: USD oc delete mcp <mcp_name>
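The note in section 8.2 mentions the maxUnavailable setting of an MCP but does not show how to change it. One possible way to raise it on a custom pool, sketched here with the workerpool-A name from the earlier example and not taken from the original procedure, is a merge patch:

oc patch mcp/workerpool-A --type=merge --patch '{"spec":{"maxUnavailable":2}}'

The field also accepts a percentage string such as "10%" instead of an absolute node count; verify the setting against the documentation for your cluster version before relying on it.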
|
[
"oc get -l 'node-role.kubernetes.io/master!=' -o 'jsonpath={range .items[*]}{.metadata.name}{\"\\n\"}{end}' nodes",
"ci-ln-pwnll6b-f76d1-s8t9n-worker-a-s75z4 ci-ln-pwnll6b-f76d1-s8t9n-worker-b-dglj2 ci-ln-pwnll6b-f76d1-s8t9n-worker-c-lldbm",
"oc label node <node name> node-role.kubernetes.io/<custom-label>=",
"oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary=",
"node/ci-ln-gtrwm8t-f76d1-spbl7-worker-a-xk76k labeled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: workerpool-canary 1 spec: machineConfigSelector: matchExpressions: 2 - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,workerpool-canary] } nodeSelector: matchLabels: node-role.kubernetes.io/workerpool-canary: \"\" 3",
"oc create -f <file_name>",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-b0bb90c4921860f2a5d8a2f8137c1867 True False False 3 3 3 0 97m workerpool-canary rendered-workerpool-canary-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 1 1 1 0 2m42s worker rendered-worker-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 2 2 2 0 97m",
"oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":true}}' --type=merge",
"oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":true}}' --type=merge",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched",
"oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":false}}' --type=merge",
"oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":false}}' --type=merge",
"machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched",
"oc label node <node_name> node-role.kubernetes.io/<custom-label>-",
"oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary-",
"node/ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz labeled",
"USDoc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-1203f157d053fd987c7cbd91e3fbc0ed True False False 3 3 3 0 61m workerpool-canary rendered-mcp-noupdate-5ad4791166c468f3a35cd16e734c9028 True False False 0 0 0 0 21m worker rendered-worker-5ad4791166c468f3a35cd16e734c9028 True False False 3 3 3 0 61m",
"oc delete mcp <mcp_name>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/updating_clusters/update-using-custom-machine-config-pools
|
Chapter 1. Updating Red Hat Enterprise Linux AI
|
Chapter 1. Updating Red Hat Enterprise Linux AI Red Hat Enterprise Linux AI allows you to update your instance so you can use the latest version of RHEL AI and InstructLab. 1.1. Updating your RHEL AI instance You can update your instance to use the latest version of RHEL AI and InstructLab. Prerequisites You installed and deployed a Red Hat Enterprise Linux AI instance on one of the supported platforms. You created a Red Hat registry account. You have root user access on your machine. Procedure Log into your Red Hat registry account with the podman command: USD sudo podman login registry.redhat.io --username <user-name> --password <user-password> --authfile /etc/ostree/auth.json Or you can log in with the skopeo command: USD sudo skopeo login registry.redhat.io --username <user-name> --password <user-password> --authfile /etc/ostree/auth.json Upgrading to a minor version of Red Hat Enterprise Linux AI You can upgrade your instance to use the latest version of Red Hat Enterprise Linux AI by using the bootc image for your machine's hardware. USD sudo bootc switch registry.redhat.io/rhelai1/bootc-<hardware-vendor>-rhel9:<rhel-ai-version> where <hardware-vendor> Specify the hardware vendor for your accelerators. Valid values include: nvidia , amd , and intel . <rhel-ai-version> Specify the RHEL AI version you want to upgrade to. Example command for machines with NVIDIA accelerators USD sudo bootc switch registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.4 Restart your system by running the following command: USD sudo reboot -n Important When upgrading to RHEL AI version 1.4, the /etc/skel/.config/containers/storage.conf storage configuration needs to be copied to your <user-home>/.config/containers/storage.conf home directory before re-initializing. For example: USD cp /etc/skel/.config/containers/storage.conf <user-home>/.config/containers/storage.conf Upgrading to a z-stream version of Red Hat Enterprise Linux AI If a z-stream exists, you can upgrade your system to a z-stream version of RHEL AI by running the following command: USD sudo bootc upgrade --apply It is recommended to re-initialize your environment and configurations after you upgrade to the latest RHEL AI version. Warning The ilab config init command overwrites your existing config.yaml file and sets it to the default configurations per your system hardware. If you customized the config.yaml file, ensure you are familiar with your custom configurations before re-initializing. USD ilab config init
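After the reboot, a quick way to confirm which image the system booted into is the bootc status command, assuming the bootc CLI that ships with RHEL AI bootable images:

sudo bootc status

The output lists the booted image reference, which should match the registry.redhat.io/rhelai1/bootc-... image that you switched to.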
|
[
"sudo podman login registry.redhat.io --username <user-name> --password <user-password> --authfile /etc/ostree/auth.json",
"sudo skopeo login registry.redhat.io --username <user-name> --password <user-password> --authfile /etc/ostree/auth.json",
"sudo bootc switch registry.redhat.io/rhelai1/bootc-<hardware-vendor>-rhel9:<rhel-ai-version>",
"sudo bootc switch registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.4",
"sudo reboot -n",
"cp /etc/skel/.config/containers/storage.conf <user-home>/.config/containers/storage.con",
"sudo bootc upgrade --apply",
"ilab config init"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/updating/updating_system
|
Chapter 1. Understanding image builds
|
Chapter 1. Understanding image builds 1.1. Builds A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process. Red Hat OpenShift Service on AWS uses Kubernetes by creating containers from build images and pushing them to a container image registry. Build objects share common characteristics including inputs for a build, the requirement to complete a build process, logging the build process, publishing resources from successful builds, and publishing the final status of the build. Builds take advantage of resource restrictions, specifying limitations on resources such as CPU usage, memory usage, and build or pod execution time. The resulting object of a build depends on the builder used to create it. For docker and S2I builds, the resulting objects are runnable images. For custom builds, the resulting objects are whatever the builder image author has specified. Additionally, the pipeline build strategy can be used to implement sophisticated workflows: Continuous integration Continuous deployment 1.1.1. Docker build Red Hat OpenShift Service on AWS uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation . Tip If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation. 1.1.2. Source-to-image build Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source and is ready to use with the buildah run command. S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, and so on. 1.1.3. Pipeline build Important The Pipeline build strategy is deprecated in Red Hat OpenShift Service on AWS 4. Equivalent and improved functionality is present in the Red Hat OpenShift Service on AWS Pipelines based on Tekton. Jenkins images on Red Hat OpenShift Service on AWS are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plugin. The build can be started, monitored, and managed by Red Hat OpenShift Service on AWS in the same way as any other build type. Pipeline workflows are defined in a jenkinsfile , either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration.
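To make the BuildConfig concept concrete, the following is a minimal sketch of a docker-strategy build configuration; the Git URL and resource names are illustrative placeholders, not taken from this document:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: hello-docker
spec:
  source:
    git:
      uri: https://github.com/example/hello.git  # hypothetical repository containing a Dockerfile
  strategy:
    type: Docker
    dockerStrategy: {}
  output:
    to:
      kind: ImageStreamTag
      name: hello:latest

A build from this configuration can then be started with oc start-build hello-docker, and the resulting runnable image is pushed to the hello:latest image stream tag.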
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/builds_using_buildconfig/understanding-image-builds
|
Chapter 5. Reference materials
|
Chapter 5. Reference materials To learn more about the Red Hat Insights for OpenShift, see the following resources: Red Hat Insights overview page Red Hat Insights for OpenShift Documentation OpenShift Cluster Manager
| null |
https://docs.redhat.com/en/documentation/red_hat_insights_for_openshift/1-latest/html/assessing_security_vulnerabilities_in_your_openshift_cluster_using_red_hat_insights/assembly_vuln-reference
|
Chapter 1. Supported upgrade paths
|
Chapter 1. Supported upgrade paths Currently, it is possible to perform an in-place upgrade from RHEL 7 to the following target RHEL 8 minor versions: System configuration Source OS version Target OS version SAP HANA RHEL 7.9 RHEL 8.8 (default) RHEL 8.10 SAP NetWeaver and other SAP Applications RHEL 7.9 RHEL 8.8 RHEL 8.10 (default) SAP HANA is validated by SAP for RHEL minor versions, which receive package updates for longer than 6 months. Therefore, for the SAP HANA hosts, the upgrade paths include only EUS/E4S releases plus the last minor release for a given major release. Upgrading SAP HANA System describes restrictions and detailed steps for upgrading a SAP HANA system. SAP NetWeaver is validated by SAP for each major RHEL version. The supported in-place upgrade path for this scenario is from RHEL 7.9 to the RHEL 8.x minor version, which is supported by Leapp for non-HANA systems as per the Upgrading from RHEL 7 to RHEL 8 document. As an exception, for cloud providers, the upgrade of SAP NetWeaver systems is supported on the two latest EUS/E4S releases. Upgrading SAP NetWeaver System describes certain deviations from the default upgrade procedure. For systems on which both SAP HANA and SAP NetWeaver are installed, the SAP HANA restrictions apply. For more information about supported upgrade paths, see Supported in-place upgrade paths for Red Hat Enterprise Linux .
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/upgrading_sap_environments_from_rhel_7_to_rhel_8/asmb_supported-upgrade-paths_upgrading-7-to-8
|
Appendix C. Using AMQ Broker with the examples
|
Appendix C. Using AMQ Broker with the examples The AMQ Core Protocol JMS examples require a running message broker with a queue named exampleQueue . Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . C.2. Starting the broker Procedure Use the artemis run command to start the broker. USD <broker-instance-dir> /bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. USD example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named exampleQueue . USD <broker-instance-dir> /bin/artemis queue create --name exampleQueue --address exampleQueue --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. USD <broker-instance-dir> /bin/artemis stop Revised on 2024-01-22 11:05:17 UTC
|
[
"<broker-instance-dir> /bin/artemis run",
"example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live",
"<broker-instance-dir> /bin/artemis queue create --name exampleQueue --address exampleQueue --auto-create-address --anycast",
"<broker-instance-dir> /bin/artemis stop"
] |
https://docs.redhat.com/en/documentation/red_hat_amq_core_protocol_jms/7.11/html/using_amq_core_protocol_jms/using_the_broker_with_the_examples
|
15.4. Specifying Default User and Group Attributes
|
15.4. Specifying Default User and Group Attributes Identity Management uses a template when it creates new entries. For users, the template is very specific. Identity Management uses default values for several core attributes for IdM user accounts. These defaults can define actual values for user account attributes (such as the home directory location) or it can define the format of attribute values, such as the user name length. These settings also define the object classes assigned to users. For groups, the template only defines the assigned object classes. These default definitions are all contained in a single configuration entry for the IdM server, cn=ipaconfig,cn=etc,dc=example,dc=com . The configuration can be changed using the ipa config-mod command. Table 15.3. Default User Parameters Field Command-Line Option Descriptions Maximum user name length --maxusername Sets the maximum number of characters for user names. The default value is 32. Root for home directories --homedirectory Sets the default directory to use for user home directories. The default value is /home . Default shell --defaultshell Sets the default shell to use for users. The default value is /bin/sh . Default user group --defaultgroup Sets the default group to which all newly created accounts are added. The default value is ipausers , which is automatically created during the IdM server installation process. Default e-mail domain --emaildomain Sets the email domain to use to create email addresses based on the new accounts. The default is the IdM server domain. Search time limit --searchtimelimit Sets the maximum amount of time, in seconds, to spend on a search before the server returns results. Search size limit --searchrecordslimit Sets the maximum number of records to return in a search. User search fields --usersearch Sets the fields in a user entry that can be used as a search string. Any attribute listed has an index kept for that attribute, so setting too many attributes could affect server performance. Group search fields --groupsearch Sets the fields in a group entry that can be used as a search string. Certificate subject base Sets the base DN to use when creating subject DNs for client certificates. This is configured when the server is set up. Default user object classes --userobjectclasses Defines an object class that is used to create IdM user accounts. This can be invoked multiple times. The complete list of object classes must be given because the list is overwritten when the command is run. Default group object classes --groupobjectclasses Defines an object class that is used to create IdM group accounts. This can be invoked multiple times. The complete list of object classes must be given because the list is overwritten when the command is run. Password expiration notification --pwdexpnotify Sets how long, in days, before a password expires for the server to send a notification. Password plug-in features Sets the format of passwords that are allowed for users. 15.4.1. Viewing Attributes from the Web UI Open the IPA Server tab. Select the Configuration subtab. The complete configuration entry is shown in three sections, one for all search limits, one for user templates, and one for group templates. Figure 15.4. Setting Search Limits Figure 15.5. User Attributes Figure 15.6. Group Attributes 15.4.2. Viewing Attributes from the Command Line The config-show command shows the current configuration which applies to all new user accounts. 
By default, only the most common attributes are displayed; use the --all option to show the complete configuration.
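As a brief illustration of changing these defaults, the options from the table above can be combined in a single ipa config-mod call; the values here are only examples (run kinit admin first):

ipa config-mod --maxusername=64 --defaultshell=/bin/bash --emaildomain=example.com

The new defaults apply only to user accounts created after the change; existing entries keep their current attribute values.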
|
[
"[bjensen@server ~]USD kinit admin [bjensen@server ~]USD ipa config-show --all dn: cn=ipaConfig,cn=etc,dc=example,dc=com Maximum username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers Default e-mail domain: example.com Search time limit: 2 Search size limit: 100 User search fields: uid,givenname,sn,telephonenumber,ou,title Group search fields: cn,description Enable migration mode: FALSE Certificate Subject base: O=EXAMPLE.COM Default group objectclasses: top, groupofnames, nestedgroup, ipausergroup, ipaobject Default user objectclasses: top, person, organizationalperson, inetorgperson, inetuser, posixaccount, krbprincipalaux, krbticketpolicyaux, ipaobject, ipasshuser Password Expiration Notification (days): 4 Password plugin features: AllowNThash SELinux user map order: guest_u:s0USDxguest_u:s0USDuser_u:s0USDstaff_u:s0-s0:c0.c1023USDunconfined_u:s0-s0:c0.c1023 Default SELinux user: unconfined_u:s0-s0:c0.c1023 Default PAC types: MS-PAC, nfs:NONE cn: ipaConfig objectclass: nsContainer, top, ipaGuiConfig, ipaConfigObject"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/Configuring_IPA_Users-Specifying_Default_User_Settings
|
20.8. Disabling Network Encryption
|
20.8. Disabling Network Encryption Follow this section to disable network encryption on clients and servers. Procedure 20.13. Disabling I/O encryption Unmount volumes from all clients Run the following command on each client for any volume that should have encryption disabled. Stop encrypted volumes Run the following command on any server to stop volumes that should have encryption disabled. Disable server and client SSL usage Run the following commands for each volume that should have encryption disabled. Start volumes Mount volumes on clients The process for mounting a volume depends on the protocol your client is using. The following command mounts a volume using the native FUSE protocol. Procedure 20.14. Disabling management encryption Unmount volumes from all clients Run the following command on each client for any volume that should have encryption disabled. Stop glusterd on all nodes For Red Hat Enterprise Linux 7 based installations: For Red Hat Enterprise Linux 6 based installations: Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See Version Details table in section Red Hat Gluster Storage Software Components and Versions of the Installation Guide Remove the secure-access file Run the following command on all servers and clients to remove the secure-access file. You can just rename the file if you are only disabling encryption temporarily. Start glusterd on all nodes For Red Hat Enterprise Linux 7 based installations: For Red Hat Enterprise Linux 6 based installations: Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See Version Details table in section Red Hat Gluster Storage Software Components and Versions of the Installation Guide Mount volumes on clients The process for mounting a volume depends on the protocol your client is using. The following command mounts a volume using the native FUSE protocol. Important If you are permanently disabling network encryption, you can now delete the SSL certificate files. Do not delete these files if you are only disabling encryption temporarily.
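After restarting the volumes, one way to confirm that encryption is really disabled is to query the volume options, assuming the gluster volume get command is available in your release; testvolume is the example volume name used above:

gluster volume get testvolume client.ssl
gluster volume get testvolume server.ssl

Both options should report off for a volume that has network encryption disabled.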
|
[
"umount /mountpoint",
"gluster volume stop volname",
"gluster volume set volname server.ssl off gluster volume set volname client.ssl off",
"gluster volume start volname",
"mount -t glusterfs server1:/testvolume /mnt/glusterfs",
"umount /mountpoint",
"systemctl stop glusterd",
"service glusterd stop",
"rm -f /var/lib/glusterd/secure-access",
"systemctl start glusterd",
"service glusterd start",
"mount -t glusterfs server1:/testvolume /mnt/glusterfs"
] |
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-disable-network-encryption
|
Chapter 7. Clair for Red Hat Quay
|
Chapter 7. Clair for Red Hat Quay Clair v4 (Clair) is an open source application that leverages static code analyses for parsing image content and reporting vulnerabilities affecting the content. Clair is packaged with Red Hat Quay and can be used in both standalone and Operator deployments. It can be run in highly scalable configurations, where components can be scaled separately as appropriate for enterprise environments. 7.1. Clair vulnerability databases Clair uses the following vulnerability databases to report issues in your images: Ubuntu Oval database Debian Security Tracker Red Hat Enterprise Linux (RHEL) Oval database SUSE Oval database Oracle Oval database Alpine SecDB database VMWare Photon OS database Amazon Web Services (AWS) UpdateInfo Open Source Vulnerability (OSV) Database For information about how Clair does security mapping with the different databases, see Claircore Severity Mapping . 7.1.1. Information about Open Source Vulnerability (OSV) database for Clair Open Source Vulnerability (OSV) is a vulnerability database and monitoring service that focuses on tracking and managing security vulnerabilities in open source software. OSV provides a comprehensive and up-to-date database of known security vulnerabilities in open source projects. It covers a wide range of open source software, including libraries, frameworks, and other components that are used in software development. For a full list of included ecosystems, see defined ecosystems . Clair also reports vulnerability and security information for golang , java , and ruby ecosystems through the Open Source Vulnerability (OSV) database. By leveraging OSV, developers and organizations can proactively monitor and address security vulnerabilities in open source components that they use, which helps to reduce the risk of security breaches and data compromises in projects. For more information about OSV, see the OSV website . 7.2. Setting up Clair on standalone Red Hat Quay deployments For standalone Red Hat Quay deployments, you can set up Clair manually. Procedure In your Red Hat Quay installation directory, create a new directory for the Clair database data: USD mkdir /home/<user-name>/quay-poc/postgres-clairv4 Set the appropriate permissions for the postgres-clairv4 file by entering the following command: USD setfacl -m u:26:-wx /home/<user-name>/quay-poc/postgres-clairv4 Deploy a Clair Postgres database by entering the following command: USD sudo podman run -d --name postgresql-clairv4 \ -e POSTGRESQL_USER=clairuser \ -e POSTGRESQL_PASSWORD=clairpass \ -e POSTGRESQL_DATABASE=clair \ -e POSTGRESQL_ADMIN_PASSWORD=adminpass \ -p 5433:5433 \ -v /home/<user-name>/quay-poc/postgres-clairv4:/var/lib/pgsql/data:Z \ registry.redhat.io/rhel8/postgresql-13:1-109 Install the Postgres uuid-ossp module for your Clair deployment: USD podman exec -it postgresql-clairv4 /bin/bash -c 'echo "CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\"" | psql -d clair -U postgres' Example output CREATE EXTENSION Note Clair requires the uuid-ossp extension to be added to its Postgres database. For users with the proper privileges, Clair adds the extension automatically. If users do not have the proper privileges, the extension must be added before starting Clair. If the extension is not present, the following error will be displayed when Clair attempts to start: ERROR: Please load the "uuid-ossp" extension. (SQLSTATE 42501) .
Stop the Quay container if it is running and restart it in configuration mode, loading the existing configuration as a volume: USD sudo podman run --rm -it --name quay_config \ -p 80:8080 -p 443:8443 \ -v USDQUAY/config:/conf/stack:Z \ registry.redhat.io/quay/quay-rhel8:{productminv} config secret Log in to the configuration tool and click Enable Security Scanning in the Security Scanner section of the UI. Set the HTTP endpoint for Clair using a port that is not already in use on the quay-server system, for example, 8081 . Create a pre-shared key (PSK) using the Generate PSK button. Security Scanner UI Validate and download the config.yaml file for Red Hat Quay, and then stop the Quay container that is running the configuration editor. Extract the new configuration bundle into your Red Hat Quay installation directory, for example: USD tar xvf quay-config.tar.gz -d /home/<user-name>/quay-poc/ Create a folder for your Clair configuration file, for example: USD mkdir /etc/opt/clairv4/config/ Change into the Clair configuration folder: USD cd /etc/opt/clairv4/config/ Create a Clair configuration file, for example: http_listen_addr: :8081 introspection_addr: :8088 log_level: debug indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true matcher: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable max_conn_pool: 100 migrations: true indexer_addr: clair-indexer notifier: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable delivery_interval: 1m poll_interval: 5m migrations: true auth: psk: key: "MTU5YzA4Y2ZkNzJoMQ==" iss: ["quay"] # tracing and metrics trace: name: "jaeger" probability: 1 jaeger: agent: endpoint: "localhost:6831" service_name: "clair" metrics: name: "prometheus" For more information about Clair's configuration format, see Clair configuration reference . Start Clair by using the container image, mounting in the configuration from the file you created: Note Running multiple Clair containers is also possible, but for deployment scenarios beyond a single container the use of a container orchestrator like Kubernetes or OpenShift Container Platform is strongly recommended. 7.3. Clair on OpenShift Container Platform To set up Clair v4 (Clair) on a Red Hat Quay deployment on OpenShift Container Platform, it is recommended to use the Red Hat Quay Operator. By default, the Red Hat Quay Operator will install or upgrade a Clair deployment along with your Red Hat Quay deployment and configure Clair automatically. 7.4. Testing Clair Use the following procedure to test Clair on either a standalone Red Hat Quay deployment, or on an OpenShift Container Platform Operator-based deployment. Prerequisites You have deployed the Clair container image. Procedure Pull a sample image by entering the following command: USD podman pull ubuntu:20.04 Tag the image to your registry by entering the following command: USD sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04 Push the image to your Red Hat Quay registry by entering the following command: USD sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04 Log in to your Red Hat Quay deployment through the UI. Click the repository name, for example, quayadmin/ubuntu . In the navigation pane, click Tags . 
Report summary Click the image report, for example, 45 medium , to show a more detailed report: Report details Note In some cases, Clair shows duplicate reports on images, for example, ubi8/nodejs-12 or ubi8/nodejs-16 . This occurs because vulnerabilities with the same name are for different packages. This behavior is expected with Clair vulnerability reporting and will not be addressed as a bug.
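If scan results do not appear, a simple first diagnostic step, assuming the container was started with the name clairv4 as in the commands shown with this procedure, is to check the Clair container logs:

sudo podman logs -f clairv4

The startup log reports problems such as the missing uuid-ossp extension error described earlier.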
|
[
"mkdir /home/<user-name>/quay-poc/postgres-clairv4",
"setfacl -m u:26:-wx /home/<user-name>/quay-poc/postgres-clairv4",
"sudo podman run -d --name postgresql-clairv4 -e POSTGRESQL_USER=clairuser -e POSTGRESQL_PASSWORD=clairpass -e POSTGRESQL_DATABASE=clair -e POSTGRESQL_ADMIN_PASSWORD=adminpass -p 5433:5433 -v /home/<user-name>/quay-poc/postgres-clairv4:/var/lib/pgsql/data:Z registry.redhat.io/rhel8/postgresql-13:1-109",
"podman exec -it postgresql-clairv4 /bin/bash -c 'echo \"CREATE EXTENSION IF NOT EXISTS \\\"uuid-ossp\\\"\" | psql -d clair -U postgres'",
"CREATE EXTENSION",
"sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 -v USDQUAY/config:/conf/stack:Z registry.redhat.io/quay/quay-rhel8:{productminv} config secret",
"tar xvf quay-config.tar.gz -d /home/<user-name>/quay-poc/",
"mkdir /etc/opt/clairv4/config/",
"cd /etc/opt/clairv4/config/",
"http_listen_addr: :8081 introspection_addr: :8088 log_level: debug indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true matcher: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable max_conn_pool: 100 migrations: true indexer_addr: clair-indexer notifier: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable delivery_interval: 1m poll_interval: 5m migrations: true auth: psk: key: \"MTU5YzA4Y2ZkNzJoMQ==\" iss: [\"quay\"] tracing and metrics trace: name: \"jaeger\" probability: 1 jaeger: agent: endpoint: \"localhost:6831\" service_name: \"clair\" metrics: name: \"prometheus\"",
"sudo podman run -d --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/opt/clairv4/config:/clair:Z registry.redhat.io/quay/clair-rhel8:v3.9.10",
"podman pull ubuntu:20.04",
"sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04",
"sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/manage_red_hat_quay/clair-vulnerability-scanner
|
20.16.9.8. Multicast tunnel
|
20.16.9.8. Multicast tunnel A multicast group may be used to represent a virtual network. Guest virtual machines whose network devices are within the same multicast group can talk to each other, even if they reside on multiple host physical machines. This mode may be used as an unprivileged user. There is no default DNS or DHCP support and no outgoing network access. To provide outgoing network access, one of the guest virtual machines should have a second NIC which is connected to one of the first 4 network types in order to provide appropriate routing. The multicast protocol is compatible with protocols used by User Mode Linux guest virtual machines as well. Note that the source address used must be from the multicast address block. A multicast tunnel is created by manipulating the interface type using a management tool and setting or changing it to mcast , and providing a mac and source address. The result is shown in changes made to the domain XML: ... <devices> <interface type='mcast'> <mac address='52:54:00:6d:90:01'/> <source address='230.0.0.1' port='5558'/> </interface> </devices> ... Figure 20.45. Devices - network interfaces - multicast tunnel
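To illustrate how guests join the same virtual network, a second guest on the same or a different host physical machine could use an interface definition such as the following sketch; the MAC address is a hypothetical example, while the multicast address and port must match the first guest:
...
<devices>
  <interface type='mcast'>
    <mac address='52:54:00:6d:90:02'/>
    <source address='230.0.0.1' port='5558'/>
  </interface>
</devices>
...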
|
[
"<devices> <interface type='mcast'> <mac address='52:54:00:6d:90:01'> <source address='230.0.0.1' port='5558'/> </interface> </devices>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-section-libvirt-dom-xml-devices-network-interfaces-multicast-tunnel
|
32.7. Post-installation Script
|
32.7. Post-installation Script You have the option of adding commands to run on the system once the installation is complete. This section must be placed towards the end of the kickstart file, after the kickstart commands described in Section 32.4, "Kickstart Options" , and must start with the %post command and end with the %end command. If your kickstart file also includes a %pre section, the order of the %pre and %post sections does not matter. See Section 32.8, "Kickstart Examples" for example configuration files. This section is useful for functions such as installing additional software and configuring an additional nameserver. Note If you configured the network with static IP information, including a nameserver, you can access the network and resolve IP addresses in the %post section. If you configured the network for DHCP, the /etc/resolv.conf file has not been completed when the installation executes the %post section. You can access the network, but you cannot resolve IP addresses. Thus, if you are using DHCP, you must specify IP addresses in the %post section. Note The post-install script is run in a chroot environment; therefore, tasks such as copying scripts or RPMs from the installation media do not work. --nochroot Allows you to specify commands that you would like to run outside of the chroot environment. The following example copies the file /etc/resolv.conf to the file system that was just installed. --interpreter /usr/bin/python Allows you to specify a different scripting language, such as Python. Replace /usr/bin/python with the scripting language of your choice. --log /path/to/logfile Logs the output of the post-install script. Note that the path of the log file must take into account whether or not you use the --nochroot option. For example, without --nochroot : with --nochroot :
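Putting these options together, a complete %post section covering the use cases mentioned earlier (installing additional software and configuring an additional nameserver) might look like the following sketch; the nameserver address and package name are illustrative, and installing packages in %post requires network access and a reachable repository:
%post --log=/root/ks-post.log
# Add an additional nameserver (illustrative address)
echo "nameserver 10.0.0.2" >> /etc/resolv.conf
# Install additional software (illustrative package name)
yum install -y screen
%end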
|
[
"%post --nochroot cp /etc/resolv.conf /mnt/sysimage/etc/resolv.conf",
"%post --log=/root/ks-post.log",
"%post --nochroot --log=/mnt/sysimage/root/ks-post.log"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-kickstart2-postinstallconfig
|
Chapter 58. Securing passwords with a keystore
|
Chapter 58. Securing passwords with a keystore You can use a keystore to encrypt passwords that are used for communication between Business Central and KIE Server. You should encrypt both controller and KIE Server passwords. If Business Central and KIE Server are deployed to different application servers, then both application servers should use the keystore. Use Java Cryptography Extension KeyStore (JCEKS) for your keystore because it supports symmetric keys. Use KeyTool, which is part of the JDK installation, to create a new JCEKS. Note If KIE Server is not configured with JCEKS, KIE Server passwords are stored in system properties in plain text form. Prerequisites KIE Server is installed in Oracle WebLogic Server. A KIE Server user with the kie-server role has been created, as described in Section 54.1, "Configuring the KIE Server group and users" . Java 8 or higher is installed. Procedure To use KeyTool to create a JCEKS, enter the following command in the Java 8 home directory: $<JAVA_HOME>/bin/keytool -importpassword -keystore <KEYSTORE_PATH> -keypass <ALIAS_KEY_PASSWORD> -alias <PASSWORD_ALIAS> -storepass <KEYSTORE_PASSWORD> -storetype JCEKS In this example, replace the following variables: <KEYSTORE_PATH> : The path where the keystore will be stored <KEYSTORE_PASSWORD> : The keystore password <ALIAS_KEY_PASSWORD> : The password used to access values stored with the alias <PASSWORD_ALIAS> : The alias of the keystore entry that stores the password When prompted, enter the password for the KIE Server user that you created. Set the system properties listed in the following table: Table 58.1. System properties used to load a KIE Server JCEKS System property Placeholder Description kie.keystore.keyStoreURL <KEYSTORE_URL> URL for the JCEKS that you want to use, for example file:///home/kie/keystores/keystore.jceks kie.keystore.keyStorePwd <KEYSTORE_PWD> Password for the JCEKS kie.keystore.key.server.alias <KEY_SERVER_ALIAS> Alias of the key for REST services where the password is stored kie.keystore.key.server.pwd <KEY_SERVER_PWD> Password of the alias for REST services with the stored password kie.keystore.key.ctrl.alias <KEY_CONTROL_ALIAS> Alias of the key for default REST Process Automation Controller where the password is stored kie.keystore.key.ctrl.pwd <KEY_CONTROL_PWD> Password of the alias for default REST Process Automation Controller with the stored password Start KIE Server to verify the configuration.
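Where exactly you set these system properties depends on how your application server is started; as a sketch, they can be passed as JVM arguments appended to the start arguments of the server that hosts KIE Server. The keystore URL reuses the example value from the table, while the alias names and passwords below are illustrative:
-Dkie.keystore.keyStoreURL=file:///home/kie/keystores/keystore.jceks
-Dkie.keystore.keyStorePwd=mykeystorepass
-Dkie.keystore.key.server.alias=kieserveralias
-Dkie.keystore.key.server.pwd=kieserverpass
-Dkie.keystore.key.ctrl.alias=kiecontrolleralias
-Dkie.keystore.key.ctrl.pwd=kiecontrollerpass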
|
[
"USD<JAVA_HOME>/bin/keytool -importpassword -keystore <KEYSTORE_PATH> -keypass <ALIAS_KEY_PASSWORD> -alias <PASSWORD_ALIAS> -storepass <KEYSTORE_PASSWORD> -storetype JCEKS"
] |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/securing-passwords-wls-proc_kie-server-on-wls
|
6.9 Release Notes
|
6.9 Release Notes Red Hat Enterprise Linux 6.9 Release Notes for Red Hat Enterprise Linux 6.9 Edition 9 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_release_notes/index
|
Chapter 7. Monitoring brokers for problems
|
Chapter 7. Monitoring brokers for problems AMQ Broker includes an internal tool called the Critical Analyzer that actively monitors running brokers for problems such as deadlock conditions. In a production environment, a problem such as a deadlock condition can be caused by IO errors, a defective disk, memory shortage, or excess CPU usage caused by other processes. The Critical Analyzer periodically measures the response time for critical operations such as queue delivery (that is, adding of messages to a queue on the broker) and journal operations. If the response time of a checked operation exceeds a configurable timeout value, the broker is considered unstable. In this case, you can configure the Critical Analyzer to simply log a message or take action to protect the broker, such as shutting down the broker or stopping the virtual machine (VM) that is running the broker. 7.1. Configuring the Critical Analyzer The following procedure shows how to configure the Critical Analyzer to monitor the broker for problems. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. The default configuration for the Critical Analyzer is shown below. Specify parameter values, as described below. critical-analyzer Specifies whether to enable or disable the Critical Analyzer tool. The default value is true , which means that the tool is enabled. critical-analyzer-timeout Timeout, in milliseconds, for the checks run by the Critical Analyzer. If the time taken by one of the checked operations exceeds this value, the broker is considered unstable. critical-analyzer-check-period Time period, in milliseconds, between consecutive checks by the Critical Analyzer for each operation. critical-analyzer-policy If the broker fails a check and is considered unstable, this parameter specifies whether the broker logs a message ( LOG ), stops the virtual machine (VM) hosting the broker ( HALT ), or shuts down the broker ( SHUTDOWN ). Based on the policy option that you have configured, if the response time for a critical operation exceeds the configured timeout value, you see output that resembles one of the following: critical-analyzer-policy = LOG critical-analyzer-policy = HALT critical-analyzer-policy = SHUTDOWN You also see a thread dump on the broker that resembles the following:
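In addition to the output examples that follow, note that the policy can be relaxed: to have the broker only log a warning instead of halting the VM when a check fails, the same elements could be configured as in the following sketch; the timeout and check-period values are illustrative:
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>180000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>LOG</critical-analyzer-policy>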
|
[
"<critical-analyzer>true</critical-analyzer> <critical-analyzer-timeout>120000</critical-analyzer-timeout> <critical-analyzer-check-period>60000</critical-analyzer-check-period> <critical-analyzer-policy>HALT</critical-analyzer-policy>",
"[Artemis Critical Analyzer] 18:11:52,145 WARN [org.apache.activemq.artemis.core.server] AMQ224081: The component org.apache.activemq.artemis.tests.integration.critical.CriticalSimpleTestUSD2@5af97850 is not responsive",
"[Artemis Critical Analyzer] 18:10:00,831 ERROR [org.apache.activemq.artemis.core.server] AMQ224079: The process for the virtual machine will be killed, as component org.apache.activemq.artemis.tests.integration.critical.CriticalSimpleTestUSD2@5af97850 is not responsive",
"[Artemis Critical Analyzer] 18:07:53,475 ERROR [org.apache.activemq.artemis.core.server] AMQ224080: The server process will now be stopped, as component org.apache.activemq.artemis.tests.integration.critical.CriticalSimpleTestUSD2@5af97850 is not responsive",
"[Artemis Critical Analyzer] 18:10:00,836 WARN [org.apache.activemq.artemis.core.server] AMQ222199: Thread dump: AMQ119001: Generating thread dump * =============================================================================== AMQ119002: Thread Thread[Thread-1 (ActiveMQ-scheduled-threads),5,main] name = Thread-1 (ActiveMQ-scheduled-threads) id = 19 group = java.lang.ThreadGroup[name=main,maxpri=10] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizerUSDConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutorUSDDelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088) java.util.concurrent.ScheduledThreadPoolExecutorUSDDelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutorUSDWorker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) =============================================================================== ..... .......... =============================================================================== AMQ119003: End Thread dump *"
] |
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/managing_amq_broker/assembly-br-monitoring-brokers-for-problems_managing
|
Chapter 3. CSINode [storage.k8s.io/v1]
|
Chapter 3. CSINode [storage.k8s.io/v1] Description CSINode holds information about all CSI drivers installed on a node. CSI drivers do not need to create the CSINode object directly. As long as they use the node-driver-registrar sidecar container, the kubelet will automatically populate the CSINode object for the CSI driver as part of kubelet plugin registration. CSINode has the same name as a node. If the object is missing, it means either there are no CSI Drivers available on the node, or the Kubelet version is low enough that it doesn't create this object. CSINode has an OwnerReference that points to the corresponding node object. Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata.name must be the Kubernetes node name. spec object CSINodeSpec holds information about the specification of all CSI drivers installed on a node 3.1.1. .spec Description CSINodeSpec holds information about the specification of all CSI drivers installed on a node Type object Required drivers Property Type Description drivers array drivers is a list of information of all CSI Drivers existing on a node. If all drivers in the list are uninstalled, this can become empty. drivers[] object CSINodeDriver holds information about the specification of one CSI driver installed on a node 3.1.2. .spec.drivers Description drivers is a list of information of all CSI Drivers existing on a node. If all drivers in the list are uninstalled, this can become empty. Type array 3.1.3. .spec.drivers[] Description CSINodeDriver holds information about the specification of one CSI driver installed on a node Type object Required name nodeID Property Type Description allocatable object VolumeNodeResources is a set of resource limits for scheduling of volumes. name string This is the name of the CSI driver that this object refers to. This MUST be the same name returned by the CSI GetPluginName() call for that driver. nodeID string nodeID of the node from the driver point of view. This field enables Kubernetes to communicate with storage systems that do not share the same nomenclature for nodes. For example, Kubernetes may refer to a given node as "node1", but the storage system may refer to the same node as "nodeA". When Kubernetes issues a command to the storage system to attach a volume to a specific node, it can use this field to refer to the node name using the ID that the storage system will understand, e.g. "nodeA" instead of "node1". This field is required. topologyKeys array (string) topologyKeys is the list of keys supported by the driver. When a driver is initialized on a cluster, it provides a set of topology keys that it understands (e.g. "company.com/zone", "company.com/region"). When a driver is initialized on a node, it provides the same topology keys along with values. Kubelet will expose these topology keys as labels on its own node object. 
When Kubernetes does topology aware provisioning, it can use this list to determine which labels it should retrieve from the node object and pass back to the driver. It is possible for different nodes to use different topology keys. This can be empty if driver does not support topology. 3.1.4. .spec.drivers[].allocatable Description VolumeNodeResources is a set of resource limits for scheduling of volumes. Type object Property Type Description count integer Maximum number of unique volumes managed by the CSI driver that can be used on a node. A volume that is both attached and mounted on a node is considered to be used once, not twice. The same rule applies for a unique volume that is shared among multiple pods on the same node. If this field is not specified, then the supported number of volumes on this node is unbounded. 3.2. API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/csinodes DELETE : delete collection of CSINode GET : list or watch objects of kind CSINode POST : create a CSINode /apis/storage.k8s.io/v1/watch/csinodes GET : watch individual changes to a list of CSINode. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/csinodes/{name} DELETE : delete a CSINode GET : read the specified CSINode PATCH : partially update the specified CSINode PUT : replace the specified CSINode /apis/storage.k8s.io/v1/watch/csinodes/{name} GET : watch changes to an object of kind CSINode. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /apis/storage.k8s.io/v1/csinodes Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CSINode Table 3.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 3.3. Body parameters Parameter Type Description body DeleteOptions schema Table 3.4. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CSINode Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK CSINodeList schema 401 - Unauthorized Empty HTTP method POST Description create a CSINode Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.8. Body parameters Parameter Type Description body CSINode schema Table 3.9. HTTP responses HTTP code Reponse body 200 - OK CSINode schema 201 - Created CSINode schema 202 - Accepted CSINode schema 401 - Unauthorized Empty 3.2.2. /apis/storage.k8s.io/v1/watch/csinodes Table 3.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CSINode. deprecated: use the 'watch' parameter with a list operation instead. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/storage.k8s.io/v1/csinodes/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the CSINode Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CSINode Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK CSINode schema 202 - Accepted CSINode schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSINode Table 3.17. HTTP responses HTTP code Reponse body 200 - OK CSINode schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSINode Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.19. Body parameters Parameter Type Description body Patch schema Table 3.20. HTTP responses HTTP code Reponse body 200 - OK CSINode schema 201 - Created CSINode schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSINode Table 3.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.22. Body parameters Parameter Type Description body CSINode schema Table 3.23. 
HTTP responses HTTP code Reponse body 200 - OK CSINode schema 201 - Created CSINode schema 401 - Unauthorized Empty 3.2.4. /apis/storage.k8s.io/v1/watch/csinodes/{name} Table 3.24. Global path parameters Parameter Type Description name string name of the CSINode Table 3.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind CSINode. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
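For reference, a CSINode object that the kubelet has populated for a node might look roughly like the following sketch; the driver name, node ID, topology key, and volume count are illustrative values:
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  name: worker-1
spec:
  drivers:
  - name: example.csi.vendor.com
    nodeID: worker-1
    topologyKeys:
    - topology.example.csi.vendor.com/zone
    allocatable:
      count: 24
You can retrieve such an object with, for example, oc get csinode worker-1 -o yaml.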
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/storage_apis/csinode-storage-k8s-io-v1
|
Chapter 74. KafkaClientAuthenticationScramSha512 schema reference
|
Chapter 74. KafkaClientAuthenticationScramSha512 schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationScramSha512 schema properties To configure SASL-based SCRAM-SHA-512 authentication, set the type property to scram-sha-512 . The SCRAM-SHA-512 authentication mechanism requires a username and password. 74.1. username Specify the username in the username property. 74.2. passwordSecret In the passwordSecret property, specify a link to a Secret containing the password. You can use the secrets created by the User Operator. If required, you can create a text file that contains the password, in cleartext, to use for authentication: echo -n PASSWORD > MY-PASSWORD .txt You can then create a Secret from the text file, setting your own field name (key) for the password: oc create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt Example Secret for SCRAM-SHA-512 client authentication for Kafka Connect apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm The secretName property contains the name of the Secret , and the password property contains the name of the key under which the password is stored inside the Secret . Important Do not specify the actual password in the password property. Example SASL-based SCRAM-SHA-512 client authentication configuration for Kafka Connect authentication: type: scram-sha-512 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field 74.3. KafkaClientAuthenticationScramSha512 schema properties Property Description passwordSecret Reference to the Secret which holds the password. PasswordSecretSource type Must be scram-sha-512 . string username Username used for the authentication. string
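As a quick check that the Secret contains the expected value, you can decode the stored field; this sketch assumes the Secret and key names from the example above:
oc get secret my-connect-secret-name -o jsonpath='{.data.my-connect-password-field}' | base64 -d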
|
[
"echo -n PASSWORD > MY-PASSWORD .txt",
"create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt",
"apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm",
"authentication: type: scram-sha-512 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkaclientauthenticationscramsha512-reference
|
Chapter 1. System requirements and supported architectures
|
Chapter 1. System requirements and supported architectures Red Hat Enterprise Linux 9 delivers a stable, secure, consistent foundation across hybrid cloud deployments with the tools needed to deliver workloads faster with less effort. You can deploy RHEL as a guest on supported hypervisors and Cloud provider environments as well as on physical infrastructure, so your applications can take advantage of innovations in the leading hardware architecture platforms. Review the guidelines provided for system, hardware, security, memory, and RAID before installing. If you want to use your system as a virtualization host, review the necessary hardware requirements for virtualization . Red Hat Enterprise Linux supports the following architectures: AMD and Intel 64-bit architectures The 64-bit ARM architecture IBM Power Systems, Little Endian 64-bit IBM Z architectures 1.1. Supported installation targets An installation target is a storage device that stores Red Hat Enterprise Linux and boots the system. Red Hat Enterprise Linux supports the following installation targets for IBMZ , IBM Power, AMD64, Intel 64, and 64-bit ARM systems: Storage connected by a standard internal interface, such as DASD, SCSI, SATA, or SAS BIOS/firmware RAID devices on the Intel64, AMD64 and arm64 architectures NVDIMM devices in sector mode on the Intel64 and AMD64 architectures, supported by the nd_pmem driver. Storage connected via Fibre Channel Host Bus Adapters, such as DASDs (IBM Z architecture only) and SCSI LUNs, including multipath devices. Some might require vendor-provided drivers. Xen block devices on Intel processors in Xen virtual machines. VirtIO block devices on Intel processors in KVM virtual machines. Red Hat does not support installation to USB drives or SD memory cards. For information about support for third-party virtualization technologies, see the Red Hat Hardware Compatibility List . 1.2. Disk and memory requirements If several operating systems are installed, it is important that you verify that the allocated disk space is separate from the disk space required by Red Hat Enterprise Linux. In some cases, it is important to dedicate specific partitions to Red Hat Enterprise Linux, for example, for AMD64, Intel 64, and 64-bit ARM, at least two partitions ( / and swap ) must be dedicated to RHEL and for IBM Power Systems servers, at least three partitions ( / , swap , and a PReP boot partition) must be dedicated to RHEL. Additionally, you must have a minimum of 10 GiB of available disk space. To install Red Hat Enterprise Linux, you must have a minimum of 10 GiB of space in either unpartitioned disk space or in partitions that can be deleted. For more information, see Partitioning reference . Table 1.1. Minimum RAM requirements Installation type Minimum RAM Local media installation (USB, DVD) 1.5 GiB for aarch64, IBM Z and x86_64 architectures 3 GiB for ppc64le architecture NFS network installation 1.5 GiB for aarch64, IBM Z and x86_64 architectures 3 GiB for ppc64le architecture HTTP, HTTPS or FTP network installation 3 GiB for IBM Z and x86_64 architectures 4 GiB for aarch64 and ppc64le architectures It is possible to complete the installation with less memory than the minimum requirements. The exact requirements depend on your environment and installation path. Test various configurations to determine the minimum required RAM for your environment. Installing Red Hat Enterprise Linux using a Kickstart file has the same minimum RAM requirements as a standard installation. 
However, additional RAM may be required if your Kickstart file includes commands that require additional memory, or write data to the RAM disk. For more information, see Automatically installing RHEL . 1.3. Graphics display resolution requirements Your system must have the following minimum resolution to ensure a smooth and error-free installation of Red Hat Enterprise Linux. Table 1.2. Display resolution Product version Resolution Red Hat Enterprise Linux 9 Minimum : 800 x 600 Recommended : 1026 x 768 1.4. UEFI Secure Boot and Beta release requirements If you plan to install a Beta release of Red Hat Enterprise Linux, on systems having UEFI Secure Boot enabled, then first disable the UEFI Secure Boot option and then begin the installation. UEFI Secure Boot requires that the operating system kernel is signed with a recognized private key, which the system's firmware verifies using the corresponding public key. For Red Hat Enterprise Linux Beta releases, the kernel is signed with a Red Hat Beta-specific public key, which the system fails to recognize by default. As a result, the system fails to even boot the installation media. Additional resources For information about installing RHEL on IBM, see IBM installation documentation Security hardening Composing a customized RHEL system image Red Hat ecosystem catalog RHEL technology capabilities and limits
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_from_installation_media/system-requirements-and-supported-architectures_rhel-installer
|
Chapter 42. Understanding the eBPF networking features in RHEL 9
|
Chapter 42. Understanding the eBPF networking features in RHEL 9 The extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space. This code runs in a restricted sandbox environment with access only to a limited set of functions. In networking, you can use eBPF to complement or replace kernel packet processing. Depending on the hook you use, eBPF programs have, for example: Read and write access to packet data and metadata Can look up sockets and routes Can set socket options Can redirect packets 42.1. Overview of networking eBPF features in RHEL 9 You can attach extended Berkeley Packet Filter (eBPF) networking programs to the following hooks in RHEL: eXpress Data Path (XDP): Provides early access to received packets before the kernel networking stack processes them. tc eBPF classifier with direct-action flag: Provides powerful packet processing on ingress and egress. Programs can be attached as an eBPF classifier with direct-action flag in the qdisc hierarchy, or using the link-based tcx API. Control Groups version 2 (cgroup v2): Enables filtering and overriding socket-based operations performed by programs in a control group. Socket filtering: Enables filtering of packets received from sockets. This feature was also available in the classic Berkeley Packet Filter (cBPF), but has been extended to support eBPF programs. Stream parser: Enables splitting up streams to individual messages, filtering, and redirecting them to sockets. SO_REUSEPORT socket selection: Provides a programmable selection of a receiving socket from a reuseport socket group. Flow dissector: Enables overriding the way the kernel parses packet headers in certain situations. TCP congestion control callbacks: Enables implementing a custom TCP congestion control algorithm. Routes with encapsulation: Enables creating custom tunnel encapsulation. XDP You can attach programs of the BPF_PROG_TYPE_XDP type to a network interface. The kernel then executes the program on received packets before the kernel network stack starts processing them. This allows fast packet forwarding in certain situations, such as fast packet dropping to prevent distributed denial of service (DDoS) attacks and fast packet redirects for load balancing scenarios. You can also use XDP for different forms of packet monitoring and sampling. The kernel allows XDP programs to modify packets and to pass them for further processing to the kernel network stack. The following XDP modes are available: Native (driver) XDP: The kernel executes the program from the earliest possible point during packet reception. At this moment, the kernel did not parse the packet and, therefore, no metadata provided by the kernel is available. This mode requires that the network interface driver supports XDP but not all drivers support this native mode. Generic XDP: The kernel network stack executes the XDP program early in the processing. At that time, kernel data structures have been allocated, and the packet has been pre-processed. If a packet should be dropped or redirected, it requires a significant overhead compared to the native mode. However, the generic mode does not require network interface driver support and works with all network interfaces. Offloaded XDP: The kernel executes the XDP program on the network interface instead of on the host CPU. Note that this requires specific hardware, and only certain eBPF features are available in this mode. On RHEL, load all XDP programs using the libxdp library. 
This library enables system-controlled usage of XDP. Note Currently, there are some system configuration limitations for XDP programs. For example, you must disable certain hardware offload features on the receiving interface. Additionally, not all features are available with all drivers that support the native mode. In RHEL 9, Red Hat supports the XDP features only if you use the libxdp library to load the program into the kernel. AF_XDP Using an XDP program that filters and redirects packets to a given AF_XDP socket, you can use one or more sockets from the AF_XDP protocol family to quickly copy packets from the kernel to the user space. Traffic Control The Traffic Control ( tc ) subsystem offers the following types of eBPF programs: BPF_PROG_TYPE_SCHED_CLS BPF_PROG_TYPE_SCHED_ACT These types enable you to write custom tc classifiers and tc actions in eBPF. Together with the parts of the tc ecosystem, this provides the ability for powerful packet processing and is the core part of several container networking orchestration solutions. In most cases, only the classifier is used, as with the direct-action flag, the eBPF classifier can execute actions directly from the same eBPF program. The clsact Queueing Discipline ( qdisc ) has been designed to enable this on the ingress side. Note that using a flow dissector eBPF program can influence operation of some other qdiscs and tc classifiers, such as flower . The link-based tcx API is provided along with the qdisc API. It enables your applications to maintain ownership over a BPF program to prevent accidental removal of the BPF program. Also, the tcx API has multiprogram support that allows multiple applications to attach BPF programs in the tc layer in parallel. Socket filter Several utilities use or have used the classic Berkeley Packet Filter (cBPF) for filtering packets received on a socket. For example, the tcpdump utility enables the user to specify expressions, which tcpdump then translates into cBPF code. As an alternative to cBPF, the kernel allows eBPF programs of the BPF_PROG_TYPE_SOCKET_FILTER type for the same purpose. Control Groups In RHEL, you can use multiple types of eBPF programs that you can attach to a cgroup. The kernel executes these programs when a program in the given cgroup performs an operation. Note that you can use only cgroups version 2. The following networking-related cgroup eBPF programs are available in RHEL: BPF_PROG_TYPE_SOCK_OPS : The kernel calls this program on various TCP events. The program can adjust the behavior of the kernel TCP stack, including custom TCP header options, and so on. BPF_PROG_TYPE_CGROUP_SOCK_ADDR : The kernel calls this program during connect , bind , sendto , recvmsg , getpeername , and getsockname operations. This program allows changing IP addresses and ports. This is useful when you implement socket-based network address translation (NAT) in eBPF. BPF_PROG_TYPE_CGROUP_SOCKOPT : The kernel calls this program during setsockopt and getsockopt operations and allows changing the options. BPF_PROG_TYPE_CGROUP_SOCK : The kernel calls this program during socket creation, socket releasing, and binding to addresses. You can use these programs to allow or deny the operation, or only to inspect socket creation for statistics. BPF_PROG_TYPE_CGROUP_SKB : This program filters individual packets on ingress and egress, and can accept or reject packets. BPF_PROG_TYPE_CGROUP_SYSCTL : This program allows filtering of access to system controls ( sysctl ). 
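To see which of these programs are currently loaded or attached on a system, the bpftool utility can be used; a minimal sketch, assuming the bpftool package is installed and the commands are run as root:
# bpftool prog list
# bpftool net list
# bpftool cgroup tree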
Stream Parser A stream parser operates on a group of sockets that are added to a special eBPF map. The eBPF program then processes packets that the kernel receives or sends on those sockets. The following stream parser eBPF programs are available in RHEL: BPF_PROG_TYPE_SK_SKB : An eBPF program parses packets received on the socket into individual messages, and instructs the kernel to drop those messages, accept them, or send them to another socket. BPF_PROG_TYPE_SK_MSG : This program filters egress messages. An eBPF program parses the packets and either approves or rejects them. SO_REUSEPORT socket selection Using this socket option, you can bind multiple sockets to the same IP address and port. Without eBPF, the kernel selects the receiving socket based on a connection hash. With the BPF_PROG_TYPE_SK_REUSEPORT program, the selection of the receiving socket is fully programmable. Flow dissector When the kernel needs to process packet headers without going through the full protocol decode, they are dissected . For example, this happens in the tc subsystem, in multipath routing, in bonding, or when calculating a packet hash. In this situation, the kernel parses the packet headers and fills internal structures with the information from the packet headers. You can replace this internal parsing using the BPF_PROG_TYPE_FLOW_DISSECTOR program. Note that you can only dissect TCP and UDP over IPv4 and IPv6 in eBPF in RHEL. TCP Congestion Control You can write a custom TCP congestion control algorithm using a group of BPF_PROG_TYPE_STRUCT_OPS programs that implement struct tcp_congestion_ops callbacks. An algorithm that is implemented this way is available to the system alongside the built-in kernel algorithms. Routes with encapsulation You can attach one of the following eBPF program types to routes in the routing table as a tunnel encapsulation attribute: BPF_PROG_TYPE_LWT_IN BPF_PROG_TYPE_LWT_OUT BPF_PROG_TYPE_LWT_XMIT The functionality of such an eBPF program is limited to specific tunnel configurations and does not allow creating a generic encapsulation or decapsulation solution. Socket lookup To bypass limitations of the bind system call, use an eBPF program of the BPF_PROG_TYPE_SK_LOOKUP type. Such programs can select a listening socket for new incoming TCP connections or an unconnected socket for UDP packets. 42.2.
Overview of XDP features in RHEL 9 by network cards The following is an overview of XDP-enabled network cards and the XDP features you can use with them: Network card Driver Basic Redirect Target HW offload Zero-copy Large MTU Amazon Elastic Network Adapter ena yes yes yes [a] no no no aQuantia AQtion Ethernet card atlantic yes yes no no no no Broadcom NetXtreme-C/E 10/25/40/50 gigabit Ethernet bnxt_en yes yes yes [a] no no yes Cavium Thunder Virtual function nicvf yes no no no no no Google Virtual NIC (gVNIC) support gve yes yes yes no yes no Intel(R) 10GbE PCI Express Virtual Function Ethernet ixgbevf yes no no no no no Intel(R) 10GbE PCI Express adapters ixgbe yes yes yes [a] no yes yes [b] Intel(R) Ethernet Connection E800 Series ice yes yes yes [a] no yes yes Intel(R) Ethernet Controller I225-LM/I225-V family igc yes yes yes no yes yes [b] Intel(R) PCI Express Gigabit adapters igb yes yes yes [a] no no yes [b] Intel(R) Ethernet Controller XL710 Family i40e yes yes yes [a] [c] no yes no Marvell OcteonTX2 rvu_nicpf yes yes yes [a] [c] no no no Mellanox 5th generation network adapters (ConnectX series) mlx5_core yes yes yes [c] no yes yes Mellanox Technologies 1/10/40Gbit Ethernet mlx4_en yes yes no no no no Microsoft Azure Network Adapter mana yes yes yes no no no Microsoft Hyper-V virtual network hv_netvsc yes yes yes no no no Netronome(R) NFP4000/NFP6000 NIC [d] nfp yes no no yes yes no QEMU Virtio network virtio_net yes yes yes [a] no no yes QLogic QED 25/40/100Gb Ethernet NIC qede yes yes yes no no no STMicroelectronics Multi-Gigabit Ethernet stmmac yes yes yes no yes no Solarflare SFC9000/SFC9100/EF100-family sfc yes yes yes [c] no no no Universal TUN/TAP device tun yes yes yes no no no Virtual Ethernet pair device veth yes yes yes no no yes VMware VMXNET3 ethernet driver vmxnet3 yes yes yes [a] [c] no no no Xen paravirtual network device xen-netfront yes yes yes no no no [a] Only if an XDP program is loaded on the interface. [b] Transmitting side only. Cannot receive large packets through XDP. [c] Requires several XDP TX queues allocated that are larger or equal to the largest CPU index. [d] Some of the listed features are not available for the Netronome(R) NFP3800 NIC. Legend: Basic: Supports basic return codes: DROP , PASS , ABORTED , and TX . Redirect: Supports the XDP_REDIRECT return code. Target: Can be a target of a XDP_REDIRECT return code. HW offload: Supports XDP hardware offload. Zero-copy: Supports the zero-copy mode for the AF_XDP protocol family. Large MTU: Supports packets larger than page size.
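To make the return codes in the legend concrete, the following minimal XDP sketch is not taken from RHEL documentation: the file and function names are illustrative, and the libbpf headers ( bpf_helpers.h and bpf_endian.h ) are assumed to be available from the kernel toolchain. The program drops IPv4 ICMP packets and passes everything else, exercising the DROP and PASS codes listed above:

/* xdp_drop_icmp.c - build with: clang -O2 -g -target bpf -c xdp_drop_icmp.c -o xdp_drop_icmp.o */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/in.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int drop_icmp(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* Bounds checks are required by the eBPF verifier before any access. */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;                      /* too short to parse */
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;                      /* only inspect IPv4 */

    struct iphdr *iph = (void *)(eth + 1);
    if ((void *)(iph + 1) > data_end)
        return XDP_PASS;

    if (iph->protocol == IPPROTO_ICMP)
        return XDP_DROP;                      /* drop before the kernel stack sees it */

    return XDP_PASS;                          /* let everything else continue normally */
}

char LICENSE[] SEC("license") = "GPL";

The resulting object file would then be attached to an interface with the libxdp-based tooling mentioned earlier in this chapter, in native mode where the driver supports it or in generic mode otherwise.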
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/assembly_understanding-the-ebpf-features-in-rhel-9_configuring-and-managing-networking
|
Chapter 13. Notifications overview
|
Chapter 13. Notifications overview Quay.io supports adding notifications to a repository for various events that occur in the repository's lifecycle. 13.1. Notification actions Notifications are added to the Events and Notifications section of the Repository Settings page. They are also added to the Notifications window, which can be found by clicking the bell icon in the navigation pane of Quay.io. Quay.io notifications can be set up to be sent to a User , Team , or the entire organization . Notifications can be delivered by one of the following methods. E-mail notifications E-mails that describe the specified event are sent to the specified addresses. E-mail addresses must be verified on a per-repository basis. Webhook POST notifications An HTTP POST call is made to the specified URL with the event's data. For more information about event data, see "Repository events description". When the URL is HTTPS, the call has an SSL client certificate set from Quay.io. Verification of this certificate proves that the call originated from Quay.io. Responses with the status code in the 2xx range are considered successful. Responses with any other status code are considered failures and result in a retry of the webhook notification. Flowdock notifications Posts a message to Flowdock. Hipchat notifications Posts a message to HipChat. Slack notifications Posts a message to Slack. 13.2. Creating notifications by using the UI Use the following procedure to add notifications. Prerequisites You have created a repository. You have administrative privileges for the repository. Procedure Navigate to a repository on Quay.io. In the navigation pane, click Settings . In the Events and Notifications category, click Create Notification to add a new notification for a repository event. The Create notification popup box appears. On the Create notification popup box, click the When this event occurs box to select an event. You can select a notification for the following types of events: Push to Repository Image build failed Image build queued Image build started Image build success Image build cancelled Image expiry trigger After you have selected the event type, select the notification method. The following methods are supported: Quay Notification E-mail Notification Webhook POST Flowdock Team Notification HipChat Room Notification Slack Notification Depending on the method that you choose, you must include additional information. For example, if you select E-mail , you are required to include an e-mail address and an optional notification title. After selecting an event and notification method, click Create Notification . 13.2.1. Creating an image expiration notification Image expiration event triggers can be configured to notify users through email, Slack, webhooks, and so on, and can be configured at the repository level. Triggers can be set for images expiring in any number of days, and can work in conjunction with the auto-pruning feature. Image expiration notifications can be set by using the Red Hat Quay v2 UI or by using the createRepoNotification API endpoint. Prerequisites FEATURE_GARBAGE_COLLECTION: true is set in your config.yaml file. Optional. FEATURE_AUTO_PRUNE: true is set in your config.yaml file. Procedure On the Red Hat Quay v2 UI, click Repositories . Select the name of a repository. Click Settings Events and notifications . Click Create notification . The Create notification popup box appears. Click the Select event... box, then click Image expiry trigger .
In the When the image is due to expiry in days box, enter the number of days before the image's expiration when you want to receive an alert. For example, use 1 for 1 day. In the Select method... box, click one of the following: E-mail Webhook POST Flowdock Team Notification HipChat Room Notification Slack Notification Depending on which method you chose, include the necessary data. For example, if you chose Webhook POST , include the Webhook URL . Optional. Provide a POST JSON body template . Optional. Provide a Title for your notification. Click Submit . You are returned to the Events and notifications page, and the notification now appears. Optional. You can set the NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES variable in your config.yaml file. With this field set, notifications are sent automatically if there are any expiring images. By default, this is set to 300 , or 5 hours; however, it can be adjusted as warranted. NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES: 300 1 1 By default, this field is set to 300 , or 5 hours. Verification Click the menu kebab Test Notification . The following message is returned: Test Notification Queued A test version of this notification has been queued and should appear shortly Depending on which method you chose, check your e-mail, webhook address, Slack channel, and so on. The information sent should look similar to the following example: { "repository": "sample_org/busybox", "namespace": "sample_org", "name": "busybox", "docker_url": "quay-server.example.com/sample_org/busybox", "homepage": "http://quay-server.example.com/repository/sample_org/busybox", "tags": [ "latest", "v1" ], "expiring_in": "1 days" } 13.3. Creating notifications by using the API Use the following procedure to add notifications. Prerequisites You have created a repository. You have administrative privileges for the repository. You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following POST /api/v1/repository/{repository}/notification command to create a notification on your repository: $ curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "event": "<event>", "method": "<method>", "config": { "<config_key>": "<config_value>" }, "eventConfig": { "<eventConfig_key>": "<eventConfig_value>" } }' \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/ This command does not return output in the CLI.
Instead, you can enter the following GET /api/v1/repository/{repository}/notification/{uuid} command to obtain information about the repository notification: {"uuid": "240662ea-597b-499d-98bb-2b57e73408d6", "title": null, "event": "repo_push", "method": "quay_notification", "config": {"target": {"name": "quayadmin", "kind": "user", "is_robot": false, "avatar": {"name": "quayadmin", "hash": "b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc", "color": "#17becf", "kind": "user"}}}, "event_config": {}, "number_of_failures": 0} You can test your repository notification by entering the following POST /api/v1/repository/{repository}/notification/{uuid}/test command: $ curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>/test Example output {} You can reset repository notification failures to 0 by entering the following POST /api/v1/repository/{repository}/notification/{uuid} command: $ curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid> Enter the following DELETE /api/v1/repository/{repository}/notification/{uuid} command to delete a repository notification: $ curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/<uuid> This command does not return output in the CLI. Instead, you can enter the following GET /api/v1/repository/{repository}/notification/ command to retrieve a list of all notifications: $ curl -X GET -H "Authorization: Bearer <bearer_token>" -H "Accept: application/json" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification Example output {"notifications": []} 13.4. Repository events description The following sections detail repository events. Repository Push A successful push of one or more images was made to the repository: Dockerfile Build Queued The following example is a response from a Dockerfile Build that has been queued into the Build system. Note Responses can differ based on the use of optional attributes. Dockerfile Build started The following example is a response from a Dockerfile Build that has been started by the Build system. Note Responses can differ based on the use of optional attributes. Dockerfile Build successfully completed The following example is a response from a Dockerfile Build that has been successfully completed by the Build system. Note This event occurs simultaneously with a Repository Push event for the built image or images. Dockerfile Build failed The following example is a response from a Dockerfile Build that has failed. Dockerfile Build cancelled The following example is a response from a Dockerfile Build that has been cancelled.
|
[
"NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES: 300 1",
"Test Notification Queued A test version of this notification has been queued and should appear shortly",
"{ \"repository\": \"sample_org/busybox\", \"namespace\": \"sample_org\", \"name\": \"busybox\", \"docker_url\": \"quay-server.example.com/sample_org/busybox\", \"homepage\": \"http://quay-server.example.com/repository/sample_org/busybox\", \"tags\": [ \"latest\", \"v1\" ], \"expiring_in\": \"1 days\" }",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"event\": \"<event>\", \"method\": \"<method>\", \"config\": { \"<config_key>\": \"<config_value>\" }, \"eventConfig\": { \"<eventConfig_key>\": \"<eventConfig_value>\" } }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/",
"{\"uuid\": \"240662ea-597b-499d-98bb-2b57e73408d6\", \"title\": null, \"event\": \"repo_push\", \"method\": \"quay_notification\", \"config\": {\"target\": {\"name\": \"quayadmin\", \"kind\": \"user\", \"is_robot\": false, \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}}}, \"event_config\": {}, \"number_of_failures\": 0}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>/test",
"{}",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/<uuid>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification",
"{\"notifications\": []}",
"{ \"name\": \"repository\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"homepage\": \"https://quay.io/repository/dgangaia/repository\", \"updated_tags\": [ \"latest\" ] }",
"{ \"build_id\": \"296ec063-5f86-4706-a469-f0a400bf9df2\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"repo\": \"test\", \"trigger_metadata\": { \"default_branch\": \"master\", \"commit\": \"b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"ref\": \"refs/heads/master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"date\": \"2019-03-06T12:48:24+11:00\", \"message\": \"adding 5\", \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional }, \"committer\": { \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" } } }, \"is_manual\": false, \"manual_user\": null, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/296ec063-5f86-4706-a469-f0a400bf9df2\" }",
"{ \"build_id\": \"a8cc247a-a662-4fee-8dcb-7d7e822b71ba\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"50bc599\", \"trigger_metadata\": { //Optional \"commit\": \"50bc5996d4587fd4b2d8edc4af652d4cec293c42\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/50bc5996d4587fd4b2d8edc4af652d4cec293c42\", \"date\": \"2019-03-06T14:10:14+11:00\", \"message\": \"test build\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/a8cc247a-a662-4fee-8dcb-7d7e822b71ba\" }",
"{ \"build_id\": \"296ec063-5f86-4706-a469-f0a400bf9df2\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"b7f7d2b\", \"image_id\": \"sha256:0339f178f26ae24930e9ad32751d6839015109eabdf1c25b3b0f2abf8934f6cb\", \"trigger_metadata\": { \"commit\": \"b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"date\": \"2019-03-06T12:48:24+11:00\", \"message\": \"adding 5\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/296ec063-5f86-4706-a469-f0a400bf9df2\", \"manifest_digests\": [ \"quay.io/dgangaia/test@sha256:2a7af5265344cc3704d5d47c4604b1efcbd227a7a6a6ff73d6e4e08a27fd7d99\", \"quay.io/dgangaia/test@sha256:569e7db1a867069835e8e97d50c96eccafde65f08ea3e0d5debaf16e2545d9d1\" ] }",
"{ \"build_id\": \"5346a21d-3434-4764-85be-5be1296f293c\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"docker_url\": \"quay.io/dgangaia/test\", \"error_message\": \"Could not find or parse Dockerfile: unknown instruction: GIT\", \"namespace\": \"dgangaia\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"6ae9a86\", \"trigger_metadata\": { //Optional \"commit\": \"6ae9a86930fc73dd07b02e4c5bf63ee60be180ad\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/6ae9a86930fc73dd07b02e4c5bf63ee60be180ad\", \"date\": \"2019-03-06T14:18:16+11:00\", \"message\": \"failed build test\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/5346a21d-3434-4764-85be-5be1296f293c\" }",
"{ \"build_id\": \"cbd534c5-f1c0-4816-b4e3-55446b851e70\", \"trigger_kind\": \"github\", \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"cbce83c\", \"trigger_metadata\": { \"commit\": \"cbce83c04bfb59734fc42a83aab738704ba7ec41\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { \"url\": \"https://github.com/dgangaia/test/commit/cbce83c04bfb59734fc42a83aab738704ba7ec41\", \"date\": \"2019-03-06T14:27:53+11:00\", \"message\": \"testing cancel build\", \"committer\": { \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" }, \"author\": { \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/cbd534c5-f1c0-4816-b4e3-55446b851e70\" }"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/about_quay_io/repository-notifications
|
Chapter 7. Evaluating and testing policies
|
Chapter 7. Evaluating and testing policies When designing your policies, you can simulate authorization requests to test how your policies are being evaluated. You can access the Policy Evaluation Tool by clicking the Evaluate tab when editing a resource server. There you can specify different inputs to simulate real authorization requests and test the effect of your policies. Policy evaluation tool 7.1. Providing identity information The Identity Information filters can be used to specify the user requesting permissions. 7.2. Providing contextual information The Contextual Information filters can be used to define additional attributes to the evaluation context, so that policies can obtain these same attributes. 7.3. Providing the permissions The Permissions filters can be used to build an authorization request. You can request permissions for a set of one or more resources and scopes. If you want to simulate authorization requests based on all protected resources and scopes, click Add without specifying any Resources or Scopes . When you've specified your desired values, click Evaluate .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/authorization_services_guide/policy_evaluation_overview
|
Chapter 78. OpenTelemetryTracing schema reference
|
Chapter 78. OpenTelemetryTracing schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec The type property is a discriminator that distinguishes use of the OpenTelemetryTracing type from JaegerTracing . It must have the value opentelemetry for the type OpenTelemetryTracing . Property Description type Must be opentelemetry . string
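As a usage sketch, this type is normally selected through the tracing property of the resources listed above. The following snippet assumes a KafkaConnect custom resource, an illustrative resource name, and the kafka.strimzi.io/v1beta2 API version; verify these against your AMQ Streams release before use:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect        # illustrative name
spec:
  # ... other KafkaConnect configuration ...
  tracing:
    type: opentelemetry   # the only property defined by this schema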
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-opentelemetrytracing-reference
|
Chapter 6. Config [operator.openshift.io/v1]
|
Chapter 6. Config [operator.openshift.io/v1] Description Config specifies the behavior of the config operator which is responsible for creating the initial configuration of other components on the cluster. The operator also handles installation, migration or synchronization of cloud configurations for AWS and Azure cloud based clusters Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Config Operator. status object status defines the observed status of the Config Operator. 6.1.1. .spec Description spec is the specification of the desired behavior of the Config Operator. Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 6.1.2. .status Description status defines the observed status of the Config Operator. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. 
observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 6.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 6.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 6.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 6.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 6.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/configs DELETE : delete collection of Config GET : list objects of kind Config POST : create a Config /apis/operator.openshift.io/v1/configs/{name} DELETE : delete a Config GET : read the specified Config PATCH : partially update the specified Config PUT : replace the specified Config /apis/operator.openshift.io/v1/configs/{name}/status GET : read status of the specified Config PATCH : partially update status of the specified Config PUT : replace status of the specified Config 6.2.1. /apis/operator.openshift.io/v1/configs Table 6.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Config Table 6.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Config Table 6.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.5. HTTP responses HTTP code Reponse body 200 - OK ConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a Config Table 6.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body Config schema Table 6.8. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 202 - Accepted Config schema 401 - Unauthorized Empty 6.2.2. /apis/operator.openshift.io/v1/configs/{name} Table 6.9. Global path parameters Parameter Type Description name string name of the Config Table 6.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Config Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. 
Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.12. Body parameters Parameter Type Description body DeleteOptions schema Table 6.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Config Table 6.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.15. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Config Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.17. Body parameters Parameter Type Description body Patch schema Table 6.18. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Config Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body Config schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 401 - Unauthorized Empty 6.2.3. /apis/operator.openshift.io/v1/configs/{name}/status Table 6.22. Global path parameters Parameter Type Description name string name of the Config Table 6.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Config Table 6.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.25. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Config Table 6.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.27. Body parameters Parameter Type Description body Patch schema Table 6.28. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Config Table 6.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.30. Body parameters Parameter Type Description body Config schema Table 6.31. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 401 - Unauthorized Empty
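Pulling the spec fields above together, a manifest for this resource might look like the following sketch. The object name cluster and the managementState value Managed are conventional assumptions rather than values stated in this reference, so verify them against your cluster before applying:

apiVersion: operator.openshift.io/v1
kind: Config
metadata:
  name: cluster             # assumed singleton name
spec:
  managementState: Managed  # assumed conventional value
  logLevel: Normal          # documented default
  operatorLogLevel: Normal  # documented default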
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/config-operator-openshift-io-v1
|
Chapter 2. Security realms
|
Chapter 2. Security realms Security realms integrate Data Grid Server deployments with the network protocols and infrastructure in your environment that control access and verify user identities. 2.1. Creating security realms Add security realms to Data Grid Server configuration to control access to deployments. You can add one or more security realms to your configuration. Note When you add security realms to your configuration, Data Grid Server automatically enables the matching authentication mechanisms for the Hot Rod and REST endpoints. Prerequisites Add socket bindings to your Data Grid Server configuration as required. Create keystores, or have a PEM file, to configure the security realm with TLS/SSL encryption. Data Grid Server can also generate keystores at startup. Provision the resources or services that the security realm configuration relies on. For example, if you add a token realm, you need to provision OAuth services. This procedure demonstrates how to configure multiple property realms. Before you begin, you need to create properties files that add users and assign permissions with the Command Line Interface (CLI). Use the user create commands as follows: Tip Run user create --help for examples and more information. Note Adding credentials to a properties realm with the CLI creates the user only on the server instance to which you are connected. You must manually synchronize credentials in a properties realm to each node in the cluster. Procedure Open your Data Grid Server configuration for editing. Use the security-realms element in the security configuration to contain multiple security realms. Add a security realm with the security-realm element and give it a unique name with the name attribute. To follow the example, create one security realm named application-realm and another named management-realm . Provide the TLS/SSL identity for Data Grid Server with the server-identities element and configure a keystore as required. Specify the type of security realm by adding one of the following elements or fields: properties-realm ldap-realm token-realm truststore-realm Specify properties for the type of security realm you are configuring as appropriate. To follow the example, specify the *.properties files you created with the CLI using the path attribute on the user-properties and group-properties elements or fields. If you add multiple different types of security realm to your configuration, include the distributed-realm element or field so that Data Grid Server uses the realms in combination with each other. Configure Data Grid Server endpoints to use the security realm with the security-realm attribute. Save the changes to your configuration.
Multiple property realms XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="application-realm"> <properties-realm groups-attribute="Roles"> <user-properties path="application-users.properties"/> <group-properties path="application-groups.properties"/> </properties-realm> </security-realm> <security-realm name="management-realm"> <properties-realm groups-attribute="Roles"> <user-properties path="management-users.properties"/> <group-properties path="management-groups.properties"/> </properties-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "management-realm", "properties-realm": { "groups-attribute": "Roles", "user-properties": { "digest-realm-name": "management-realm", "path": "management-users.properties" }, "group-properties": { "path": "management-groups.properties" } } }, { "name": "application-realm", "properties-realm": { "groups-attribute": "Roles", "user-properties": { "digest-realm-name": "application-realm", "path": "application-users.properties" }, "group-properties": { "path": "application-groups.properties" } } }] } } } YAML server: security: securityRealms: - name: "management-realm" propertiesRealm: groupsAttribute: "Roles" userProperties: digestRealmName: "management-realm" path: "management-users.properties" groupProperties: path: "management-groups.properties" - name: "application-realm" propertiesRealm: groupsAttribute: "Roles" userProperties: digestRealmName: "application-realm" path: "application-users.properties" groupProperties: path: "application-groups.properties" 2.2. Setting up Kerberos identities Add Kerberos identities to a security realm in your Data Grid Server configuration to use keytab files that contain service principal names and encrypted keys, derived from Kerberos passwords. Prerequisites Have Kerberos service account principals. Note keytab files can contain both user and service account principals. However, Data Grid Server uses service account principals only which means it can provide identity to clients and allow clients to authenticate with Kerberos servers. In most cases, you create unique principals for the Hot Rod and REST endpoints. For example, if you have a "datagrid" server in the "INFINISPAN.ORG" domain you should create the following service principals: hotrod/[email protected] identifies the Hot Rod service. HTTP/[email protected] identifies the REST service. Procedure Create keytab files for the Hot Rod and REST services. Linux Microsoft Windows Copy the keytab files to the server/conf directory of your Data Grid Server installation. Open your Data Grid Server configuration for editing. Add a server-identities definition to the Data Grid server security realm. Specify the location of keytab files that provide service principals to Hot Rod and REST connectors. Name the Kerberos service principals. Save the changes to your configuration. Kerberos identity configuration XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="kerberos-realm"> <server-identities> <!-- Specifies a keytab file that provides a Kerberos identity. --> <!-- Names the Kerberos service principal for the Hot Rod endpoint. --> <!-- The required="true" attribute specifies that the keytab file must be present when the server starts. 
--> <kerberos keytab-path="hotrod.keytab" principal="hotrod/[email protected]" required="true"/> <!-- Specifies a keytab file and names the Kerberos service principal for the REST endpoint. --> <kerberos keytab-path="http.keytab" principal="HTTP/[email protected]" required="true"/> </server-identities> </security-realm> </security-realms> </security> <endpoints> <endpoint socket-binding="default" security-realm="kerberos-realm"> <hotrod-connector> <authentication> <sasl server-name="datagrid" server-principal="hotrod/[email protected]"/> </authentication> </hotrod-connector> <rest-connector> <authentication server-principal="HTTP/[email protected]"/> </rest-connector> </endpoint> </endpoints> </server> JSON { "server": { "security": { "security-realms": [{ "name": "kerberos-realm", "server-identities": [{ "kerberos": { "principal": "hotrod/[email protected]", "keytab-path": "hotrod.keytab", "required": true }, "kerberos": { "principal": "HTTP/[email protected]", "keytab-path": "http.keytab", "required": true } }] }] }, "endpoints": { "endpoint": { "socket-binding": "default", "security-realm": "kerberos-realm", "hotrod-connector": { "authentication": { "security-realm": "kerberos-realm", "sasl": { "server-name": "datagrid", "server-principal": "hotrod/[email protected]" } } }, "rest-connector": { "authentication": { "server-principal": "HTTP/[email protected]" } } } } } } YAML server: security: securityRealms: - name: "kerberos-realm" serverIdentities: - kerberos: principal: "hotrod/[email protected]" keytabPath: "hotrod.keytab" required: "true" - kerberos: principal: "HTTP/[email protected]" keytabPath: "http.keytab" required: "true" endpoints: endpoint: socketBinding: "default" securityRealm: "kerberos-realm" hotrodConnector: authentication: sasl: serverName: "datagrid" serverPrincipal: "hotrod/[email protected]" restConnector: authentication: securityRealm: "kerberos-realm" serverPrincipal" : "HTTP/[email protected]" 2.3. Property realms Property realms use property files to define users and groups. users.properties contains Data Grid user credentials. Passwords can be pre-digested with the DIGEST-MD5 and DIGEST authentication mechanisms. groups.properties associates users with roles and permissions. Note You can avoid authentication issues that relate to a property file by using the Data Grid CLI to enter the correct security realm name to the file. You can find the correct security realm name of your Data Grid Server by opening the infinispan.xml file and navigating to the <security-realm name> property. When you copy a property file from one Data Grid Server to another, make sure that the security realm name appropriates to the correct authentication mechanism for the target endpoint. users.properties groups.properties Property realm configuration XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="default"> <!-- groups-attribute configures the "groups.properties" file to contain security authorization roles. 
--> <properties-realm groups-attribute="Roles"> <user-properties path="users.properties" relative-to="infinispan.server.config.path" plain-text="true"/> <group-properties path="groups.properties" relative-to="infinispan.server.config.path"/> </properties-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "default", "properties-realm": { "groups-attribute": "Roles", "user-properties": { "digest-realm-name": "default", "path": "users.properties", "relative-to": "infinispan.server.config.path", "plain-text": true }, "group-properties": { "path": "groups.properties", "relative-to": "infinispan.server.config.path" } } }] } } } YAML server: security: securityRealms: - name: "default" propertiesRealm: # groupsAttribute configures the "groups.properties" file # to contain security authorization roles. groupsAttribute: "Roles" userProperties: digestRealmName: "default" path: "users.properties" relative-to: 'infinispan.server.config.path' plainText: "true" groupProperties: path: "groups.properties" relative-to: 'infinispan.server.config.path' 2.4. LDAP realms LDAP realms connect to LDAP servers, such as OpenLDAP, Red Hat Directory Server, Apache Directory Server, or Microsoft Active Directory, to authenticate users and obtain membership information. Note LDAP servers can have different entry layouts, depending on the type of server and deployment. It is beyond the scope of this document to provide examples for all possible configurations. 2.4.1. LDAP connection properties Specify the LDAP connection properties in the LDAP realm configuration. The following properties are required: url Specifies the URL of the LDAP server. The URL should be in the format ldap://hostname:port or ldaps://hostname:port for secure connections using TLS. principal Specifies a distinguished name (DN) of a valid user in the LDAP server. The DN uniquely identifies the user within the LDAP directory structure. credential Corresponds to the password associated with the principal mentioned above. Important The principal for LDAP connections must have the necessary privileges to perform LDAP queries and access specific attributes. Tip Enabling connection-pooling significantly improves the performance of authentication to LDAP servers. The connection pooling mechanism is provided by the JDK. For more information, see Connection Pooling Configuration and Java Tutorials: Pooling . 2.4.2. LDAP realm user authentication methods Configure the user authentication method in the LDAP realm. The LDAP realm can authenticate users in two ways: Hashed password comparison by comparing the hashed password stored in a user's password attribute (usually userPassword ) Direct verification by authenticating against the LDAP server using the supplied credentials Direct verification is the only approach that works with Active Directory, because access to the password attribute is forbidden. Important You cannot use endpoint authentication mechanisms that perform hashing with the direct-verification attribute, since this method requires having the password in clear text. As a result, you must use the BASIC authentication mechanism with the REST endpoint and PLAIN with the Hot Rod endpoint to integrate with Active Directory Server. A more secure alternative is to use Kerberos, which allows the SPNEGO , GSSAPI , and GS2-KRB5 authentication mechanisms. The LDAP realm searches the directory to find the entry which corresponds to the authenticated user.
The rdn-identifier attribute specifies an LDAP attribute that finds the user entry based on a provided identifier, which is typically a username; for example, the uid or sAMAccountName attribute. Add search-recursive="true" to the configuration to search the directory recursively. By default, the search for the user entry uses the (rdn_identifier={0}) filter. You can specify a different filter using the filter-name attribute. 2.4.3. Mapping user entries to their associated groups In the LDAP realm configuration, specify the attribute-mapping element to retrieve and associate all groups that a user is a member of. The membership information is stored typically in two ways: Under group entries that usually have class groupOfNames or groupOfUniqueNames in the member attribute. This is the default behavior in most LDAP installations, except for Active Directory. In this case, you can use an attribute filter. This filter searches for entries that match the supplied filter, which locates groups with a member attribute equal to the user's DN. The filter then extracts the group entry's CN as specified by from , and adds it to the user's Roles . In the user entry in the memberOf attribute. This is typically the case for Active Directory. In this case you should use an attribute reference such as the following: <attribute-reference reference="memberOf" from="cn" to="Roles" /> This reference gets all memberOf attributes from the user's entry, extracts the CN as specified by from , and adds them to the user's groups ( Roles is the internal name used to map the groups). 2.4.4. LDAP realm configuration reference XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="ldap-realm"> <!-- Specifies connection properties. --> <ldap-realm url="ldap://my-ldap-server:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword" connection-timeout="3000" read-timeout="30000" connection-pooling="true" referral-mode="ignore" page-size="30" direct-verification="true"> <!-- Defines how principals are mapped to LDAP entries. --> <identity-mapping rdn-identifier="uid" search-dn="ou=People,dc=infinispan,dc=org" search-recursive="false"> <!-- Retrieves all the groups of which the user is a member. 
--> <attribute-mapping> <attribute from="cn" to="Roles" filter="(&(objectClass=groupOfNames)(member={1}))" filter-dn="ou=Roles,dc=infinispan,dc=org"/> </attribute-mapping> </identity-mapping> </ldap-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "ldap-realm", "ldap-realm": { "url": "ldap://my-ldap-server:10389", "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "credential": "strongPassword", "connection-timeout": "3000", "read-timeout": "30000", "connection-pooling": "true", "referral-mode": "ignore", "page-size": "30", "direct-verification": "true", "identity-mapping": { "rdn-identifier": "uid", "search-dn": "ou=People,dc=infinispan,dc=org", "search-recursive": "false", "attribute-mapping": [{ "from": "cn", "to": "Roles", "filter": "(&(objectClass=groupOfNames)(member={1}))", "filter-dn": "ou=Roles,dc=infinispan,dc=org" }] } } }] } } } YAML server: security: securityRealms: - name: ldap-realm ldapRealm: url: 'ldap://my-ldap-server:10389' principal: 'uid=admin,ou=People,dc=infinispan,dc=org' credential: strongPassword connectionTimeout: '3000' readTimeout: '30000' connectionPooling: true referralMode: ignore pageSize: '30' directVerification: true identityMapping: rdnIdentifier: uid searchDn: 'ou=People,dc=infinispan,dc=org' searchRecursive: false attributeMapping: - filter: '(&(objectClass=groupOfNames)(member={1}))' filterDn: 'ou=Roles,dc=infinispan,dc=org' from: cn to: Roles 2.4.4.1. LDAP realm principal rewriting Principals obtained by SASL authentication mechanisms such as GSSAPI , GS2-KRB5 and Negotiate usually include the domain name, for example [email protected] . Before using these principals in LDAP queries, it is necessary to transform them to ensure their compatibility. This process is called rewriting. Data Grid includes the following transformers: case-principal-transformer rewrites the principal to either all uppercase or all lowercase. For example MyUser would be rewritten as MYUSER in uppercase mode and myuser in lowercase mode. common-name-principal-transformer rewrites principals in the LDAP Distinguished Name format (as defined by RFC 4514 ). It extracts the first attribute of type CN (commonName). For example, DN=CN=myuser,OU=myorg,DC=mydomain would be rewritten as myuser . regex-principal-transformer rewrites principals using a regular expression with capturing groups, allowing, for example, for extractions of any substring. 2.4.4.2. 
LDAP principal rewriting configuration reference Case principal transformer XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="ldap-realm"> <ldap-realm url="ldap://USD{org.infinispan.test.host.address}:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword"> <name-rewriter> <!-- Defines a rewriter that transforms usernames to lowercase --> <case-principal-transformer uppercase="false"/> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "ldap-realm", "ldap-realm": { "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "url": "ldap://USD{org.infinispan.test.host.address}:10389", "credential": "strongPassword", "name-rewriter": { "case-principal-transformer": { "uppercase": false } } } }] } } } YAML server: security: securityRealms: - name: "ldap-realm" ldapRealm: principal: "uid=admin,ou=People,dc=infinispan,dc=org" url: "ldap://USD{org.infinispan.test.host.address}:10389" credential: "strongPassword" nameRewriter: casePrincipalTransformer: uppercase: false # further configuration omitted Common name principal transformer XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="ldap-realm"> <ldap-realm url="ldap://USD{org.infinispan.test.host.address}:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword"> <name-rewriter> <!-- Defines a rewriter that obtains the first CN from a DN --> <common-name-principal-transformer /> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "ldap-realm", "ldap-realm": { "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "url": "ldap://USD{org.infinispan.test.host.address}:10389", "credential": "strongPassword", "name-rewriter": { "common-name-principal-transformer": {} } } }] } } } YAML server: security: securityRealms: - name: "ldap-realm" ldapRealm: principal: "uid=admin,ou=People,dc=infinispan,dc=org" url: "ldap://USD{org.infinispan.test.host.address}:10389" credential: "strongPassword" nameRewriter: commonNamePrincipalTransformer: ~ # further configuration omitted Regex principal transformer XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="ldap-realm"> <ldap-realm url="ldap://USD{org.infinispan.test.host.address}:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword"> <name-rewriter> <!-- Defines a rewriter that extracts the username from the principal using a regular expression. 
--> <regex-principal-transformer pattern="(.*)@INFINISPAN\.ORG" replacement="USD1"/> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "ldap-realm", "ldap-realm": { "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "url": "ldap://USD{org.infinispan.test.host.address}:10389", "credential": "strongPassword", "name-rewriter": { "regex-principal-transformer": { "pattern": "(.*)@INFINISPAN\\.ORG", "replacement": "USD1" } } } }] } } } YAML server: security: securityRealms: - name: "ldap-realm" ldapRealm: principal: "uid=admin,ou=People,dc=infinispan,dc=org" url: "ldap://USD{org.infinispan.test.host.address}:10389" credential: "strongPassword" nameRewriter: regexPrincipalTransformer: pattern: (.*)@INFINISPAN\.ORG replacement: "USD1" # further configuration omitted 2.4.4.3. LDAP user and group mapping process with Data Grid This example illustrates the process of loading and internally mapping LDAP users and groups to Data Grid subjects. The following is a LDIF (LDAP Data Interchange Format) file, which describes multiple LDAP entries: LDIF # Users dn: uid=root,ou=People,dc=infinispan,dc=org objectclass: top objectclass: uidObject objectclass: person uid: root cn: root sn: root userPassword: strongPassword # Groups dn: cn=admin,ou=Roles,dc=infinispan,dc=org objectClass: top objectClass: groupOfNames cn: admin description: the Infinispan admin group member: uid=root,ou=People,dc=infinispan,dc=org dn: cn=monitor,ou=Roles,dc=infinispan,dc=org objectClass: top objectClass: groupOfNames cn: monitor description: the Infinispan monitor group member: uid=root,ou=People,dc=infinispan,dc=org The root user is a member of the admin and monitor groups. When a request to authenticate the user root with the password strongPassword is made on one of the endpoints, the following operations are performed: The username is optionally rewritten using the chosen principal transformer. The realm searches within the ou=People,dc=infinispan,dc=org tree for an entry whose uid attribute is equal to root and finds the entry with DN uid=root,ou=People,dc=infinispan,dc=org , which becomes the user principal. The realm searches within the u=Roles,dc=infinispan,dc=org tree for entries of objectClass=groupOfNames that include uid=root,ou=People,dc=infinispan,dc=org in the member attribute. In this case it finds two entries: cn=admin,ou=Roles,dc=infinispan,dc=org and cn=monitor,ou=Roles,dc=infinispan,dc=org . From these entries, it extracts the cn attributes which become the group principals. The resulting subject will therefore look like: NamePrincipal: uid=root,ou=People,dc=infinispan,dc=org RolePrincipal: admin RolePrincipal: monitor At this point, the global authorization mappers are applied on the above subject to convert the principals into roles. The roles are then expanded into a set of permissions, which are validated against the requested cache and operation. 2.5. Token realms Token realms use external services to validate tokens and require providers that are compatible with RFC-7662 (OAuth2 Token Introspection), such as Red Hat SSO. Token realm configuration XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="token-realm"> <!-- Specifies the URL of the authentication server. --> <token-realm name="token" auth-server-url="https://oauth-server/auth/"> <!-- Specifies the URL of the token introspection endpoint. 
--> <oauth2-introspection introspection-url="https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect" client-id="infinispan-server" client-secret="1fdca4ec-c416-47e0-867a-3d471af7050f"/> </token-realm> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "token-realm", "token-realm": { "auth-server-url": "https://oauth-server/auth/", "oauth2-introspection": { "client-id": "infinispan-server", "client-secret": "1fdca4ec-c416-47e0-867a-3d471af7050f", "introspection-url": "https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect" } } }] } } } YAML server: security: securityRealms: - name: token-realm tokenRealm: authServerUrl: 'https://oauth-server/auth/' oauth2Introspection: clientId: infinispan-server clientSecret: '1fdca4ec-c416-47e0-867a-3d471af7050f' introspectionUrl: 'https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect' 2.6. Trust store realms Trust store realms use certificates, or certificates chains, that verify Data Grid Server and client identities when they negotiate connections. Keystores Contain server certificates that provide a Data Grid Server identity to clients. If you configure a keystore with server certificates, Data Grid Server encrypts traffic using industry standard SSL/TLS protocols. Trust stores Contain client certificates, or certificate chains, that clients present to Data Grid Server. Client trust stores are optional and allow Data Grid Server to perform client certificate authentication. Client certificate authentication You must add the require-ssl-client-auth="true" attribute to the endpoint configuration if you want Data Grid Server to validate or authenticate client certificates. Trust store realm configuration XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="trust-store-realm"> <server-identities> <ssl> <!-- Provides an SSL/TLS identity with a keystore that contains server certificates. --> <keystore path="server.p12" relative-to="infinispan.server.config.path" keystore-password="secret" alias="server"/> <!-- Configures a trust store that contains client certificates or part of a certificate chain. --> <truststore path="trust.p12" relative-to="infinispan.server.config.path" password="secret"/> </ssl> </server-identities> <!-- Authenticates client certificates against the trust store. If you configure this, the trust store must contain the public certificates for all clients. --> <truststore-realm/> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "trust-store-realm", "server-identities": { "ssl": { "keystore": { "path": "server.p12", "relative-to": "infinispan.server.config.path", "keystore-password": "secret", "alias": "server" }, "truststore": { "path": "trust.p12", "relative-to": "infinispan.server.config.path", "password": "secret" } } }, "truststore-realm": {} }] } } } YAML server: security: securityRealms: - name: "trust-store-realm" serverIdentities: ssl: keystore: path: "server.p12" relative-to: "infinispan.server.config.path" keystore-password: "secret" alias: "server" truststore: path: "trust.p12" relative-to: "infinispan.server.config.path" password: "secret" truststoreRealm: ~ 2.7. Distributed security realms Distributed realms combine multiple different types of security realms. 
When users attempt to access the Hot Rod or REST endpoints, Data Grid Server uses each security realm in turn until it finds one that can perform the authentication. Distributed realm configuration XML <server xmlns="urn:infinispan:server:14.0"> <security> <security-realms> <security-realm name="distributed-realm"> <ldap-realm url="ldap://my-ldap-server:10389" principal="uid=admin,ou=People,dc=infinispan,dc=org" credential="strongPassword"> <identity-mapping rdn-identifier="uid" search-dn="ou=People,dc=infinispan,dc=org" search-recursive="false"> <attribute-mapping> <attribute from="cn" to="Roles" filter="(&(objectClass=groupOfNames)(member={1}))" filter-dn="ou=Roles,dc=infinispan,dc=org"/> </attribute-mapping> </identity-mapping> </ldap-realm> <properties-realm groups-attribute="Roles"> <user-properties path="users.properties" relative-to="infinispan.server.config.path"/> <group-properties path="groups.properties" relative-to="infinispan.server.config.path"/> </properties-realm> <distributed-realm/> </security-realm> </security-realms> </security> </server> JSON { "server": { "security": { "security-realms": [{ "name": "distributed-realm", "ldap-realm": { "principal": "uid=admin,ou=People,dc=infinispan,dc=org", "url": "ldap://my-ldap-server:10389", "credential": "strongPassword", "identity-mapping": { "rdn-identifier": "uid", "search-dn": "ou=People,dc=infinispan,dc=org", "search-recursive": false, "attribute-mapping": { "attribute": { "filter": "(&(objectClass=groupOfNames)(member={1}))", "filter-dn": "ou=Roles,dc=infinispan,dc=org", "from": "cn", "to": "Roles" } } } }, "properties-realm": { "groups-attribute": "Roles", "user-properties": { "digest-realm-name": "distributed-realm", "path": "users.properties" }, "group-properties": { "path": "groups.properties" } }, "distributed-realm": {} }] } } } YAML server: security: securityRealms: - name: "distributed-realm" ldapRealm: principal: "uid=admin,ou=People,dc=infinispan,dc=org" url: "ldap://my-ldap-server:10389" credential: "strongPassword" identityMapping: rdnIdentifier: "uid" searchDn: "ou=People,dc=infinispan,dc=org" searchRecursive: "false" attributeMapping: attribute: filter: "(&(objectClass=groupOfNames)(member={1}))" filterDn: "ou=Roles,dc=infinispan,dc=org" from: "cn" to: "Roles" propertiesRealm: groupsAttribute: "Roles" userProperties: digestRealmName: "distributed-realm" path: "users.properties" groupProperties: path: "groups.properties" distributedRealm: ~
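As an optional sanity check after configuring a realm, you can call the REST endpoint with the credentials you created. This is a sketch only; it assumes a server listening on the default single port 11222, a user named myuser with password changeme in a properties realm, and the default DIGEST mechanism on the REST endpoint.
curl --digest -u myuser:changeme http://127.0.0.1:11222/rest/v2/caches
A successful response returns the cache names as JSON, while an HTTP 401 response usually indicates that the credentials or the security realm assigned to the endpoint do not match.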
|
[
"user create <username> -p <changeme> -g <role> --users-file=application-users.properties --groups-file=application-groups.properties user create <username> -p <changeme> -g <role> --users-file=management-users.properties --groups-file=management-groups.properties",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"application-realm\"> <properties-realm groups-attribute=\"Roles\"> <user-properties path=\"application-users.properties\"/> <group-properties path=\"application-groups.properties\"/> </properties-realm> </security-realm> <security-realm name=\"management-realm\"> <properties-realm groups-attribute=\"Roles\"> <user-properties path=\"management-users.properties\"/> <group-properties path=\"management-groups.properties\"/> </properties-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"management-realm\", \"properties-realm\": { \"groups-attribute\": \"Roles\", \"user-properties\": { \"digest-realm-name\": \"management-realm\", \"path\": \"management-users.properties\" }, \"group-properties\": { \"path\": \"management-groups.properties\" } } }, { \"name\": \"application-realm\", \"properties-realm\": { \"groups-attribute\": \"Roles\", \"user-properties\": { \"digest-realm-name\": \"application-realm\", \"path\": \"application-users.properties\" }, \"group-properties\": { \"path\": \"application-groups.properties\" } } }] } } }",
"server: security: securityRealms: - name: \"management-realm\" propertiesRealm: groupsAttribute: \"Roles\" userProperties: digestRealmName: \"management-realm\" path: \"management-users.properties\" groupProperties: path: \"management-groups.properties\" - name: \"application-realm\" propertiesRealm: groupsAttribute: \"Roles\" userProperties: digestRealmName: \"application-realm\" path: \"application-users.properties\" groupProperties: path: \"application-groups.properties\"",
"ktutil ktutil: addent -password -p [email protected] -k 1 -e aes256-cts Password for [email protected]: [enter your password] ktutil: wkt http.keytab ktutil: quit",
"ktpass -princ HTTP/[email protected] -pass * -mapuser INFINISPAN\\USER_NAME ktab -k http.keytab -a HTTP/[email protected]",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"kerberos-realm\"> <server-identities> <!-- Specifies a keytab file that provides a Kerberos identity. --> <!-- Names the Kerberos service principal for the Hot Rod endpoint. --> <!-- The required=\"true\" attribute specifies that the keytab file must be present when the server starts. --> <kerberos keytab-path=\"hotrod.keytab\" principal=\"hotrod/[email protected]\" required=\"true\"/> <!-- Specifies a keytab file and names the Kerberos service principal for the REST endpoint. --> <kerberos keytab-path=\"http.keytab\" principal=\"HTTP/[email protected]\" required=\"true\"/> </server-identities> </security-realm> </security-realms> </security> <endpoints> <endpoint socket-binding=\"default\" security-realm=\"kerberos-realm\"> <hotrod-connector> <authentication> <sasl server-name=\"datagrid\" server-principal=\"hotrod/[email protected]\"/> </authentication> </hotrod-connector> <rest-connector> <authentication server-principal=\"HTTP/[email protected]\"/> </rest-connector> </endpoint> </endpoints> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"kerberos-realm\", \"server-identities\": [{ \"kerberos\": { \"principal\": \"hotrod/[email protected]\", \"keytab-path\": \"hotrod.keytab\", \"required\": true }, \"kerberos\": { \"principal\": \"HTTP/[email protected]\", \"keytab-path\": \"http.keytab\", \"required\": true } }] }] }, \"endpoints\": { \"endpoint\": { \"socket-binding\": \"default\", \"security-realm\": \"kerberos-realm\", \"hotrod-connector\": { \"authentication\": { \"security-realm\": \"kerberos-realm\", \"sasl\": { \"server-name\": \"datagrid\", \"server-principal\": \"hotrod/[email protected]\" } } }, \"rest-connector\": { \"authentication\": { \"server-principal\": \"HTTP/[email protected]\" } } } } } }",
"server: security: securityRealms: - name: \"kerberos-realm\" serverIdentities: - kerberos: principal: \"hotrod/[email protected]\" keytabPath: \"hotrod.keytab\" required: \"true\" - kerberos: principal: \"HTTP/[email protected]\" keytabPath: \"http.keytab\" required: \"true\" endpoints: endpoint: socketBinding: \"default\" securityRealm: \"kerberos-realm\" hotrodConnector: authentication: sasl: serverName: \"datagrid\" serverPrincipal: \"hotrod/[email protected]\" restConnector: authentication: securityRealm: \"kerberos-realm\" serverPrincipal\" : \"HTTP/[email protected]\"",
"myuser=a_password user2=another_password",
"myuser=supervisor,reader,writer user2=supervisor",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"default\"> <!-- groups-attribute configures the \"groups.properties\" file to contain security authorization roles. --> <properties-realm groups-attribute=\"Roles\"> <user-properties path=\"users.properties\" relative-to=\"infinispan.server.config.path\" plain-text=\"true\"/> <group-properties path=\"groups.properties\" relative-to=\"infinispan.server.config.path\"/> </properties-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"default\", \"properties-realm\": { \"groups-attribute\": \"Roles\", \"user-properties\": { \"digest-realm-name\": \"default\", \"path\": \"users.properties\", \"relative-to\": \"infinispan.server.config.path\", \"plain-text\": true }, \"group-properties\": { \"path\": \"groups.properties\", \"relative-to\": \"infinispan.server.config.path\" } } }] } } }",
"server: security: securityRealms: - name: \"default\" propertiesRealm: # groupsAttribute configures the \"groups.properties\" file # to contain security authorization roles. groupsAttribute: \"Roles\" userProperties: digestRealmName: \"default\" path: \"users.properties\" relative-to: 'infinispan.server.config.path' plainText: \"true\" groupProperties: path: \"groups.properties\" relative-to: 'infinispan.server.config.path'",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"ldap-realm\"> <!-- Specifies connection properties. --> <ldap-realm url=\"ldap://my-ldap-server:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\" credential=\"strongPassword\" connection-timeout=\"3000\" read-timeout=\"30000\" connection-pooling=\"true\" referral-mode=\"ignore\" page-size=\"30\" direct-verification=\"true\"> <!-- Defines how principals are mapped to LDAP entries. --> <identity-mapping rdn-identifier=\"uid\" search-dn=\"ou=People,dc=infinispan,dc=org\" search-recursive=\"false\"> <!-- Retrieves all the groups of which the user is a member. --> <attribute-mapping> <attribute from=\"cn\" to=\"Roles\" filter=\"(&(objectClass=groupOfNames)(member={1}))\" filter-dn=\"ou=Roles,dc=infinispan,dc=org\"/> </attribute-mapping> </identity-mapping> </ldap-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"ldap-realm\", \"ldap-realm\": { \"url\": \"ldap://my-ldap-server:10389\", \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"credential\": \"strongPassword\", \"connection-timeout\": \"3000\", \"read-timeout\": \"30000\", \"connection-pooling\": \"true\", \"referral-mode\": \"ignore\", \"page-size\": \"30\", \"direct-verification\": \"true\", \"identity-mapping\": { \"rdn-identifier\": \"uid\", \"search-dn\": \"ou=People,dc=infinispan,dc=org\", \"search-recursive\": \"false\", \"attribute-mapping\": [{ \"from\": \"cn\", \"to\": \"Roles\", \"filter\": \"(&(objectClass=groupOfNames)(member={1}))\", \"filter-dn\": \"ou=Roles,dc=infinispan,dc=org\" }] } } }] } } }",
"server: security: securityRealms: - name: ldap-realm ldapRealm: url: 'ldap://my-ldap-server:10389' principal: 'uid=admin,ou=People,dc=infinispan,dc=org' credential: strongPassword connectionTimeout: '3000' readTimeout: '30000' connectionPooling: true referralMode: ignore pageSize: '30' directVerification: true identityMapping: rdnIdentifier: uid searchDn: 'ou=People,dc=infinispan,dc=org' searchRecursive: false attributeMapping: - filter: '(&(objectClass=groupOfNames)(member={1}))' filterDn: 'ou=Roles,dc=infinispan,dc=org' from: cn to: Roles",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"ldap-realm\"> <ldap-realm url=\"ldap://USD{org.infinispan.test.host.address}:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\" credential=\"strongPassword\"> <name-rewriter> <!-- Defines a rewriter that transforms usernames to lowercase --> <case-principal-transformer uppercase=\"false\"/> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"ldap-realm\", \"ldap-realm\": { \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"url\": \"ldap://USD{org.infinispan.test.host.address}:10389\", \"credential\": \"strongPassword\", \"name-rewriter\": { \"case-principal-transformer\": { \"uppercase\": false } } } }] } } }",
"server: security: securityRealms: - name: \"ldap-realm\" ldapRealm: principal: \"uid=admin,ou=People,dc=infinispan,dc=org\" url: \"ldap://USD{org.infinispan.test.host.address}:10389\" credential: \"strongPassword\" nameRewriter: casePrincipalTransformer: uppercase: false # further configuration omitted",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"ldap-realm\"> <ldap-realm url=\"ldap://USD{org.infinispan.test.host.address}:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\" credential=\"strongPassword\"> <name-rewriter> <!-- Defines a rewriter that obtains the first CN from a DN --> <common-name-principal-transformer /> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"ldap-realm\", \"ldap-realm\": { \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"url\": \"ldap://USD{org.infinispan.test.host.address}:10389\", \"credential\": \"strongPassword\", \"name-rewriter\": { \"common-name-principal-transformer\": {} } } }] } } }",
"server: security: securityRealms: - name: \"ldap-realm\" ldapRealm: principal: \"uid=admin,ou=People,dc=infinispan,dc=org\" url: \"ldap://USD{org.infinispan.test.host.address}:10389\" credential: \"strongPassword\" nameRewriter: commonNamePrincipalTransformer: ~ # further configuration omitted",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"ldap-realm\"> <ldap-realm url=\"ldap://USD{org.infinispan.test.host.address}:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\" credential=\"strongPassword\"> <name-rewriter> <!-- Defines a rewriter that extracts the username from the principal using a regular expression. --> <regex-principal-transformer pattern=\"(.*)@INFINISPAN\\.ORG\" replacement=\"USD1\"/> </name-rewriter> <!-- further configuration omitted --> </ldap-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"ldap-realm\", \"ldap-realm\": { \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"url\": \"ldap://USD{org.infinispan.test.host.address}:10389\", \"credential\": \"strongPassword\", \"name-rewriter\": { \"regex-principal-transformer\": { \"pattern\": \"(.*)@INFINISPAN\\\\.ORG\", \"replacement\": \"USD1\" } } } }] } } }",
"server: security: securityRealms: - name: \"ldap-realm\" ldapRealm: principal: \"uid=admin,ou=People,dc=infinispan,dc=org\" url: \"ldap://USD{org.infinispan.test.host.address}:10389\" credential: \"strongPassword\" nameRewriter: regexPrincipalTransformer: pattern: (.*)@INFINISPAN\\.ORG replacement: \"USD1\" # further configuration omitted",
"Users dn: uid=root,ou=People,dc=infinispan,dc=org objectclass: top objectclass: uidObject objectclass: person uid: root cn: root sn: root userPassword: strongPassword Groups dn: cn=admin,ou=Roles,dc=infinispan,dc=org objectClass: top objectClass: groupOfNames cn: admin description: the Infinispan admin group member: uid=root,ou=People,dc=infinispan,dc=org dn: cn=monitor,ou=Roles,dc=infinispan,dc=org objectClass: top objectClass: groupOfNames cn: monitor description: the Infinispan monitor group member: uid=root,ou=People,dc=infinispan,dc=org",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"token-realm\"> <!-- Specifies the URL of the authentication server. --> <token-realm name=\"token\" auth-server-url=\"https://oauth-server/auth/\"> <!-- Specifies the URL of the token introspection endpoint. --> <oauth2-introspection introspection-url=\"https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect\" client-id=\"infinispan-server\" client-secret=\"1fdca4ec-c416-47e0-867a-3d471af7050f\"/> </token-realm> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"token-realm\", \"token-realm\": { \"auth-server-url\": \"https://oauth-server/auth/\", \"oauth2-introspection\": { \"client-id\": \"infinispan-server\", \"client-secret\": \"1fdca4ec-c416-47e0-867a-3d471af7050f\", \"introspection-url\": \"https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect\" } } }] } } }",
"server: security: securityRealms: - name: token-realm tokenRealm: authServerUrl: 'https://oauth-server/auth/' oauth2Introspection: clientId: infinispan-server clientSecret: '1fdca4ec-c416-47e0-867a-3d471af7050f' introspectionUrl: 'https://oauth-server/auth/realms/infinispan/protocol/openid-connect/token/introspect'",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"trust-store-realm\"> <server-identities> <ssl> <!-- Provides an SSL/TLS identity with a keystore that contains server certificates. --> <keystore path=\"server.p12\" relative-to=\"infinispan.server.config.path\" keystore-password=\"secret\" alias=\"server\"/> <!-- Configures a trust store that contains client certificates or part of a certificate chain. --> <truststore path=\"trust.p12\" relative-to=\"infinispan.server.config.path\" password=\"secret\"/> </ssl> </server-identities> <!-- Authenticates client certificates against the trust store. If you configure this, the trust store must contain the public certificates for all clients. --> <truststore-realm/> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"trust-store-realm\", \"server-identities\": { \"ssl\": { \"keystore\": { \"path\": \"server.p12\", \"relative-to\": \"infinispan.server.config.path\", \"keystore-password\": \"secret\", \"alias\": \"server\" }, \"truststore\": { \"path\": \"trust.p12\", \"relative-to\": \"infinispan.server.config.path\", \"password\": \"secret\" } } }, \"truststore-realm\": {} }] } } }",
"server: security: securityRealms: - name: \"trust-store-realm\" serverIdentities: ssl: keystore: path: \"server.p12\" relative-to: \"infinispan.server.config.path\" keystore-password: \"secret\" alias: \"server\" truststore: path: \"trust.p12\" relative-to: \"infinispan.server.config.path\" password: \"secret\" truststoreRealm: ~",
"<server xmlns=\"urn:infinispan:server:14.0\"> <security> <security-realms> <security-realm name=\"distributed-realm\"> <ldap-realm url=\"ldap://my-ldap-server:10389\" principal=\"uid=admin,ou=People,dc=infinispan,dc=org\" credential=\"strongPassword\"> <identity-mapping rdn-identifier=\"uid\" search-dn=\"ou=People,dc=infinispan,dc=org\" search-recursive=\"false\"> <attribute-mapping> <attribute from=\"cn\" to=\"Roles\" filter=\"(&(objectClass=groupOfNames)(member={1}))\" filter-dn=\"ou=Roles,dc=infinispan,dc=org\"/> </attribute-mapping> </identity-mapping> </ldap-realm> <properties-realm groups-attribute=\"Roles\"> <user-properties path=\"users.properties\" relative-to=\"infinispan.server.config.path\"/> <group-properties path=\"groups.properties\" relative-to=\"infinispan.server.config.path\"/> </properties-realm> <distributed-realm/> </security-realm> </security-realms> </security> </server>",
"{ \"server\": { \"security\": { \"security-realms\": [{ \"name\": \"distributed-realm\", \"ldap-realm\": { \"principal\": \"uid=admin,ou=People,dc=infinispan,dc=org\", \"url\": \"ldap://my-ldap-server:10389\", \"credential\": \"strongPassword\", \"identity-mapping\": { \"rdn-identifier\": \"uid\", \"search-dn\": \"ou=People,dc=infinispan,dc=org\", \"search-recursive\": false, \"attribute-mapping\": { \"attribute\": { \"filter\": \"(&(objectClass=groupOfNames)(member={1}))\", \"filter-dn\": \"ou=Roles,dc=infinispan,dc=org\", \"from\": \"cn\", \"to\": \"Roles\" } } } }, \"properties-realm\": { \"groups-attribute\": \"Roles\", \"user-properties\": { \"digest-realm-name\": \"distributed-realm\", \"path\": \"users.properties\" }, \"group-properties\": { \"path\": \"groups.properties\" } }, \"distributed-realm\": {} }] } } }",
"server: security: securityRealms: - name: \"distributed-realm\" ldapRealm: principal: \"uid=admin,ou=People,dc=infinispan,dc=org\" url: \"ldap://my-ldap-server:10389\" credential: \"strongPassword\" identityMapping: rdnIdentifier: \"uid\" searchDn: \"ou=People,dc=infinispan,dc=org\" searchRecursive: \"false\" attributeMapping: attribute: filter: \"(&(objectClass=groupOfNames)(member={1}))\" filterDn: \"ou=Roles,dc=infinispan,dc=org\" from: \"cn\" to: \"Roles\" propertiesRealm: groupsAttribute: \"Roles\" userProperties: digestRealmName: \"distributed-realm\" path: \"users.properties\" groupProperties: path: \"groups.properties\" distributedRealm: ~"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_security_guide/security-realms
|
9.7. Import From Salesforce
|
9.7. Import From Salesforce You can create relational source models from your Salesforce connection using the steps below. Note Depending on the detail provided in the database connection URL information and schema, Steps 5 through 7 may not be required. In Model Explorer, right-click and then click Import... , or click the File > Import... action in the toolbar, or select a project, folder or model in the tree and click Import... Select the import option Teiid Designer > Salesforce >> Source Model and click Next. Select an existing connection profile from the drop-down selector, or click the New... button to launch the New Connection Profile dialog, or click Edit... to modify or change an existing connection profile prior to selection. Note that the Connection Profile selection list will be populated with only Salesforce connection profiles. Figure 9.24. Select Salesforce Credentials Dialog After selecting a Connection Profile, enter the password (if not provided). Click Next to display the Salesforce Objects selection page. Figure 9.25. Select Salesforce Objects Dialog On the Target Model Selection page, specify the target folder location for your generated model, enter a unique model name, and select the desired import options. You have to enter the JNDI name. Click Next (or Finish if enabled). Figure 9.26. Target Model Selection Dialog If you are updating an existing relational model, the next page will be the Review Model Updates page, which shows any differences. Click Finish to create your models and tables. Figure 9.27. Review Model Updates Dialog When finished, the new or changed relational model's package diagram will be displayed showing your new tables. Figure 9.28. New Salesforce Tables Diagram
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/import_from_salesforce
|
Chapter 10. Multicloud Object Gateway
|
Chapter 10. Multicloud Object Gateway 10.1. About the Multicloud Object Gateway The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage. 10.2. Accessing the Multicloud Object Gateway with your applications You can access the object service with any application targeting AWS S3 or code that uses AWS S3 Software Development Kit (SDK). Applications need to specify the Multicloud Object Gateway (MCG) endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information. Prerequisites A running OpenShift Data Foundation Platform. Download the MCG command-line interface for easier management. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at Download RedHat OpenShift Data Foundation page . Note Choose the correct Product Variant according to your architecture. You can access the relevant endpoint, access key, and secret access key in two ways: Section 10.2.1, "Accessing the Multicloud Object Gateway from the terminal" Section 10.2.2, "Accessing the Multicloud Object Gateway from the MCG command-line interface" Example 10.1. Example Accessing the MCG bucket(s) using the virtual-hosted style If the client application tries to access https:// <bucket-name> .s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com <bucket-name> is the name of the MCG bucket For example, https://mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com A DNS entry is needed for mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com to point to the S3 Service. Important Ensure that you have a DNS entry in order to point the client application to the MCG bucket(s) using the virtual-hosted style. 10.2.1. Accessing the Multicloud Object Gateway from the terminal Procedure Run the describe command to view information about the Multicloud Object Gateway (MCG) endpoint, including its access key ( AWS_ACCESS_KEY_ID value) and secret access key ( AWS_SECRET_ACCESS_KEY value). The output will look similar to the following: 1 access key ( AWS_ACCESS_KEY_ID value) 2 secret access key ( AWS_SECRET_ACCESS_KEY value) 3 MCG endpoint 10.2.2. Accessing the Multicloud Object Gateway from the MCG command-line interface Prerequisites Download the MCG command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Procedure Run the status command to access the endpoint, access key, and secret access key: The output will look similar to the following: 1 endpoint 2 access key 3 secret access key You now have the relevant endpoint, access key, and secret access key in order to connect to your applications. Example 10.2. Example If AWS S3 CLI is the application, the following command will list the buckets in OpenShift Data Foundation: 10.3. Allowing user access to the Multicloud Object Gateway Console To allow access to the Multicloud Object Gateway (MCG) Console to a user, ensure that the user meets the following conditions: User is in cluster-admins group. User is in system:cluster-admins virtual group. 
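For reference, the commands referenced in Section 10.2.1 and Section 10.2.2 above usually take the following form. This is a sketch: the openshift-storage namespace reflects a default deployment, and the placeholder values must be replaced with the endpoint, access key, and secret access key from the command output.
oc describe noobaa -n openshift-storage
noobaa status -n openshift-storage
AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> aws --endpoint <endpoint> --no-verify-ssl s3 ls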
Prerequisites A running OpenShift Data Foundation Platform. Procedure Enable access to the MCG console. Perform the following steps once on the cluster : Create a cluster-admins group. Bind the group to the cluster-admin role. Add or remove users from the cluster-admins group to control access to the MCG console. To add a set of users to the cluster-admins group : where <user-name> is the name of the user to be added. Note If you are adding a set of users to the cluster-admins group, you do not need to bind the newly added users to the cluster-admin role to allow access to the OpenShift Data Foundation dashboard. To remove a set of users from the cluster-admins group : where <user-name> is the name of the user to be removed. Verification steps On the OpenShift Web Console, login as a user with access permission to Multicloud Object Gateway Console. Navigate to Storage OpenShift Data Foundation . In the Storage Systems tab, select the storage system and then click Overview Object tab. Select the Multicloud Object Gateway link. Click Allow selected permissions . 10.4. Adding storage resources for hybrid or Multicloud 10.4.1. Creating a new backing store Use this procedure to create a new backing store in OpenShift Data Foundation. Prerequisites Administrator access to OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage OpenShift Data Foundation . Click the Backing Store tab. Click Create Backing Store . On the Create New Backing Store page, perform the following: Enter a Backing Store Name . Select a Provider . Select a Region . Enter an Endpoint . This is optional. Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view which lets you fill in the required secrets. For more information on creating an OCP secret, see the section Creating the secret in the Openshift Container Platform documentation. Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see the Section 10.4.2, "Adding storage resources for hybrid or Multicloud using the MCG command line interface" and follow the procedure for the addition of storage resources using a YAML. Note This menu is relevant for all providers except Google Cloud and local PVC. Enter the Target bucket . The target bucket is a container storage that is hosted on the remote cloud service. It allows you to create a connection that tells the MCG that it can use this bucket for the system. Click Create Backing Store . Verification steps In the OpenShift Web Console, click Storage OpenShift Data Foundation . Click the Backing Store tab to view all the backing stores. 10.4.2. Adding storage resources for hybrid or Multicloud using the MCG command line interface The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud provider and clusters. You must add a backing storage that can be used by the MCG. 
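Before moving on to backing stores, the group and role-binding steps in the console-access procedure of Section 10.3 above are typically performed with commands along the following lines. This is a sketch; <user-name> is a placeholder.
oc adm groups new cluster-admins
oc adm policy add-cluster-role-to-group cluster-admin cluster-admins
oc adm groups add-users cluster-admins <user-name>
oc adm groups remove-users cluster-admins <user-name>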
Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage: For creating an AWS-backed backingstore, see Section 10.4.2.1, "Creating an AWS-backed backingstore" For creating an IBM COS-backed backingstore, see Section 10.4.2.2, "Creating an IBM COS-backed backingstore" For creating an Azure-backed backingstore, see Section 10.4.2.3, "Creating an Azure-backed backingstore" For creating a GCP-backed backingstore, see Section 10.4.2.4, "Creating a GCP-backed backingstore" For creating a local Persistent Volume-backed backingstore, see Section 10.4.2.5, "Creating a local Persistent Volume-backed backingstore" For VMware deployments, skip to Section 10.4.3, "Creating an s3 compatible Multicloud Object Gateway backingstore" for further instructions. 10.4.2.1. Creating an AWS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 10.4.2.2. Creating an IBM COS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. 
To generate the above keys on IBM cloud, you must include HMAC credentials while creating the service credentials for your target bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <bucket-name> with an existing IBM COS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <endpoint> with a regional endpoint that corresponds to the location of the existing IBM bucket name. This argument tells Multicloud Object Gateway which endpoint to use for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 10.4.2.3. Creating an Azure-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> with an AZURE account key and account name you created for this purpose. Replace <blob container name> with an existing Azure blob container name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <blob-container-name> with an existing Azure blob container name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 10.4.2.4. Creating a GCP-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. 
For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <PATH TO GCP PRIVATE KEY JSON FILE> with a path to your GCP private key created for this purpose. Replace <GCP bucket name> with an existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: You can also add storage resources using a YAML: Create a secret with the credentials: You must supply and encode your own GCP service account private key using Base64, and use the results in place of <GCP PRIVATE KEY ENCODED IN BASE64> . Replace <backingstore-secret-name> with a unique name. Apply the following YAML for a specific backing store: Replace <target bucket> with an existing Google storage bucket. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Replace <backingstore-secret-name> with the name of the secret created in the step. 10.4.2.5. Creating a local Persistent Volume-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure From the MCG command-line interface, run the following command: Replace <backingstore_name> with the name of the backingstore. Replace <NUMBER OF VOLUMES> with the number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. Replace <VOLUME SIZE> with the required size, in GB, of each volume. Replace <LOCAL STORAGE CLASS> with the local storage class, recommended to use ocs-storagecluster-ceph-rbd . The output will be similar to the following: You can also add storage resources using a YAML: Apply the following YAML for a specific backing store: Replace <backingstore_name> with the name of the backingstore. Replace <NUMBER OF VOLUMES> with the number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. Replace <VOLUME SIZE> with the required size, in GB, of each volume. Note that the letter G should remain. Replace <LOCAL STORAGE CLASS> with the local storage class, recommended to use ocs-storagecluster-ceph-rbd . 10.4.3. Creating an s3 compatible Multicloud Object Gateway backingstore The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage's RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage's RGW. 
Note that when the RGW is deployed, OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically. Procedure From the MCG command-line interface, run the following command: To get the <RGW ACCESS KEY> and <RGW SECRET KEY> , run the following command using your RGW user secret name: Decode the access key ID and the access key from Base64 and keep them. Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . The output will be similar to the following: You can also create the backingstore using a YAML: Create a CephObjectStore user. This also creates a secret containing the RGW credentials: Replace <RGW-Username> and <Display-name> with a unique username and display name. Apply the following YAML for an S3-Compatible backing store: Replace <backingstore-secret-name> with the name of the secret that was created with CephObjectStore in the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . 10.4.4. Adding storage resources for hybrid and Multicloud using the user interface Procedure In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Storage Systems tab, select the storage system and then click Overview Object tab. Select the Multicloud Object Gateway link. Select the Resources tab in the left, highlighted below. From the list that populates, select Add Cloud Resource . Select Add new connection . Select the relevant native cloud provider or S3 compatible option and fill in the details. Select the newly created connection and map it to the existing bucket. Repeat these steps to create as many backing stores as needed. Note Resources created in NooBaa UI cannot be used by OpenShift UI or MCG CLI. 10.4.5. Creating a new bucket class Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Class (OBC). Use this procedure to create a bucket class in OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage OpenShift Data Foundation . Click the Bucket Class tab. Click Create Bucket Class . On the Create new Bucket Class page, perform the following: Select the bucket class type and enter a bucket class name. Select the BucketClass type . Choose one of the following options: Standard : data will be consumed by a Multicloud Object Gateway (MCG), deduped, compressed and encrypted. Namespace : data is stored on the NamespaceStores without performing de-duplication, compression or encryption. By default, Standard is selected. Enter a Bucket Class Name . Click . In Placement Policy , select Tier 1 - Policy Type and click . You can choose either one of the options as per your requirements. Spread allows spreading of the data across the chosen resources. Mirror allows full duplication of the data across the chosen resources. Click Add Tier to add another policy tier. Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click . 
Alternatively, you can also create a new backing store . Note You need to select at least 2 backing stores when you select Policy Type as Mirror in step. Review and confirm Bucket Class settings. Click Create Bucket Class . Verification steps In the OpenShift Web Console, click Storage OpenShift Data Foundation . Click the Bucket Class tab and search the new Bucket Class. 10.4.6. Editing a bucket class Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button on the Openshift web console. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage OpenShift Data Foundation . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class . You are redirected to the YAML file, make the required changes in this file and click Save . 10.4.7. Editing backing stores for bucket class Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class. Prerequisites Administrator access to OpenShift Web Console. A bucket class. Backing stores. Procedure In the OpenShift Web Console, click Storage OpenShift Data Foundation . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class Resources . On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies. To add a backing store to the bucket class, select the name of the backing store. To remove a backing store from the bucket class, clear the name of the backing store. Click Save . 10.5. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. You can interact with objects in a namespace bucket using the S3 API. See S3 API endpoints for objects in namespace buckets for more information. Note A namespace bucket can only be used if its write target is available and functional. 10.5.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Red Hat OpenShift Data Foundation 4.6 onwards supports the following namespace bucket operations: ListObjectVersions ListObjects PutObject CopyObject ListParts CreateMultipartUpload CompleteMultipartUpload UploadPart UploadPartCopy AbortMultipartUpload GetObjectAcl GetObject HeadObject DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 10.5.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . 
Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 10.5.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites A running OpenShift Data Foundation Platform Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Create a NamespaceStore resource using OpenShift Custom Resource Definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: Replace <resource-name> with the name you want to give to the resource. Replace <namespacestore-secret-name> with the secret created in step 1. Replace <namespace-secret> with the namespace where the secret can be found. Replace <target-bucket> with the target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: Replace <my-bucket-class> with a unique namespace bucket class name. Replace <resource> with the name of a single namespace-store that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with the name of a single namespace-store that defines the write target of the namespace bucket. Replace <read-resources> with a list of the names of the namespace-stores that defines the read targets of the namespace bucket. Apply the following YAML to create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in step 2. Replace <resource-name> with the name you want to give to the resource. Replace <my-bucket> with the name you want to give to the bucket. Replace <my-bucket-class> with the bucket class created in the step. Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and on the same namespace of the OBC. 10.5.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Create a NamespaceStore resource using OpenShift Custom Resource Definitions (CRDs). 
A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <namespacestore-secret-name> with the secret created in step 1. Replace <namespace-secret> with the namespace where the secret can be found. Replace <target-bucket> with the target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: Replace <my-bucket-class> with a unique namespace bucket class name. Replace <resource> with the name of a single namespace-store that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with the name of a single namespace-store that defines the write target of the namespace bucket. Replace <read-resources> with a list of the names of namespace-stores that defines the read targets of the namespace bucket. Apply the following YAML to create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in step 2. Replace <resource-name> with the name you want to give to the resource. Replace <my-bucket> with the name you want to give to the bucket. Replace <my-bucket-class> with the bucket class created in the step. Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and on the same namespace of the OBC. 10.5.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, in case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . Run the following command to create a namespace bucket class with a namespace policy of type single : Replace <resource-name> with the name you want to give the resource.
Replace <my-bucket-class> with a unique bucket class name. Replace <resource> with a single namespace-store that defines the read and write target of the namespace bucket. Run the following command to create a namespace bucket class with a namespace policy of type multi : Replace <resource-name> with the name you want to give the resource. Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with a single namespace-store that defines the write target of the namespace bucket. Replace <read-resources> with a list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Run the following command to create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in step 2. Replace <bucket-name> with a bucket name of your choice. Replace <custom-bucket-class> with the name of the bucket class created in step 2. Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and on the same namespace of the OBC. 10.5.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . Run the following command to create a namespace bucket class with a namespace policy of type single : Replace <resource-name> with the name you want to give the resource. Replace <my-bucket-class> with a unique bucket class name. Replace <resource> with a single namespace-store that defines the read and write target of the namespace bucket. Run the following command to create a namespace bucket class with a namespace policy of type multi : Replace <resource-name> with the name you want to give the resource. Replace <my-bucket-class> with a unique bucket class name. Replace <write-resource> with a single namespace-store that defines the write target of the namespace bucket. Replace <read-resources> with a list of namespace-stores separated by commas that defines the read targets of the namespace bucket. 
Run the following command to create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in step 2. Replace <bucket-name> with a bucket name of your choice. Replace <custom-bucket-class> with the name of the bucket class created in step 2. Once the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and on the same namespace of the OBC. 10.5.3. Adding a namespace bucket using the OpenShift Container Platform user interface With the release of OpenShift Data Foundation 4.8, namespace buckets can be added using the OpenShift Container Platform user interface. For more information about namespace buckets, see Managing namespace buckets . Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage OpenShift Data Foundation. Click the Namespace Store tab to create a namespacestore resources to be used in the namespace bucket. Click Create namespace store . Enter a namespacestore name. Choose a provider. Choose a region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Choose a target bucket. Click Create . Verify the namespacestore is in the Ready state. Repeat these steps until you have the desired amount of resources. Click the Bucket Class tab Create a new Bucket Class . Select the Namespace radio button. Enter a Bucket Class name. Add a description (optional). Click . Choose a namespace policy type for your namespace bucket, and then click . Select the target resource(s). If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Click . Review your new bucket class, and then click Create Bucketclass . On the BucketClass page, verify that your newly created resource is in the Created phase. In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card, click Storage System and click the storage system link from the pop up that appears. In the Object tab, click Multicloud Object Gateway Buckets Namespace Buckets tab . Click Create Namespace Bucket . On the Choose Name tab, specify a Name for the namespace bucket and click . On the Set Placement tab: Under Read Policy , select the checkbox for each namespace resource created in step 5 that the namespace bucket should read data from. If the namespace policy type you are using is Multi , then Under Write Policy , specify which namespace resource the namespace bucket should write data to. Click . Click Create . Verification Verify that the namespace bucket is listed with a green check mark in the State column, the expected number of read resources, and the expected write resource name. 10.6. Mirroring data for hybrid and Multicloud buckets The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud provider and clusters. Prerequisites You must first add a backing storage that can be used by the MCG, see Section 10.4, "Adding storage resources for hybrid or Multicloud" . Then you create a bucket class that reflects the data management policy, mirroring. 
Procedure You can set up mirroring data in three ways: Section 10.6.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 10.6.2, "Creating bucket classes to mirror data using a YAML" Section 10.6.3, "Configuring buckets to mirror data using the user interface" 10.6.1. Creating bucket classes to mirror data using the MCG command-line-interface From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim, generating a new bucket that will be mirrored between two locations: 10.6.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS: Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Section 10.8, "Object Bucket Claim" . 10.6.3. Configuring buckets to mirror data using the user interface In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card, click Storage System and click the storage system link from the pop up that appears. In the Object tab, click the Multicloud Object Gateway link. On the NooBaa page, click the buckets icon on the left side. You can see a list of your buckets: Click the bucket you want to update. Click Edit Tier 1 Resources : Select Mirror and check the relevant resources you want to use for this bucket. In the following example, the data between noobaa-default-backing-store which is on RGW and AWS-backingstore which is on AWS is mirrored: Click Save . Note Resources created in NooBaa UI cannot be used by OpenShift UI or Multicloud Object Gateway (MCG) CLI. 10.7. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 10.7.1. About bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 10.7.2. Using bucket policies Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Section 10.2, "Accessing the Multicloud Object Gateway with your applications" Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. See the following example: There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . Instructions for creating S3 users can be found in Section 10.7.3, "Creating an AWS S3 user in the Multicloud Object Gateway" . Using AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . 
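As an illustration only (the bucket name, account, and endpoint below are hypothetical placeholders, not values from this document), a minimal policy that grants a NooBaa account read access to a single bucket, and the corresponding put-bucket-policy call, could look like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": [ "[email protected]" ] },
            "Action": [ "s3:GetObject", "s3:ListBucket" ],
            "Resource": [ "arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*" ]
        }
    ]
}
aws --endpoint-url https://s3-openshift-storage.apps.example.com --no-verify-ssl s3api put-bucket-policy --bucket mybucket --policy file://BucketPolicy.json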
Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. 10.7.3. Creating an AWS S3 user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Section 10.2, "Accessing the Multicloud Object Gateway with your applications" Procedure In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card, click Storage System and click the storage system link from the pop up that appears. In the Object tab, click the Multicloud Object Gateway link. Under the Accounts tab, click Create Account . Select S3 Access Only , provide the Account Name , for example, [email protected] . Click . Select S3 default placement , for example, noobaa-default-backing-store . Select Buckets Permissions . A specific bucket or all buckets can be selected. Click Create . 10.8. Object Bucket Claim An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads. You can create an Object Bucket Claim in three ways: Section 10.8.1, "Dynamic Object Bucket Claim" Section 10.8.2, "Creating an Object Bucket Claim using the command line interface" Section 10.8.3, "Creating an Object Bucket Claim using the OpenShift Web Console" An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can't create new buckets by default. 10.8.1. Dynamic Object Bucket Claim Similar to Persistent Volumes, you can add the details of the Object Bucket claim (OBC) to your application's YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application. Procedure Add the following lines to your application YAML: These lines are the OBC itself. Replace <obc-name> with a unique OBC name. Replace <obc-bucket-name> with a unique bucket name for your OBC. You can add more lines to the YAML file to automate the use of the OBC. The example below is the mapping between the bucket claim result, which is a configuration map with data and a secret with the credentials. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account. Replace all instances of <obc-name> with your OBC name. Replace <your application image> with your application image. Apply the updated YAML file: Replace <yaml.file> with the name of your YAML file. To view the new configuration map, run the following: Replace obc-name with the name of your OBC. You can expect the following environment variables in the output: BUCKET_HOST - Endpoint to use in the application. BUCKET_PORT - The port available for the application. The port is related to the BUCKET_HOST . For example, if the BUCKET_HOST is https://my.example.com , and the BUCKET_PORT is 443, the endpoint for the object service would be https://my.example.com:443 . BUCKET_NAME - Requested or generated bucket name. AWS_ACCESS_KEY_ID - Access key that is part of the credentials. AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials.
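For example, assuming a hypothetical OBC named my-obc in the my-app namespace (names not taken from this document), the endpoint and the Base64-encoded credentials can be read and decoded as follows:
oc get configmap my-obc -n my-app -o jsonpath='{.data.BUCKET_HOST}'
oc get secret my-obc -n my-app -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
oc get secret my-obc -n my-app -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d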
Important Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . The names are used so that it is compatible with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them. <obc_name> Specify the name of the object bucket claim. 10.8.2. Creating an Object Bucket Claim using the command line interface When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Procedure Use the command-line interface to generate the details of a new bucket and credentials. Run the following command: Replace <obc-name> with a unique OBC name, for example, myappobc . Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret will be created, for example, myapp-namespace . Example output: The MCG command-line-interface has created the necessary configuration and has informed OpenShift about the new OBC. Run the following command to view the OBC: Example output: Run the following command to view the YAML file for the new OBC: Example output: Inside of your openshift-storage namespace, you can find the configuration map and the secret to use this OBC. The CM and the secret have the same name as the OBC. Run the following command to view the secret: Example output: The secret gives you the S3 access credentials. Run the following command to view the configuration map: Example output: The configuration map contains the S3 endpoint information for your application. 10.8.3. Creating an Object Bucket Claim using the OpenShift Web Console You can create an Object Bucket Claim (OBC) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 10.8.1, "Dynamic Object Bucket Claim" . Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage Object Bucket Claims Create Object Bucket Claim . Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu: Internal mode The following storage classes, which were created after deployment, are available for use: ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG) External mode The following storage classes, which were created after deployment, are available for use: ocs-external-storagecluster-ceph-rgw uses the RGW openshift-storage.noobaa.io uses the MCG Note The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from OpenShift Data Foundation releases. Click Create . Once you create the OBC, you are redirected to its detail page: Additional Resources Section 10.8, "Object Bucket Claim" 10.8.4. 
Attaching an Object Bucket Claim to a deployment Once created, Object Bucket Claims (OBCs) can be attached to specific deployments. Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage Object Bucket Claims . Click the Action menu (...) to the OBC you created. From the drop-down menu, select Attach to Deployment . Select the desired deployment from the Deployment Name list, then click Attach . Additional Resources Section 10.8, "Object Bucket Claim" 10.8.5. Viewing object buckets using the OpenShift Web Console You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage Object Buckets . Alternatively, you can also navigate to the details page of a specific OBC and click the Resource link to view the object buckets for that OBC. Select the object bucket you want to see details for. You are navigated to the Object Bucket Details page. Additional Resources Section 10.8, "Object Bucket Claim" 10.8.6. Deleting Object Bucket Claims Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage Object Bucket Claims . Click the Action menu (...) to the Object Bucket Claim (OBC) you want to delete. Select Delete Object Bucket Claim . Click Delete . Additional Resources Section 10.8, "Object Bucket Claim" 10.9. Caching policy for object buckets A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket. Important Cache buckets are a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . AWS S3 IBM COS 10.9.1. Creating an AWS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. In case of IBM Z infrastructure use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the namespacestore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. 
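For illustration, a filled-in form of this command might look like the following sketch; the namespacestore and bucket names are hypothetical, and the key placeholders must still be replaced with your real AWS credentials:
noobaa namespacestore create aws-s3 aws-hub-store --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket my-hub-bucket -n openshift-storage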
You can also add storage resources by applying a YAML. First create a secret with credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <namespacestore-secret-name> with the secret created in the step. Replace <namespace-secret> with the namespace used to create the secret in the step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-cache-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the step. Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 10.9.2. Creating an IBM COS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z infrastructure, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, Create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <backingstore-secret-name> with the secret created in the step. Replace <namespace-secret> with the namespace used to create the secret in the step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. 
Replace <namespacestore> with the namespacestore created in the step. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 10.10. Scaling Multicloud Object Gateway performance by adding endpoints The Multicloud Object Gateway performance may vary from one environment to another. In some cases, specific applications require faster performance which can be easily addressed by scaling S3 endpoints. The Multicloud Object Gateway resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service 10.10.1. Scaling the Multicloud Object Gateway with storage nodes Prerequisites A running OpenShift Data Foundation cluster on OpenShift Container Platform with access to the Multicloud Object Gateway (MCG). A storage node in the MCG is a NooBaa daemon container attached to one or more Persistent Volumes (PVs) and used for local object service data storage. NooBaa daemons can be deployed on Kubernetes nodes. This can be done by creating a Kubernetes pool consisting of StatefulSet pods. Procedure Log in to OpenShift Web Console . From the MCG user interface, click Overview Add Storage Resources . In the window, click Deploy Kubernetes Pool . In the Create Pool step create the target pool for the future installed nodes. In the Configure step, configure the number of requested pods and the size of each PV. For each new pod, one PV is to be created. In the Review step, you can find the details of the new pool and select the deployment method you wish to use: local or external deployment. If local deployment is selected, the Kubernetes nodes will deploy within the cluster. If external deployment is selected, you will be provided with a YAML file to run externally. All nodes will be assigned to the pool you chose in the first step, and can be found under Resources Storage resources Resource name . 10.11. Automatic scaling of MultiCloud Object Gateway endpoints The number of MultiCloud Object Gateway (MCG) endpoints scale automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request. When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG.
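To observe the scaling behavior described above, you can watch the endpoint pods and any associated autoscaler. The commands below are a generic sketch; the noobaa-endpoint pod naming is an assumption based on the default deployment and may differ in your cluster:
oc get pods -n openshift-storage | grep noobaa-endpoint
oc get hpa -n openshift-storage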
|
[
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"oc describe noobaa -n openshift-storage",
"Name: noobaa Namespace: openshift-storage Labels: <none> Annotations: <none> API Version: noobaa.io/v1alpha1 Kind: NooBaa Metadata: Creation Timestamp: 2019-07-29T16:22:06Z Generation: 1 Resource Version: 6718822 Self Link: /apis/noobaa.io/v1alpha1/namespaces/openshift-storage/noobaas/noobaa UID: 019cfb4a-b21d-11e9-9a02-06c8de012f9e Spec: Status: Accounts: Admin: Secret Ref: Name: noobaa-admin Namespace: openshift-storage Actual Image: noobaa/noobaa-core:4.0 Observed Generation: 1 Phase: Ready Readme: Welcome to NooBaa! ----------------- Welcome to NooBaa! ----------------- NooBaa Core Version: NooBaa Operator Version: Lets get started: 1. Connect to Management console: Read your mgmt console login information (email & password) from secret: \"noobaa-admin\". kubectl get secret noobaa-admin -n openshift-storage -o json | jq '.data|map_values(@base64d)' Open the management console service - take External IP/DNS or Node Port or use port forwarding: kubectl port-forward -n openshift-storage service/noobaa-mgmt 11443:443 & open https://localhost:11443 2. Test S3 client: kubectl port-forward -n openshift-storage service/s3 10443:443 & 1 NOOBAA_ACCESS_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d') 2 NOOBAA_SECRET_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d') alias s3='AWS_ACCESS_KEY_ID=USDNOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=USDNOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3' s3 ls Services: Service Mgmt: External DNS: https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443 Internal DNS: https://noobaa-mgmt.openshift-storage.svc:443 Internal IP: https://172.30.235.12:443 Node Ports: https://10.0.142.103:31385 Pod Ports: https://10.131.0.19:8443 serviceS3: External DNS: 3 https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443 Internal DNS: https://s3.openshift-storage.svc:443 Internal IP: https://172.30.86.41:443 Node Ports: https://10.0.142.103:31011 Pod Ports: https://10.131.0.19:6443",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa status -n openshift-storage",
"INFO[0000] Namespace: openshift-storage INFO[0000] INFO[0000] CRD Status: INFO[0003] ✅ Exists: CustomResourceDefinition \"noobaas.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"backingstores.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"bucketclasses.noobaa.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbucketclaims.objectbucket.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbuckets.objectbucket.io\" INFO[0004] INFO[0004] Operator Status: INFO[0004] ✅ Exists: Namespace \"openshift-storage\" INFO[0004] ✅ Exists: ServiceAccount \"noobaa\" INFO[0005] ✅ Exists: Role \"ocs-operator.v0.0.271-6g45f\" INFO[0005] ✅ Exists: RoleBinding \"ocs-operator.v0.0.271-6g45f-noobaa-f9vpj\" INFO[0006] ✅ Exists: ClusterRole \"ocs-operator.v0.0.271-fjhgh\" INFO[0006] ✅ Exists: ClusterRoleBinding \"ocs-operator.v0.0.271-fjhgh-noobaa-pdxn5\" INFO[0006] ✅ Exists: Deployment \"noobaa-operator\" INFO[0006] INFO[0006] System Status: INFO[0007] ✅ Exists: NooBaa \"noobaa\" INFO[0007] ✅ Exists: StatefulSet \"noobaa-core\" INFO[0007] ✅ Exists: Service \"noobaa-mgmt\" INFO[0008] ✅ Exists: Service \"s3\" INFO[0008] ✅ Exists: Secret \"noobaa-server\" INFO[0008] ✅ Exists: Secret \"noobaa-operator\" INFO[0008] ✅ Exists: Secret \"noobaa-admin\" INFO[0009] ✅ Exists: StorageClass \"openshift-storage.noobaa.io\" INFO[0009] ✅ Exists: BucketClass \"noobaa-default-bucket-class\" INFO[0009] ✅ (Optional) Exists: BackingStore \"noobaa-default-backing-store\" INFO[0010] ✅ (Optional) Exists: CredentialsRequest \"noobaa-cloud-creds\" INFO[0010] ✅ (Optional) Exists: PrometheusRule \"noobaa-prometheus-rules\" INFO[0010] ✅ (Optional) Exists: ServiceMonitor \"noobaa-service-monitor\" INFO[0011] ✅ (Optional) Exists: Route \"noobaa-mgmt\" INFO[0011] ✅ (Optional) Exists: Route \"s3\" INFO[0011] ✅ Exists: PersistentVolumeClaim \"db-noobaa-core-0\" INFO[0011] ✅ System Phase is \"Ready\" INFO[0011] ✅ Exists: \"noobaa-admin\" #------------------# #- Mgmt Addresses -# #------------------# ExternalDNS : [https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31385] InternalDNS : [https://noobaa-mgmt.openshift-storage.svc:443] InternalIP : [https://172.30.235.12:443] PodPorts : [https://10.131.0.19:8443] #--------------------# #- Mgmt Credentials -# #--------------------# email : [email protected] password : HKLbH1rSuVU0I/souIkSiA== #----------------# #- S3 Addresses -# #----------------# 1 ExternalDNS : [https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31011] InternalDNS : [https://s3.openshift-storage.svc:443] InternalIP : [https://172.30.86.41:443] PodPorts : [https://10.131.0.19:6443] #------------------# #- S3 Credentials -# #------------------# 2 AWS_ACCESS_KEY_ID : jVmAsu9FsvRHYmfjTiHV 3 AWS_SECRET_ACCESS_KEY : E//420VNedJfATvVSmDz6FMtsSAzuBv6z180PT5c #------------------# #- Backing Stores -# #------------------# NAME TYPE TARGET-BUCKET PHASE AGE noobaa-default-backing-store aws-s3 noobaa-backing-store-15dc896d-7fe0-4bed-9349-5942211b93c9 Ready 141h35m32s #------------------# #- Bucket Classes -# #------------------# NAME PLACEMENT PHASE AGE noobaa-default-bucket-class {Tiers:[{Placement: BackingStores:[noobaa-default-backing-store]}]} Ready 141h35m33s #-----------------# #- Bucket Claims -# 
#-----------------# No OBC's found.",
"AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls",
"oc adm groups new cluster-admins",
"oc adm policy add-cluster-role-to-group cluster-admin cluster-admins",
"oc adm groups add-users cluster-admins <user-name> <user-name> <user-name>",
"oc adm groups remove-users cluster-admins <user-name> <user-name> <user-name>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"aws-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-aws-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: awsS3: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: aws-s3",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"ibm-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-ibm-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: ibmCos: endpoint: <endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: ibm-cos",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name>",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"azure-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-azure-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: azureBlob: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBlobContainer: <blob-container-name> type: azure-blob",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name>",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"google-gcp\" INFO[0002] ✅ Created: Secret \"backing-store-google-cloud-storage-gcp\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: googleCloudStorage: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <target bucket> type: google-cloud-storage",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create pv-pool <backingstore_name> --num-volumes=<NUMBER OF VOLUMES> --pv-size-gb=<VOLUME SIZE> --storage-class=<LOCAL STORAGE CLASS>",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Exists: BackingStore \"local-mcg-storage\"",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore_name> namespace: openshift-storage spec: pvPool: numVolumes: <NUMBER OF VOLUMES> resources: requests: storage: <VOLUME SIZE> storageClass: <LOCAL STORAGE CLASS> type: pv-pool",
"noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint>",
"get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"rgw-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-rgw-resource\"",
"apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: <RGW-Username> namespace: openshift-storage spec: store: ocs-storagecluster-cephobjectstore displayName: \"<Display-name>\"",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore-name> namespace: openshift-storage spec: s3Compatible: endpoint: <RGW endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage signatureVersion: v4 targetBucket: <RGW-bucket-name> type: s3-compatible",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror",
"noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror",
"additionalConfig: bucketclass: mirror-to-aws",
"{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }",
"aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy BucketPolicy",
"aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <obc-name> spec: generateBucketName: <obc-bucket-name> storageClassName: openshift-storage.noobaa.io",
"apiVersion: batch/v1 kind: Job metadata: name: testjob spec: template: spec: restartPolicy: OnFailure containers: - image: <your application image> name: test env: - name: BUCKET_NAME valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_NAME - name: BUCKET_HOST valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_HOST - name: BUCKET_PORT valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_PORT - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: <obc-name> key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: <obc-name> key: AWS_SECRET_ACCESS_KEY",
"oc apply -f <yaml.file>",
"oc get cm <obc-name> -o yaml",
"oc get secret <obc_name> -o yaml",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa obc create <obc-name> -n openshift-storage",
"INFO[0001] ✅ Created: ObjectBucketClaim \"test21obc\"",
"oc get obc -n openshift-storage",
"NAME STORAGE-CLASS PHASE AGE test21obc openshift-storage.noobaa.io Bound 38s",
"oc get obc test21obc -o yaml -n openshift-storage",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer generation: 2 labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage resourceVersion: \"40756\" selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af spec: ObjectBucketName: obc-openshift-storage-test21obc bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 generateBucketName: test21obc storageClassName: openshift-storage.noobaa.io status: phase: Bound",
"oc get -n openshift-storage secret test21obc -o yaml",
"Example output: apiVersion: v1 data: AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY= AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ== kind: Secret metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40751\" selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc uid: 65117c1c-f662-11e9-9094-0a5305de57bb type: Opaque",
"oc get -n openshift-storage cm test21obc -o yaml",
"apiVersion: v1 data: BUCKET_HOST: 10.0.171.35 BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 BUCKET_PORT: \"31242\" BUCKET_REGION: \"\" BUCKET_SUBREGION: \"\" kind: ConfigMap metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40752\" selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc uid: 651c6501-f662-11e9-9094-0a5305de57bb",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <backingstore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hubResource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/multicloud-object-gateway_osp
|
Chapter 10. LDAP Authentication Setup for Red Hat Quay
|
Chapter 10. LDAP Authentication Setup for Red Hat Quay Lightweight Directory Access Protocol (LDAP) is an open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network. Red Hat Quay supports using LDAP as an identity provider. 10.1. Considerations when enabling LDAP Prior to enabling LDAP for your Red Hat Quay deployment, you should consider the following. Existing Red Hat Quay deployments Conflicts between usernames can arise when you enable LDAP for an existing Red Hat Quay deployment that already has users configured. For example, one user, alice , was manually created in Red Hat Quay prior to enabling LDAP. If the username alice also exists in the LDAP directory, Red Hat Quay automatically creates a new user, alice-1 , when alice logs in for the first time using LDAP. Red Hat Quay then automatically maps the LDAP credentials to the alice account. For consistency reasons, this might be erroneous for your Red Hat Quay deployment. It is recommended that you remove any potentially conflicting local account names from Red Hat Quay prior to enabling LDAP. Manual User Creation and LDAP authentication When Red Hat Quay is configured for LDAP, LDAP-authenticated users are automatically created in Red Hat Quay's database on first log in, if the configuration option FEATURE_USER_CREATION is set to true . If this option is set to false , the automatic user creation for LDAP users fails, and the user is not allowed to log in. In this scenario, the superuser needs to create the desired user account first. Conversely, if FEATURE_USER_CREATION is set to true , this also means that a user can still create an account from the Red Hat Quay login screen, even if there is an equivalent user in LDAP. 10.2. Configuring LDAP for Red Hat Quay You can configure LDAP for Red Hat Quay by updating your config.yaml file directly and restarting your deployment. Use the following procedure as a reference when configuring LDAP for Red Hat Quay. Update your config.yaml file directly to include the following relevant information: # ... AUTHENTICATION_TYPE: LDAP 1 # ... LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com 2 LDAP_ADMIN_PASSWD: ABC123 3 LDAP_ALLOW_INSECURE_FALLBACK: false 4 LDAP_BASE_DN: 5 - dc=example - dc=com LDAP_EMAIL_ATTR: mail 6 LDAP_UID_ATTR: uid 7 LDAP_URI: ldap://<example_url>.com 8 LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) 9 LDAP_USER_RDN: 10 - ou=people LDAP_SECONDARY_USER_RDNS: 11 - ou=<example_organization_unit_one> - ou=<example_organization_unit_two> - ou=<example_organization_unit_three> - ou=<example_organization_unit_four> # ... 1 Required. Must be set to LDAP . 2 Required. The admin DN for LDAP authentication. 3 Required. The admin password for LDAP authentication. 4 Required. Whether to allow SSL/TLS insecure fallback for LDAP authentication. 5 Required. The base DN for LDAP authentication. 6 Required. The email attribute for LDAP authentication. 7 Required. The UID attribute for LDAP authentication. 8 Required. The LDAP URI. 9 Required. The user filter for LDAP authentication. 10 Required. The user RDN for LDAP authentication. 11 Optional. Secondary User Relative DNs if there are multiple Organizational Units where user objects are located. After you have added all required LDAP fields, save the changes and restart your Red Hat Quay deployment. 10.3. 
Enabling the LDAP_RESTRICTED_USER_FILTER configuration field The LDAP_RESTRICTED_USER_FILTER configuration field is a subset of the LDAP_USER_FILTER configuration field. When configured, this option allows Red Hat Quay administrators to configure LDAP users as restricted users when Red Hat Quay uses LDAP as its authentication provider. Use the following procedure to enable LDAP restricted users on your Red Hat Quay deployment. Prerequisites Your Red Hat Quay deployment uses LDAP as its authentication provider. You have configured the LDAP_USER_FILTER field in your config.yaml file. Procedure In your deployment's config.yaml file, add the LDAP_RESTRICTED_USER_FILTER parameter and specify the group of restricted users, for example, members : # ... AUTHENTICATION_TYPE: LDAP # ... FEATURE_RESTRICTED_USERS: true 1 # ... LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>) 2 LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com # ... 1 Must be set to true when configuring an LDAP restricted user. 2 Configures specified users as restricted users. Start, or restart, your Red Hat Quay deployment. After enabling the LDAP_RESTRICTED_USER_FILTER feature, your LDAP Red Hat Quay users are restricted from reading and writing content, and creating organizations. 10.4. Enabling the LDAP_SUPERUSER_FILTER configuration field With the LDAP_SUPERUSER_FILTER field configured, Red Hat Quay administrators can configure Lightweight Directory Access Protocol (LDAP) users as superusers if Red Hat Quay uses LDAP as its authentication provider. Use the following procedure to enable LDAP superusers on your Red Hat Quay deployment. Prerequisites Your Red Hat Quay deployment uses LDAP as its authentication provider. You have configured the LDAP_USER_FILTER field in your config.yaml file. Procedure In your deployment's config.yaml file, add the LDAP_SUPERUSER_FILTER parameter and add the group of users you want configured as superusers, for example, root : # ... AUTHENTICATION_TYPE: LDAP # ... LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_SUPERUSER_FILTER: (<filterField>=<value>) 1 LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com # ... 1 Configures specified users as superusers. Start, or restart, your Red Hat Quay deployment. After enabling the LDAP_SUPERUSER_FILTER feature, your LDAP Red Hat Quay users have superuser privileges. The following options are available to superusers: Manage users Manage organizations Manage service keys View the change log Query the usage logs Create globally visible user messages 10.5.
Common LDAP configuration issues The following errors might be returned with an invalid configuration. Invalid credentials . If you receive this error, the Administrator DN or Administrator DN password values are incorrect. Ensure that you are providing accurate Administrator DN and password values. Verification of superuser %USERNAME% failed . This error is returned for the following reasons: The username has not been found. The user does not exist in the remote authentication system. LDAP authorization is configured improperly. Cannot find the current logged in user . When configuring LDAP for Red Hat Quay, there may be situations where the LDAP connection is established successfully using the username and password provided in the Administrator DN fields. However, if the current logged-in user cannot be found within the specified User Relative DN path using the UID Attribute or Mail Attribute fields, there are typically two potential reasons for this: The current logged in user does not exist in the User Relative DN path. The Administrator DN does not have rights to search or read the specified LDAP path. To fix this issue, ensure that the logged in user is included in the User Relative DN path, or provide the correct permissions to the Administrator DN account. 10.6. LDAP configuration fields For a full list of LDAP configuration fields, see LDAP configuration fields
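When you hit these errors, it can help to test the LDAP side independently of Red Hat Quay. The following is a minimal sketch using the OpenLDAP ldapsearch client; the server URI, bind DN, password, base DN, user RDN, and username are placeholders for your own environment rather than values taken from this chapter:
# Confirm that the Administrator DN and password can bind at all.
ldapsearch -x -H ldap://<example_url>.com -D "uid=<name>,ou=Users,dc=example,dc=com" -w '<password>' -b "dc=example,dc=com" -s base "(objectclass=*)" dn
# Confirm that the logged-in user is visible under the User Relative DN path with the configured UID attribute.
ldapsearch -x -H ldap://<example_url>.com -D "uid=<name>,ou=Users,dc=example,dc=com" -w '<password>' -b "ou=people,dc=example,dc=com" "(uid=<username>)" uid mail
If the second search returns nothing, the user is either outside the User Relative DN path or the Administrator DN lacks read rights on that path, which matches the last error described above.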
|
[
"AUTHENTICATION_TYPE: LDAP 1 LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com 2 LDAP_ADMIN_PASSWD: ABC123 3 LDAP_ALLOW_INSECURE_FALLBACK: false 4 LDAP_BASE_DN: 5 - dc=example - dc=com LDAP_EMAIL_ATTR: mail 6 LDAP_UID_ATTR: uid 7 LDAP_URI: ldap://<example_url>.com 8 LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) 9 LDAP_USER_RDN: 10 - ou=people LDAP_SECONDARY_USER_RDNS: 11 - ou=<example_organization_unit_one> - ou=<example_organization_unit_two> - ou=<example_organization_unit_three> - ou=<example_organization_unit_four>",
"AUTHENTICATION_TYPE: LDAP FEATURE_RESTRICTED_USERS: true 1 LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>) 2 LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com",
"AUTHENTICATION_TYPE: LDAP LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com) LDAP_SUPERUSER_FILTER: (<filterField>=<value>) 1 LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/manage_red_hat_quay/ldap-authentication-setup-for-quay-enterprise
|
Chapter 3. Preparing networks for Red Hat OpenStack Services on OpenShift
|
Chapter 3. Preparing networks for Red Hat OpenStack Services on OpenShift To prepare for configuring and deploying your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you must configure the Red Hat OpenShift Container Platform (RHOCP) networks on your RHOCP cluster. 3.1. Default Red Hat OpenStack Services on OpenShift networks The following physical data center networks are typically implemented for a Red Hat OpenStack Services on OpenShift (RHOSO) deployment: Control plane network: This network is used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances. External network: (Optional) You can configure an external network if one is required for your environment. For example, you might create an external network for any of the following purposes: To provide virtual machine instances with Internet access. To create flat provider networks that are separate from the control plane. To configure VLAN provider networks on a separate bridge from the control plane. To provide access to virtual machine instances with floating IPs on a network other than the control plane network. Internal API network: This network is used for internal communication between RHOSO components. Storage network: This network is used for block storage, RBD, NFS, FC, and iSCSI. Tenant (project) network: This network is used for data communication between virtual machine instances within the cloud deployment. Storage Management network: (Optional) This network is used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data. Note For more information on Red Hat Ceph Storage network configuration, see Ceph network configuration in the Red Hat Ceph Storage Configuration Guide . The following table details the default networks used in a RHOSO deployment. If required, you can update the networks for your environment. Note By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC. Table 3.1. Default RHOSO networks Network name VLAN CIDR NetConfig allocationRange MetalLB IPAddressPool range net-attach-def ipam range OCP worker nncp range ctlplane n/a 192.168.122.0/24 192.168.122.100 - 192.168.122.250 192.168.122.80 - 192.168.122.90 192.168.122.30 - 192.168.122.70 192.168.122.10 - 192.168.122.20 external n/a 10.0.0.0/24 10.0.0.100 - 10.0.0.250 n/a n/a internalapi 20 172.17.0.0/24 172.17.0.100 - 172.17.0.250 172.17.0.80 - 172.17.0.90 172.17.0.30 - 172.17.0.70 172.17.0.10 - 172.17.0.20 storage 21 172.18.0.0/24 172.18.0.100 - 172.18.0.250 n/a 172.18.0.30 - 172.18.0.70 172.18.0.10 - 172.18.0.20 tenant 22 172.19.0.0/24 172.19.0.100 - 172.19.0.250 n/a 172.19.0.30 - 172.19.0.70 172.19.0.10 - 172.19.0.20 storageMgmt 23 172.20.0.0/24 172.20.0.100 - 172.20.0.250 n/a 172.20.0.30 - 172.20.0.70 172.20.0.10 - 172.20.0.20 3.2. 
Preparing RHOCP for RHOSO networks The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. By default, the public service endpoints are exposed as RHOCP routes. Note The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual stack IPv4/6 is not available. For information about how to configure IPv6 addresses, see the following resources in the RHOCP Networking guide: Installing the Kubernetes NMState Operator Configuring MetalLB address pools 3.2.1. Preparing RHOCP with isolated network interfaces Create a NodeNetworkConfigurationPolicy ( nncp ) CR to configure the interfaces for each isolated network on each worker node in RHOCP cluster. Procedure Create a NodeNetworkConfigurationPolicy ( nncp ) CR file on your workstation, for example, openstack-nncp.yaml . Retrieve the names of the worker nodes in the RHOCP cluster: Discover the network configuration: Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1 . Repeat this step for each worker node. In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Default Red Hat OpenStack Services on OpenShift networks . In the following example, the nncp CR configures the enp6s0 interface for worker node 1, osp-enp6s0-worker-1 , to use VLAN interfaces with IPv4 addresses for network isolation: Create the nncp CR in the cluster: Verify that the nncp CR is created: 3.2.2. Attaching service pods to the isolated networks Create a NetworkAttachmentDefinition ( net-attach-def ) custom resource (CR) for each isolated network to attach the service pods to the networks. Procedure Create a NetworkAttachmentDefinition ( net-attach-def ) CR file on your workstation, for example, openstack-net-attach-def.yaml . In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following examples create a NetworkAttachmentDefinition resource for the internalapi , storage , ctlplane , and tenant networks of type macvlan : 1 The namespace where the services are deployed. 2 The node interface name associated with the network, as defined in the nncp CR. 3 The whereabouts CNI IPAM plugin to assign IPs to the created pods from the range .30 - .70 . 4 The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange . Create the NetworkAttachmentDefinition CR in the cluster: Verify that the NetworkAttachmentDefinition CR is created: 3.2.3. Preparing RHOCP for RHOSO network VIPS The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network. Procedure Create an IPAddressPool CR file on your workstation, for example, openstack-ipaddresspools.yaml . 
In the IPAddressPool CR file, configure an IPAddressPool resource on the isolated network to specify the IP address ranges over which MetalLB has authority: 1 The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange . For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide. Create the IPAddressPool CR in the cluster: Verify that the IPAddressPool CR is created: Create a L2Advertisement CR file on your workstation, for example, openstack-l2advertisement.yaml . In the L2Advertisement CR file, configure L2Advertisement CRs to define which node advertises a service to the local network. Create one L2Advertisement resource for each network. In the following example, each L2Advertisement CR specifies that the VIPs requested from the network address pools are announced on the interface that is attached to the VLAN: 1 The interface where the VIPs requested from the VLAN address pool are announced. For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide. Create the L2Advertisement CRs in the cluster: Verify that the L2Advertisement CRs are created: If your cluster has OVNKubernetes as the network back end, then you must enable global forwarding so that MetalLB can work on a secondary network interface. Check the network back end used by your cluster: If the back end is OVNKubernetes, then run the following command to enable global IP forwarding: 3.3. Creating the data plane network To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as InternalAPI , Storage , and External . Each network definition must include the IP address assignment. Tip Use the following commands to view the NetConfig CRD definition and specification schema: Procedure Create a file named openstack_netconfig.yaml on your workstation. Add the following configuration to openstack_netconfig.yaml to create the NetConfig CR: In the openstack_netconfig.yaml file, define the topology for each data plane network. To use the default Red Hat OpenStack Services on OpenShift (RHOSO) networks, you must define a specification for each network. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks . The following example creates isolated networks for the data plane: 1 The name of the network, for example, CtlPlane . 2 The IPv4 subnet specification. 3 The name of the subnet, for example, subnet1 . 4 The NetConfig allocationRange . The allocationRange must not overlap with the MetalLB IPAddressPool range and the IP address pool range. 5 Optional: List of IP addresses from the allocation range that must not be used by data plane nodes. 6 The network VLAN. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks . Save the openstack_netconfig.yaml definition file. Create the data plane network: To verify that the data plane network is created, view the openstacknetconfig resource: If you see errors, check the underlying network-attach-definition and node network configuration policies:
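In addition to checking those resources after the fact, a server-side dry run can catch schema or namespace mistakes in the NetConfig definition before the CR is created. The following is a minimal sketch that assumes the openstack_netconfig.yaml file and openstack namespace used in this procedure; the grep filter is only an illustrative way to eyeball the configured ranges against the MetalLB and whereabouts pools:
# Validate the NetConfig CR against the API server without creating it.
oc create -f openstack_netconfig.yaml -n openstack --dry-run=server
# After creation, list the allocation ranges to check them against the MetalLB IPAddressPool and whereabouts ranges.
oc get netconfig/openstacknetconfig -n openstack -o yaml | grep -A3 allocationRanges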
|
[
"oc get nodes -l node-role.kubernetes.io/worker -o jsonpath=\"{.items[*].metadata.name}\"",
"oc get nns/<worker_node> -o yaml | more",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: osp-enp6s0-worker-1 spec: desiredState: interfaces: - description: internalapi vlan interface ipv4: address: - ip: 172.17.0.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false name: internalapi state: up type: vlan vlan: base-iface: enp6s0 id: 20 reorder-headers: true - description: storage vlan interface ipv4: address: - ip: 172.18.0.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false name: storage state: up type: vlan vlan: base-iface: enp6s0 id: 21 reorder-headers: true - description: tenant vlan interface ipv4: address: - ip: 172.19.0.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false name: tenant state: up type: vlan vlan: base-iface: enp6s0 id: 22 reorder-headers: true - description: Configuring enp6s0 ipv4: address: - ip: 192.168.122.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false mtu: 1500 name: enp6s0 state: up type: ethernet nodeSelector: kubernetes.io/hostname: worker-1 node-role.kubernetes.io/worker: \"\"",
"oc apply -f openstack-nncp.yaml",
"oc get nncp -w NAME STATUS REASON osp-enp6s0-worker-1 Progressing ConfigurationProgressing osp-enp6s0-worker-1 Progressing ConfigurationProgressing osp-enp6s0-worker-1 Available SuccessfullyConfigured",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: internalapi namespace: openstack 1 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"internalapi\", \"type\": \"macvlan\", \"master\": \"internalapi\", 2 \"ipam\": { 3 \"type\": \"whereabouts\", \"range\": \"172.17.0.0/24\", \"range_start\": \"172.17.0.30\", 4 \"range_end\": \"172.17.0.70\" } } --- apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: ctlplane namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"ctlplane\", \"type\": \"macvlan\", \"master\": \"enp6s0\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.122.0/24\", \"range_start\": \"192.168.122.30\", \"range_end\": \"192.168.122.70\" } } --- apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: storage namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"storage\", \"type\": \"macvlan\", \"master\": \"storage\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"172.18.0.0/24\", \"range_start\": \"172.18.0.30\", \"range_end\": \"172.18.0.70\" } } --- apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: tenant namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"tenant\", \"type\": \"macvlan\", \"master\": \"tenant\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"172.19.0.0/24\", \"range_start\": \"172.19.0.30\", \"range_end\": \"172.19.0.70\" } }",
"oc apply -f openstack-net-attach-def.yaml",
"oc get net-attach-def -n openstack",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: internalapi namespace: metallb-system spec: addresses: - 172.17.0.80-172.17.0.90 1 autoAssign: true avoidBuggyIPs: false --- apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: ctlplane spec: addresses: - 192.168.122.80-192.168.122.90 autoAssign: true avoidBuggyIPs: false --- apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: storage spec: addresses: - 172.18.0.80-172.18.0.90 autoAssign: true avoidBuggyIPs: false --- apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: tenant spec: addresses: - 172.19.0.80-172.19.0.90 autoAssign: true avoidBuggyIPs: false",
"oc apply -f openstack-ipaddresspools.yaml",
"oc describe -n metallb-system IPAddressPool",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: internalapi namespace: metallb-system spec: ipAddressPools: - internalapi interfaces: - internalapi 1 --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: ctlplane namespace: metallb-system spec: ipAddressPools: - ctlplane interfaces: - enp6s0 --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: storage namespace: metallb-system spec: ipAddressPools: - storage interfaces: - storage --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: tenant namespace: metallb-system spec: ipAddressPools: - tenant interfaces: - tenant",
"oc apply -f openstack-l2advertisement.yaml",
"oc get -n metallb-system L2Advertisement NAME IPADDRESSPOOLS IPADDRESSPOOL SELECTORS INTERFACES ctlplane [\"ctlplane\"] [\"enp6s0\"] internalapi [\"internalapi\"] [\"internalapi\"] storage [\"storage\"] [\"storage\"] tenant [\"tenant\"] [\"tenant\"]",
"oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'",
"oc patch network.operator cluster -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"gatewayConfig\":{\"ipForwarding\": \"Global\"}}}}}' --type=merge",
"oc describe crd netconfig oc explain netconfig.spec",
"apiVersion: network.openstack.org/v1beta1 kind: NetConfig metadata: name: openstacknetconfig namespace: openstack",
"spec: networks: - name: CtlPlane 1 dnsDomain: ctlplane.example.com subnets: 2 - name: subnet1 3 allocationRanges: 4 - end: 192.168.122.120 start: 192.168.122.100 - end: 192.168.122.200 start: 192.168.122.150 cidr: 192.168.122.0/24 gateway: 192.168.122.1 - name: InternalApi dnsDomain: internalapi.example.com subnets: - name: subnet1 allocationRanges: - end: 172.17.0.250 start: 172.17.0.100 excludeAddresses: 5 - 172.17.0.10 - 172.17.0.12 cidr: 172.17.0.0/24 vlan: 20 6 - name: External dnsDomain: external.example.com subnets: - name: subnet1 allocationRanges: - end: 10.0.0.250 start: 10.0.0.100 cidr: 10.0.0.0/24 gateway: 10.0.0.1 - name: Storage dnsDomain: storage.example.com subnets: - name: subnet1 allocationRanges: - end: 172.18.0.250 start: 172.18.0.100 cidr: 172.18.0.0/24 vlan: 21 - name: Tenant dnsDomain: tenant.example.com subnets: - name: subnet1 allocationRanges: - end: 172.19.0.250 start: 172.19.0.100 cidr: 172.19.0.0/24 vlan: 22",
"oc create -f openstack_netconfig.yaml -n openstack",
"oc get netconfig/openstacknetconfig -n openstack",
"oc get network-attachment-definitions -n openstack oc get nncp"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_red_hat_openstack_services_on_openshift/assembly_preparing-rhoso-networks_preparing
|
4.15. Using MACsec
|
4.15. Using MACsec Media Access Control Security ( MACsec , IEEE 802.1AE) encrypts and authenticates all traffic in LANs with the GCM-AES-128 algorithm. MACsec can protect not only IP but also Address Resolution Protocol (ARP), Neighbor Discovery (ND), or DHCP . While IPsec operates on the network layer (layer 3) and SSL or TLS on the application layer (layer 7), MACsec operates in the data link layer (layer 2). Combine MACsec with security protocols for other networking layers to take advantage of different security features that these standards provide. See the MACsec: a different solution to encrypt network traffic article for more information about the architecture of a MACsec network, use case scenarios, and configuration examples. For examples how to configure MACsec using wpa_supplicant and NetworkManager , see the Red Hat Enterprise Linux 7 Networking Guide .
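As a rough sketch of what a MACsec link looks like at this layer, the following iproute2 commands create a MACsec interface with statically configured keys; the parent interface, peer MAC address, and hexadecimal keys are placeholder values, and a production RHEL deployment would normally let wpa_supplicant or NetworkManager negotiate the keys over MKA as described in the references above:
# Create a MACsec device on top of the physical interface, with encryption enabled.
ip link add link eth0 macsec0 type macsec encrypt on
# Configure the transmit secure association with a static 128-bit key (key id 01).
ip macsec add macsec0 tx sa 0 pn 1 on key 01 11111111111111111111111111111111
# Add the peer as a receive channel and give its secure association the peer's key (key id 02).
ip macsec add macsec0 rx port 1 address 52:54:00:aa:bb:cc
ip macsec add macsec0 rx port 1 address 52:54:00:aa:bb:cc sa 0 pn 1 on key 02 22222222222222222222222222222222
ip link set dev macsec0 up
# Inspect the configured secure channels and associations.
ip macsec show
The same associations must be configured symmetrically on the peer, with the key IDs and MAC address reversed.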
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-using-macsec
|
Chapter 1. Hosted control planes release notes
|
Chapter 1. Hosted control planes release notes Release notes contain information about new and deprecated features, changes, and known issues. 1.1. Hosted control planes release notes for OpenShift Container Platform 4.17 With this release, hosted control planes for OpenShift Container Platform 4.17 is available. Hosted control planes for OpenShift Container Platform 4.17 supports the multicluster engine for Kubernetes Operator version 2.7. 1.1.1. New features and enhancements This release adds improvements related to the following concepts: 1.1.1.1. Custom taints and tolerations (Technology Preview) For hosted control planes on OpenShift Virtualization, you can now apply tolerations to hosted control plane pods by using the hcp CLI -tolerations argument or by using the hc.Spec.Tolerations API field. This feature is available as a Technology Preview feature. For more information, see Custom taints and tolerations . 1.1.1.2. Support for NVIDIA GPU devices on OpenShift Virtualization (Technology Preview) For hosted control planes on OpenShift Virtualization, you can attach one or more NVIDIA graphics processing unit (GPU) devices to node pools. This feature is available as a Technology Preview feature. For more information, see Attaching NVIDIA GPU devices by using the hcp CLI and Attaching NVIDIA GPU devices by using the NodePool resource . 1.1.1.3. Support for tenancy on AWS When you create a hosted cluster on AWS, you can indicate whether the EC2 instance should run on shared or single-tenant hardware. For more information, see Creating a hosted cluster on AWS . 1.1.1.4. Support for OpenShift Container Platform versions in hosted clusters You can deploy a range of supported OpenShift Container Platform versions in a hosted cluster. For more information, see Supported OpenShift Container Platform versions in a hosted cluster . 1.1.1.5. Hosted control planes on OpenShift Virtualization in a disconnected environment is Generally Available In this release, hosted control planes on OpenShift Virtualization in a disconnected environment is Generally Available. For more information, see Deploying hosted control planes on OpenShift Virtualization in a disconnected environment . 1.1.1.6. Hosted control planes for an ARM64 OpenShift Container Platform cluster on AWS is Generally Available In this release, hosted control planes for an ARM64 OpenShift Container Platform cluster on AWS is Generally Available. For more information, see Running hosted clusters on an ARM64 architecture . 1.1.1.7. Hosted control planes on IBM Z is Generally Available In this release, hosted control planes on IBM Z is Generally Available. For more information, see Deploying hosted control planes on IBM Z . 1.1.1.8. Hosted control planes on IBM Power is Generally Available In this release, hosted control planes on IBM Power is Generally Available. For more information, see Deploying hosted control planes on IBM Power . 1.1.2. Bug fixes Previously, when a hosted cluster proxy was configured and it used an identity provider (IDP) that had an HTTP or HTTPS endpoint, the hostname of the IDP was unresolved before sending it through the proxy. Consequently, hostnames that could only be resolved by the data plane failed to resolve for IDPs. With this update, a DNS lookup is performed before sending IDP traffic through the konnectivity tunnel. As a result, IDPs with hostnames that can only be resolved by the data plane can be verified by the Control Plane Operator.
( OCPBUGS-41371 ) Previously, when the hosted cluster controllerAvailabilityPolicy was set to SingleReplica , podAntiAffinity on networking components blocked the availability of the components. With this release, the issue is resolved. ( OCPBUGS-39313 ) Previously, the AdditionalTrustedCA that was specified in the hosted cluster image configuration was not reconciled into the openshift-config namespace, as expected by the image-registry-operator , and the component did not become available. With this release, the issue is resolved. ( OCPBUGS-39225 ) Previously, Red Hat HyperShift periodic conformance jobs failed because of changes to the core operating system. These failed jobs caused the OpenShift API deployment to fail. With this release, an update recursively copies individual trusted certificate authority (CA) certificates instead of copying a single file, so that the periodic conformance jobs succeed and the OpenShift API runs as expected. ( OCPBUGS-38941 ) Previously, the Konnectivity proxy agent in a hosted cluster always sent all TCP traffic through an HTTP/S proxy. It also ignored host names in the NO_PROXY configuration because it only received resolved IP addresses in its traffic. As a consequence, traffic that was not meant to be proxied, such as LDAP traffic, was proxied regardless of configuration. With this release, proxying is completed at the source (control plane) and the Konnectivity agent proxying configuration is removed. As a result, traffic that is not meant to be proxied, such as LDAP traffic, is not proxied anymore. The NO_PROXY configuration that includes host names is honored. ( OCPBUGS-38637 ) Previously, the azure-disk-csi-driver-controller image was not getting appropriate override values when using registryOverride . This was intentional so as to avoid propagating the values to the azure-disk-csi-driver data plane images. With this update, the issue is resolved by adding a separate image override value. As a result, the azure-disk-csi-driver-controller can be used with registryOverride and no longer affects azure-disk-csi-driver data plane images. ( OCPBUGS-38183 ) Previously, the AWS cloud controller manager within a hosted control plane that was running on a proxied management cluster would not use the proxy for cloud API communication. With this release, the issue is fixed. ( OCPBUGS-37832 ) Previously, proxying for Operators that run in the control plane of a hosted cluster was performed through proxy settings on the Konnectivity agent pod that runs in the data plane. It was not possible to distinguish if proxying was needed based on application protocol. For parity with OpenShift Container Platform, IDP communication via HTTPS or HTTP should be proxied, but LDAP communication should not be proxied. This type of proxying also ignores NO_PROXY entries that rely on host names because by the time traffic reaches the Konnectivity agent, only the destination IP address is available. With this release, in hosted clusters, proxy is invoked in the control plane through konnectivity-https-proxy and konnectivity-socks5-proxy , and proxying traffic is stopped from the Konnectivity agent. As a result, traffic that is destined for LDAP servers is no longer proxied. Other HTTP or HTTPS traffic is proxied correctly. The NO_PROXY setting is honored when you specify hostnames. ( OCPBUGS-37052 ) Previously, proxying for IDP communication occurred in the Konnectivity agent. By the time traffic reached Konnectivity, its protocol and hostname were no longer available.
As a consequence, proxying was not done correctly for the OAUTH server pod. It did not distinguish between protocols that require proxying ( http/s ) and protocols that do not ( ldap:// ). In addition, it did not honor the no_proxy variable that is configured in the HostedCluster.spec.configuration.proxy spec. With this release, you can configure the proxy on the Konnectivity sidecar of the OAUTH server so that traffic is routed appropriately, honoring your no_proxy settings. As a result, the OAUTH server can communicate properly with identity providers when a proxy is configured for the hosted cluster. ( OCPBUGS-36932 ) Previously, the Hosted Cluster Config Operator (HCCO) did not delete the ImageDigestMirrorSet CR (IDMS) after you removed the ImageContentSources field from the HostedCluster object. As a consequence, the IDMS persisted in the HostedCluster object when it should not. With this release, the HCCO manages the deletion of IDMS resources from the HostedCluster object. ( OCPBUGS-34820 ) Previously, deploying a hostedCluster in a disconnected environment required setting the hypershift.openshift.io/control-plane-operator-image annotation. With this update, the annotation is no longer needed. Additionally, the metadata inspector works as expected during the hosted Operator reconciliation, and OverrideImages is populated as expected. ( OCPBUGS-34734 ) Previously, hosted clusters on AWS leveraged their VPC's primary CIDR range to generate security group rules on the data plane. As a consequence, if you installed a hosted cluster into an AWS VPC with multiple CIDR ranges, the generated security group rules could be insufficient. With this update, security group rules are generated based on the provided machine CIDR range to resolve this issue. ( OCPBUGS-34274 ) Previously, the OpenShift Cluster Manager container did not have the right TLS certificates. As a consequence, you could not use image streams in disconnected deployments. With this release, the TLS certificates are added as projected volumes to resolve this issue. ( OCPBUGS-31446 ) Previously, the bulk destroy option in the multicluster engine for Kubernetes Operator console for OpenShift Virtualization did not destroy a hosted cluster. With this release, this issue is resolved. ( ACM-10165 ) 1.1.3. Known issues If the annotation and the ManagedCluster resource name do not match, the multicluster engine for Kubernetes Operator console displays the cluster as Pending import . The cluster cannot be used by the multicluster engine Operator. The same issue happens when there is no annotation and the ManagedCluster name does not match the Infra-ID value of the HostedCluster resource. When you use the multicluster engine for Kubernetes Operator console to add a new node pool to an existing hosted cluster, the same version of OpenShift Container Platform might appear more than once in the list of options. You can select any instance in the list for the version that you want. When a node pool is scaled down to 0 workers, the list of hosts in the console still shows nodes in a Ready state. You can verify the number of nodes in two ways: In the console, go to the node pool and verify that it has 0 nodes. 
On the command-line interface, run the following commands: Verify that 0 nodes are in the node pool by running the following command: $ oc get nodepool -A Verify that 0 nodes are in the cluster by running the following command: $ oc get nodes --kubeconfig Verify that 0 agents are reported as bound to the cluster by running the following command: $ oc get agents -A When you create a hosted cluster in an environment that uses the dual-stack network, you might encounter the following DNS-related issues: CrashLoopBackOff state in the service-ca-operator pod: When the pod tries to reach the Kubernetes API server through the hosted control plane, the pod cannot reach the server because the data plane proxy in the kube-system namespace cannot resolve the request. This issue occurs because in the HAProxy setup, the front end uses an IP address and the back end uses a DNS name that the pod cannot resolve. Pods stuck in ContainerCreating state: This issue occurs because the openshift-service-ca-operator cannot generate the metrics-tls secret that the DNS pods need for DNS resolution. As a result, the pods cannot resolve the Kubernetes API server. To resolve these issues, configure the DNS server settings for a dual stack network. On the Agent platform, the hosted control planes feature periodically rotates the token that the Agent uses to pull ignition. As a result, if you have an Agent resource that was created some time ago, it might fail to pull ignition. As a workaround, in the Agent specification, delete the secret of the IgnitionEndpointTokenReference property then add or modify any label on the Agent resource. The system re-creates the secret with the new token. If you created a hosted cluster in the same namespace as its managed cluster, detaching the managed hosted cluster deletes everything in the managed cluster namespace including the hosted cluster. The following situations can create a hosted cluster in the same namespace as its managed cluster: You created a hosted cluster on the Agent platform through the multicluster engine for Kubernetes Operator console by using the default hosted cluster cluster namespace. You created a hosted cluster through the command-line interface or API by specifying the hosted cluster namespace to be the same as the hosted cluster name. 1.1.4. Generally Available and Technology Preview features Features which are Generally Available (GA) are fully supported and are suitable for production use. Technology Preview (TP) features are experimental features and are not intended for production use. For more information about TP features, see the Technology Preview scope of support on the Red Hat Customer Portal . Important For IBM Power and IBM Z, you must run the control plane on machine types based on 64-bit x86 architecture, and node pools on IBM Power or IBM Z. See the following table to know about hosted control planes GA and TP features: Table 1.1.
Hosted control planes GA and TP tracker
Feature: 4.15 / 4.16 / 4.17
Hosted control planes for OpenShift Container Platform on Amazon Web Services (AWS): Technology Preview / Generally Available / Generally Available
Hosted control planes for OpenShift Container Platform on bare metal: General Availability / General Availability / General Availability
Hosted control planes for OpenShift Container Platform on OpenShift Virtualization: Generally Available / Generally Available / Generally Available
Hosted control planes for OpenShift Container Platform using non-bare-metal agent machines: Technology Preview / Technology Preview / Technology Preview
Hosted control planes for an ARM64 OpenShift Container Platform cluster on Amazon Web Services: Technology Preview / Technology Preview / Generally Available
Hosted control planes for OpenShift Container Platform on IBM Power: Technology Preview / Technology Preview / Generally Available
Hosted control planes for OpenShift Container Platform on IBM Z: Technology Preview / Technology Preview / Generally Available
Hosted control planes for OpenShift Container Platform on RHOSP: Not Available / Not Available / Developer Preview
|
[
"oc get nodepool -A",
"oc get nodes --kubeconfig",
"oc get agents -A"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/hosted_control_planes/hosted-control-planes-release-notes-1
|
Chapter 40. Developing Asynchronous Applications
|
Chapter 40. Developing Asynchronous Applications Abstract JAX-WS provides an easy mechanism for accessing services asynchronously. The SEI can specify additional methods that can be used to access a service asynchronously. The Apache CXF code generators generate the extra methods for you. You simply add the business logic. 40.1. Types of Asynchronous Invocation In addition to the usual synchronous mode of invocation, Apache CXF supports two forms of asynchronous invocation: Polling approach - To invoke the remote operation using the polling approach, you call a method that has no output parameters, but returns a javax.xml.ws.Response object. The Response object (which inherits from the java.util.concurrent.Future interface) can be polled to check whether or not a response message has arrived. Callback approach - To invoke the remote operation using the callback approach, you call a method that takes a reference to a callback object (of javax.xml.ws.AsyncHandler type) as one of its parameters. When the response message arrives at the client, the runtime calls back on the AsyncHandler object, and gives it the contents of the response message. 40.2. WSDL for Asynchronous Examples Example 40.1, "WSDL Contract for Asynchronous Example" shows the WSDL contract that is used for the asynchronous examples. The contract defines a single interface, GreeterAsync, which contains a single operation, greetMeSometime. Example 40.1. WSDL Contract for Asynchronous Example 40.3. Generating the Stub Code Overview The asynchronous style of invocation requires extra stub code for the dedicated asynchronous methods defined on the SEI. This special stub code is not generated by default. To switch on the asynchronous feature and generate the requisite stub code, you must use the mapping customization feature from the WSDL 2.0 specification. Customization enables you to modify the way the Maven code generation plug-in generates stub code. In particular, it enables you to modify the WSDL-to-Java mapping and to switch on certain features. Here, customization is used to switch on the asynchronous invocation feature. Customizations are specified using a binding declaration, which you define using a jaxws:bindings tag (where the jaxws prefix is tied to the http://java.sun.com/xml/ns/jaxws namespace). There are two ways of specifying a binding declaration: External Binding Declaration When using an external binding declaration the jaxws:bindings element is defined in a file separate from the WSDL contract. You specify the location of the binding declaration file to the code generator when you generate the stub code. Embedded Binding Declaration When using an embedded binding declaration you embed the jaxws:bindings element directly in a WSDL contract, treating it as a WSDL extension. In this case, the settings in jaxws:bindings apply only to the immediate parent element. Using an external binding declaration The template for a binding declaration file that switches on asynchronous invocations is shown in Example 40.2, "Template for an Asynchronous Binding Declaration" . Example 40.2. Template for an Asynchronous Binding Declaration Where AffectedWSDL specifies the URL of the WSDL contract that is affected by this binding declaration. The AffectedNode is an XPath value that specifies which node (or nodes) from the WSDL contract are affected by this binding declaration. You can set AffectedNode to wsdl:definitions , if you want the entire WSDL contract to be affected.
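Since the template itself is not reproduced here, the following shell sketch shows what such an external binding file can look like and how it is passed to the code generator; the file name, the wsdl:definitions target node, and the standalone wsdl2java invocation with the -d and -b options are illustrative assumptions rather than the chapter's own example (the POM-based approach that this chapter actually uses is shown in Example 40.3):
# Write a binding declaration that switches on asynchronous mapping for the whole contract.
cat > async_binding.xml <<'EOF'
<bindings xmlns="http://java.sun.com/xml/ns/jaxws"
          xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
          wsdlLocation="GreeterAsync.wsdl">
  <bindings node="wsdl:definitions">
    <enableAsyncMapping>true</enableAsyncMapping>
  </bindings>
</bindings>
EOF
# Generate the stub code, pointing the code generator at the binding file.
wsdl2java -d src/main/java -b async_binding.xml GreeterAsync.wsdl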
The jaxws:enableAsyncMapping element is set to true to enable the asynchronous invocation feature. For example, if you want to generate asynchronous methods only for the GreeterAsync interface, you can specify <bindings node="wsdl:definitions/wsdl:portType[@name='GreeterAsync']"> in the preceding binding declaration. Assuming that the binding declaration is stored in a file, async_binding.xml , you would set up your POM as shown in Example 40.3, "Consumer Code Generation" . Example 40.3. Consumer Code Generation The -b option tells the code generator where to locate the external binding file. For more information on the code generator see Section 44.2, "cxf-codegen-plugin" . Using an embedded binding declaration You can also embed the binding customization directly into the WSDL document defining the service by placing the jaxws:bindings element and its associated jaxws:enableAsyncMapping child directly into the WSDL. You also must add a namespace declaration for the jaxws prefix. Example 40.4, "WSDL with Embedded Binding Declaration for Asynchronous Mapping" shows a WSDL file with an embedded binding declaration that activates the asynchronous mapping for an operation. Example 40.4. WSDL with Embedded Binding Declaration for Asynchronous Mapping When embedding the binding declaration into the WSDL document you can control the scope affected by the declaration by changing where you place the declaration. When the declaration is placed as a child of the wsdl:definitions element the code generator creates asynchronous methods for all of the operations defined in the WSDL document. If it is placed as a child of a wsdl:portType element the code generator creates asynchronous methods for all of the operations defined in the interface. If it is placed as a child of a wsdl:operation element the code generator creates asynchronous methods for only that operation. It is not necessary to pass any special options to the code generator when using embedded declarations. The code generator will recognize them and act accordingly. Generated interface After generating the stub code in this way, the GreeterAsync SEI (in the file GreeterAsync.java ) is defined as shown in Example 40.5, "Service Endpoint Interface with Methods for Asynchronous Invocations" . Example 40.5. Service Endpoint Interface with Methods for Asynchronous Invocations In addition to the usual synchronous method, greetMeSometime() , two asynchronous methods are also generated for the greetMeSometime operation: Callback approach public Future<?> greetMeSometimeAsync(java.lang.String requestType, AsyncHandler<GreetMeSometimeResponse> asyncHandler) Polling approach public Response<GreetMeSometimeResponse> greetMeSometimeAsync(java.lang.String requestType) 40.4. Implementing an Asynchronous Client with the Polling Approach Overview The polling approach is the more straightforward of the two approaches to developing an asynchronous application. The client invokes the asynchronous method called OperationName Async() and is returned a Response<T> object that it polls for a response. What the client does while it is waiting for a response depends on the requirements of the application. There are two basic patterns for handling the polling: Non-blocking polling - You periodically check to see if the result is ready by calling the non-blocking Response<T>.isDone() method. If the result is ready, the client processes it. If it is not, the client continues doing other things.
Blocking polling - You call Response<T>.get() right away, and block until the response arrives (optionally specifying a timeout). Using the non-blocking pattern Example 40.6, "Non-Blocking Polling Approach for an Asynchronous Operation Call" illustrates using non-blocking polling to make an asynchronous invocation on the greetMeSometime operation defined in Example 40.1, "WSDL Contract for Asynchronous Example" . The client invokes the asynchronous operation and periodically checks to see if the result is returned. Example 40.6. Non-Blocking Polling Approach for an Asynchronous Operation Call The code in Example 40.6, "Non-Blocking Polling Approach for an Asynchronous Operation Call" does the following: Invokes the greetMeSometimeAsync() method on the proxy. The method call returns the Response<GreetMeSometimeResponse> object to the client immediately. The Apache CXF runtime handles the details of receiving the reply from the remote endpoint and populating the Response<GreetMeSometimeResponse> object. Note The runtime transmits the request to the remote endpoint's greetMeSometime() method and handles the details of the asynchronous nature of the call transparently. The endpoint, and therefore the service implementation, never worries about the details of how the client intends to wait for a response. Checks to see if a response has arrived by calling the isDone() method of the returned Response object. If the response has not arrived, the client continues working before checking again. When the response arrives, the client retrieves it from the Response object using the get() method. Using the blocking pattern When using the blocking polling pattern, the Response object's isDone() method is never called. Instead, the Response object's get() method is called immediately after invoking the remote operation. The get() method blocks until the response is available. You can also pass a timeout limit to the get() method. Example 40.7, "Blocking Polling Approach for an Asynchronous Operation Call" shows a client that uses blocking polling. Example 40.7. Blocking Polling Approach for an Asynchronous Operation Call 40.5. Implementing an Asynchronous Client with the Callback Approach Overview An alternative approach to making an asynchronous operation invocation is to implement a callback class. You then call the asynchronous remote method that takes the callback object as a parameter. The runtime returns the response to the callback object. To implement an application that uses callbacks, do the following: Create a callback class that implements the AsyncHandler interface. Note Your callback object can perform any amount of response processing required by your application. Make remote invocations using the operationName Async() method that takes the callback object as a parameter and returns a Future<?> object. If your client requires access to the response data, you can poll the returned Future<?> object's isDone() method to see if the remote endpoint has sent the response. If the callback object does all of the response processing, it is not necessary to check if the response has arrived. Implementing the callback The callback class must implement the javax.xml.ws.AsyncHandler interface. The interface defines a single method: handleResponse(Response<T> res) The Apache CXF runtime calls the handleResponse() method to notify the client that the response has arrived. Example 40.8, "The javax.xml.ws.AsyncHandler Interface" shows an outline of the AsyncHandler interface that you must implement. Example 40.8.
The javax.xml.ws.AsyncHandler Interface Example 40.9, "Callback Implementation Class" shows a callback class for the greetMeSometime operation defined in Example 40.1, "WSDL Contract for Asynchronous Example" . Example 40.9. Callback Implementation Class The callback implementation shown in Example 40.9, "Callback Implementation Class" does the following: Defines a member variable, reply , that holds the response returned from the remote endpoint. Implements handleResponse() . This implementation simply extracts the response and assigns it to the member variable reply . Implements an added method called getResponse() . This method is a convenience method that extracts the data from reply and returns it. Implementing the consumer Example 40.10, "Callback Approach for an Asynchronous Operation Call" illustrates a client that uses the callback approach to make an asynchronous call to the greetMeSometime operation defined in Example 40.1, "WSDL Contract for Asynchronous Example" . Example 40.10. Callback Approach for an Asynchronous Operation Call The code in Example 40.10, "Callback Approach for an Asynchronous Operation Call" does the following: Instantiates a callback object. Invokes, on the proxy, the greetMeSometimeAsync() method that takes the callback object. The method call returns the Future<?> object to the client immediately. The Apache CXF runtime handles the details of receiving the reply from the remote endpoint, invoking the callback object's handleResponse() method, and populating the Response<GreetMeSometimeResponse> object. Note The runtime transmits the request to the remote endpoint's greetMeSometime() method and handles the details of the asynchronous nature of the call without the remote endpoint's knowledge. The endpoint, and therefore the service implementation, does not need to worry about the details of how the client intends to wait for a response. Uses the returned Future<?> object's isDone() method to check if the response has arrived from the remote endpoint. Invokes the callback object's getResponse() method to get the response data. 40.6. Catching Exceptions Returned from a Remote Service Overview Consumers making asynchronous requests will not receive the same exceptions returned when they make synchronous requests. Any exceptions returned to the consumer asynchronously are wrapped in an ExecutionException exception. The actual exception thrown by the service is stored in the ExecutionException exception's cause field. Catching the exception Exceptions generated by a remote service are thrown locally by the method that passes the response to the consumer's business logic. When the consumer makes a synchronous request, the method making the remote invocation throws the exception. When the consumer makes an asynchronous request, the Response<T> object's get() method throws the exception. The consumer will not discover that an error was encountered in processing the request until it attempts to retrieve the response message. Unlike the methods generated by the JAX-WS framework, the Response<T> object's get() method throws neither user-modeled exceptions nor generic JAX-WS exceptions. Instead, it throws a java.util.concurrent.ExecutionException exception. Getting the exception details The framework stores the exception returned from the remote service in the ExecutionException exception's cause field. The details about the remote exception are extracted by getting the value of the cause field and examining the stored exception.
The stored exception can be any user-defined exception or one of the generic JAX-WS exceptions. Example Example 40.11, "Catching an Exception using the Polling Approach" shows an example of catching an exception using the polling approach. Example 40.11. Catching an Exception using the Polling Approach The code in Example 40.11, "Catching an Exception using the Polling Approach" does the following: Wraps the call to the Response<T> object's get() method in a try/catch block. Catches an ExecutionException exception. Extracts the cause field from the exception. If the consumer were using the callback approach, the code used to catch the exception would be placed in the callback object, where the service's response is extracted.
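The following is a minimal sketch of how such a callback might surface a remote fault. It assumes the GreetMeSometimeResponse type generated from Example 40.1; the class name FaultAwareHandler and the getReply() and getFault() accessors are illustrative and are not part of the original examples or the generated code.

// Minimal sketch: a callback that catches the wrapped remote exception.
// Assumes the types generated from Example 40.1; names are illustrative.
import java.util.concurrent.ExecutionException;
import javax.xml.ws.AsyncHandler;
import javax.xml.ws.Response;
import org.apache.hello_world_async_soap_http.types.GreetMeSometimeResponse;

public class FaultAwareHandler implements AsyncHandler<GreetMeSometimeResponse> {
    private GreetMeSometimeResponse reply;
    private Throwable fault;

    public void handleResponse(Response<GreetMeSometimeResponse> response) {
        try {
            // get() throws ExecutionException if the remote service returned a fault
            reply = response.get();
        } catch (ExecutionException ee) {
            // the exception thrown by the remote service is stored in the cause field
            fault = ee.getCause();
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }

    public GreetMeSometimeResponse getReply() { return reply; }
    public Throwable getFault() { return fault; }
}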
|
[
"<?xml version=\"1.0\" encoding=\"UTF-8\"?><wsdl:definitions xmlns=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:tns=\"http://apache.org/hello_world_async_soap_http\" xmlns:x1=\"http://apache.org/hello_world_async_soap_http/types\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" targetNamespace=\"http://apache.org/hello_world_async_soap_http\" name=\"HelloWorld\"> <wsdl:types> <schema targetNamespace=\"http://apache.org/hello_world_async_soap_http/types\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:x1=\"http://apache.org/hello_world_async_soap_http/types\" elementFormDefault=\"qualified\"> <element name=\"greetMeSometime\"> <complexType> <sequence> <element name=\"requestType\" type=\"xsd:string\"/> </sequence> </complexType> </element> <element name=\"greetMeSometimeResponse\"> <complexType> <sequence> <element name=\"responseType\" type=\"xsd:string\"/> </sequence> </complexType> </element> </schema> </wsdl:types> <wsdl:message name=\"greetMeSometimeRequest\"> <wsdl:part name=\"in\" element=\"x1:greetMeSometime\"/> </wsdl:message> <wsdl:message name=\"greetMeSometimeResponse\"> <wsdl:part name=\"out\" element=\"x1:greetMeSometimeResponse\"/> </wsdl:message> <wsdl:portType name=\"GreeterAsync\"> <wsdl:operation name=\"greetMeSometime\"> <wsdl:input name=\"greetMeSometimeRequest\" message=\"tns:greetMeSometimeRequest\"/> <wsdl:output name=\"greetMeSometimeResponse\" message=\"tns:greetMeSometimeResponse\"/> </wsdl:operation> </wsdl:portType> <wsdl:binding name=\"GreeterAsync_SOAPBinding\" type=\"tns:GreeterAsync\"> </wsdl:binding> <wsdl:service name=\"SOAPService\"> <wsdl:port name=\"SoapPort\" binding=\"tns:GreeterAsync_SOAPBinding\"> <soap:address location=\"http://localhost:9000/SoapContext/SoapPort\"/> </wsdl:port> </wsdl:service> </wsdl:definitions>",
"<bindings xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" wsdlLocation=\" AffectedWSDL \" xmlns=\"http://java.sun.com/xml/ns/jaxws\"> <bindings node=\" AffectedNode \"> <enableAsyncMapping>true</enableAsyncMapping> </bindings> </bindings>",
"<plugin> <groupId>org.apache.cxf</groupId> <artifactId>cxf-codegen-plugin</artifactId> <version>USD{cxf.version}</version> <executions> <execution> <id>generate-sources</id> <phase>generate-sources</phase> <configuration> <sourceRoot> outputDir </sourceRoot> <wsdlOptions> <wsdlOption> <wsdl>hello_world.wsdl</wsdl> <extraargs> <extraarg>-client</extraarg> <extraarg>-b async_binding.xml</extraarg> </extraargs> </wsdlOption> </wsdlOptions> </configuration> <goals> <goal>wsdl2java</goal> </goals> </execution> </executions> </plugin>",
"<wsdl:definitions xmlns=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:jaxws=\"http://java.sun.com/xml/ns/jaxws\" ...> <wsdl:portType name=\"GreeterAsync\"> <wsdl:operation name=\"greetMeSometime\"> <jaxws:bindings> <jaxws:enableAsyncMapping>true</jaxws:enableAsyncMapping> </jaxws:bindings> <wsdl:input name=\"greetMeSometimeRequest\" message=\"tns:greetMeSometimeRequest\"/> <wsdl:output name=\"greetMeSometimeResponse\" message=\"tns:greetMeSometimeResponse\"/> </wsdl:operation> </wsdl:portType> </wsdl:definitions>",
"package org.apache.hello_world_async_soap_http; import org.apache.hello_world_async_soap_http.types.GreetMeSometimeResponse; public interface GreeterAsync { public Future<?> greetMeSometimeAsync( java.lang.String requestType, AsyncHandler<GreetMeSometimeResponse> asyncHandler ); public Response<GreetMeSometimeResponse> greetMeSometimeAsync( java.lang.String requestType ); public java.lang.String greetMeSometime( java.lang.String requestType ); }",
"package demo.hw.client; import java.io.File; import java.util.concurrent.Future; import javax.xml.namespace.QName; import javax.xml.ws.Response; import org.apache.hello_world_async_soap_http.*; public final class Client { private static final QName SERVICE_NAME = new QName(\"http://apache.org/hello_world_async_soap_http\", \"SOAPService\"); private Client() {} public static void main(String args[]) throws Exception { // set up the proxy for the client Response<GreetMeSometimeResponse> greetMeSomeTimeResp = port.greetMeSometimeAsync(System.getProperty(\"user.name\")); while (!greetMeSomeTimeResp.isDone()) { // client does some work } GreetMeSometimeResponse reply = greetMeSomeTimeResp.get(); // process the response System.exit(0); } }",
"package demo.hw.client; import java.io.File; import java.util.concurrent.Future; import javax.xml.namespace.QName; import javax.xml.ws.Response; import org.apache.hello_world_async_soap_http.*; public final class Client { private static final QName SERVICE_NAME = new QName(\"http://apache.org/hello_world_async_soap_http\", \"SOAPService\"); private Client() {} public static void main(String args[]) throws Exception { // set up the proxy for the client Response<GreetMeSometimeResponse> greetMeSomeTimeResp = port.greetMeSometimeAsync(System.getProperty(\"user.name\")); GreetMeSometimeResponse reply = greetMeSomeTimeResp.get(); // process the response System.exit(0); } }",
"public interface javax.xml.ws.AsyncHandler { void handleResponse(Response<T> res) }",
"package demo.hw.client; import javax.xml.ws.AsyncHandler; import javax.xml.ws.Response; import org.apache.hello_world_async_soap_http.types.*; public class GreeterAsyncHandler implements AsyncHandler<GreetMeSometimeResponse> { private GreetMeSometimeResponse reply; public void handleResponse(Response<GreetMeSometimeResponse> response) { try { reply = response.get(); } catch (Exception ex) { ex.printStackTrace(); } } public String getResponse() { return reply.getResponseType(); } }",
"package demo.hw.client; import java.io.File; import java.util.concurrent.Future; import javax.xml.namespace.QName; import javax.xml.ws.Response; import org.apache.hello_world_async_soap_http.*; public final class Client { public static void main(String args[]) throws Exception { // Callback approach GreeterAsyncHandler callback = new GreeterAsyncHandler(); Future<?> response = port.greetMeSometimeAsync(System.getProperty(\"user.name\"), callback); while (!response.isDone()) { // Do some work } resp = callback.getResponse(); System.exit(0); } }",
"package demo.hw.client; import java.io.File; import java.util.concurrent.Future; import javax.xml.namespace.QName; import javax.xml.ws.Response; import org.apache.hello_world_async_soap_http.*; public final class Client { private static final QName SERVICE_NAME = new QName(\"http://apache.org/hello_world_async_soap_http\", \"SOAPService\"); private Client() {} public static void main(String args[]) throws Exception { // port is a previously established proxy object. Response<GreetMeSometimeResponse> resp = port.greetMeSometimeAsync(System.getProperty(\"user.name\")); while (!resp.isDone()) { // client does some work } try { GreetMeSometimeResponse reply = greetMeSomeTimeResp.get(); // process the response } catch (ExecutionException ee) { Throwable cause = ee.getCause(); System.out.println(\"Exception \"+cause.getClass().getName()+\" thrown by the remote service.\"); } } }"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/jaxwsasyncdev
|
Chapter 24. The Object-Graph Navigation Language (OGNL)
|
Chapter 24. The Object-Graph Navigation Language (OGNL) Overview OGNL is an expression language for getting and setting properties of Java objects. You use the same expression for both getting and setting the value of a property. The OGNL support is in the camel-ognl module. Camel on EAP deployment This component is supported by the Camel on EAP (Wildfly Camel) framework, which offers a simplified deployment model on the Red Hat JBoss Enterprise Application Platform (JBoss EAP) container. Adding the OGNL module To use OGNL in your routes you need to add a dependency on camel-ognl to your project as shown in Example 24.1, "Adding the camel-ognl dependency" . Example 24.1. Adding the camel-ognl dependency Static import To use the ognl() static method in your application code, include the following import statement in your Java source files: Built-in variables Table 24.1, "OGNL variables" lists the built-in variables that are accessible when using OGNL. Table 24.1. OGNL variables Name Type Description this org.apache.camel.Exchange The current Exchange exchange org.apache.camel.Exchange The current Exchange exception Throwable The Exchange exception (if any) exchangeID String The Exchange ID fault org.apache.camel.Message The Fault message (if any) request org.apache.camel.Message The IN message response org.apache.camel.Message The OUT message properties Map The Exchange properties property( name ) Object The value of the named Exchange property property( name , type ) Type The typed value of the named Exchange property Example Example 24.2, "Route using OGNL" shows a route that uses OGNL. Example 24.2. Route using OGNL
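In addition to the XML route in Example 24.2, the same filter can be written in the Java DSL using the ognl() static method shown above. The following is a minimal sketch, not part of Example 24.2; the endpoint URIs simply mirror the XML route and the class name OgnlFilterRoute is illustrative.

// Minimal Java DSL sketch of the same OGNL filter; mirrors the XML route above.
import org.apache.camel.builder.RouteBuilder;
import static org.apache.camel.language.ognl.OgnlExpression.ognl;

public class OgnlFilterRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("seda:foo")
            .filter(ognl("request.headers.foo == 'bar'"))  // same OGNL predicate as Example 24.2
            .to("seda:bar");
    }
}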
|
[
"<!-- Maven POM File --> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ognl</artifactId> <version>USD{camel-version}</version> </dependency> </dependencies>",
"import static org.apache.camel.language.ognl.OgnlExpression.ognl;",
"<camelContext> <route> <from uri=\"seda:foo\"/> <filter> <language langauge=\"ognl\">request.headers.foo == 'bar'</language> <to uri=\"seda:bar\"/> </filter> </route> </camelContext>"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/OGNL
|
Chapter 125. Google Sheets Component
|
Chapter 125. Google Sheets Component Available as of Camel version 2.23 The Google Sheets component provides access to Google Sheets via the Google Sheets Web APIs . Google Sheets uses the OAuth 2.0 protocol for authenticating a Google account and authorizing access to user data. Before you can use this component, you will need to create an account and generate OAuth credentials. Credentials comprise a clientId, a clientSecret, and a refreshToken. A handy resource for generating a long-lived refreshToken is the OAuth playground . Maven users will need to add the following dependency to their pom.xml for this component: 125.1. URI Format The GoogleSheets Component uses the following URI format: Endpoint prefix can be one of: spreadsheets data 125.2. GoogleSheetsComponent The Google Sheets component supports 3 options, which are listed below. Name Description Default Type configuration (common) To use the shared configuration GoogleSheets Configuration clientFactory (advanced) To use the GoogleSheetsClientFactory as factory for creating the client. Will by default use BatchGoogleSheetsClientFactory GoogleSheetsClient Factory resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Google Sheets endpoint is configured using URI syntax: with the following path and query parameters: 125.2.1. Path Parameters (2 parameters): Name Description Default Type apiName Required What kind of operation to perform GoogleSheetsApiName methodName Required What sub operation to use for the selected operation String 125.2.2. Query Parameters (10 parameters): Name Description Default Type accessToken (common) OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage. String applicationName (common) Google Sheets application name. Example would be camel-google-sheets/1.0 String clientId (common) Client ID of the sheets application String clientSecret (common) Client secret of the sheets application String inBody (common) Sets the name of a parameter to be passed in the exchange In Body String refreshToken (common) OAuth 2 refresh token. Using this, the Google Sheets component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages will be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 125.3. Spring Boot Auto-Configuration The component supports 10 options, which are listed below.
Name Description Default Type camel.component.google-sheets.client-factory To use the GoogleSheetsClientFactory as factory for creating the client. Will by default use BatchGoogleSheetsClientFactory. The option is a org.apache.camel.component.google.sheets.GoogleSheetsClientFactory type. String camel.component.google-sheets.configuration.access-token OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage. String camel.component.google-sheets.configuration.api-name What kind of operation to perform GoogleSheetsApiName camel.component.google-sheets.configuration.application-name Google Sheets application name. Example would be camel-google-sheets/1.0 String camel.component.google-sheets.configuration.client-id Client ID of the sheets application String camel.component.google-sheets.configuration.client-secret Client secret of the sheets application String camel.component.google-sheets.configuration.method-name What sub operation to use for the selected operation String camel.component.google-sheets.configuration.refresh-token OAuth 2 refresh token. Using this, the Google Sheets component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived. String camel.component.google-sheets.enabled Whether to enable auto configuration of the google-sheets component. This is enabled by default. Boolean camel.component.google-sheets.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 125.4. Producer Endpoints Producer endpoints can use endpoint prefixes followed by endpoint names and associated options described below. A shorthand alias can be used for some endpoints. The endpoint URI MUST contain a prefix. Endpoint options that are not mandatory are denoted by []. When there are no mandatory options for an endpoint, one of the set of [] options MUST be provided. Producer endpoints can also use a special option inBody that in turn should contain the name of the endpoint option whose value will be contained in the Camel Exchange In message. Any of the endpoint options can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelGoogleSheets.<option> . Note that the inBody option overrides the message header, i.e. the endpoint option inBody=option would override a CamelGoogleSheets.option header. For more information on the endpoints and options see API documentation at: https://developers.google.com/sheets/api/reference/rest/ 125.5. Consumer Endpoints Any of the producer endpoints can be used as a consumer endpoint. Consumer endpoints can use Scheduled Poll Consumer Options with a consumer. prefix to schedule endpoint invocation. Consumer endpoints that return an array or collection will generate one exchange per element, and their routes will be executed once for each exchange. 125.6. Message Headers Any URI option can be provided in a message header for producer endpoints with a CamelGoogleSheets. prefix. 125.7. Message Body All result message bodies utilize objects provided by the underlying APIs used by the GoogleSheetsComponent. Producer endpoints can specify the option name for incoming message body in the inBody endpoint URI parameter. For endpoints that return an array or collection, a consumer endpoint will map every element to distinct messages.
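For orientation only, a producer route that reads a range of cell values might look like the following sketch. This is an illustration rather than an example from this document: it assumes the data endpoint prefix with a get method that accepts spreadsheetId and range options (mirroring the underlying spreadsheets.values.get API), the spreadsheet ID and range are placeholders, and the OAuth credentials (clientId, clientSecret, refreshToken) are assumed to be configured on the component elsewhere.

// Illustrative sketch only: reads a range of values using the data endpoint prefix.
// The spreadsheet ID and range are placeholders; OAuth credentials are assumed
// to be configured on the google-sheets component.
import org.apache.camel.builder.RouteBuilder;

public class SheetsReadRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:readSheet")
            .to("google-sheets://data/get"
                + "?spreadsheetId=YOUR_SPREADSHEET_ID"
                + "&range=A1:B5");
    }
}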
|
[
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-google-sheets</artifactId> <version>2.23.0</version> </dependency>",
"google-sheets://endpoint-prefix/endpoint?[options]",
"google-sheets:apiName/methodName"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/google-sheets-component
|
Appendix B. Using Red Hat Maven repositories
|
Appendix B. Using Red Hat Maven repositories This section describes how to use Red Hat-provided Maven repositories in your software. B.1. Using the online repository Red Hat maintains a central Maven repository for use with your Maven-based projects. For more information, see the repository welcome page . There are two ways to configure Maven to use the Red Hat repository: Add the repository to your Maven settings Add the repository to your POM file Adding the repository to your Maven settings This method of configuration applies to all Maven projects owned by your user, as long as your POM file does not override the repository configuration and the included profile is enabled. Procedure Locate the Maven settings.xml file. It is usually inside the .m2 directory in the user home directory. If the file does not exist, use a text editor to create it. On Linux or UNIX: /home/ <username> /.m2/settings.xml On Windows: C:\Users\<username>\.m2\settings.xml Add a new profile containing the Red Hat repository to the profiles element of the settings.xml file, as in the following example: Example: A Maven settings.xml file containing the Red Hat repository <settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings> For more information about Maven configuration, see the Maven settings reference . Adding the repository to your POM file To configure a repository directly in your project, add a new entry to the repositories element of your POM file, as in the following example: Example: A Maven pom.xml file containing the Red Hat repository <project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project> For more information about POM file configuration, see the Maven POM reference . B.2. Using a local repository Red Hat provides file-based Maven repositories for some of its components. These are delivered as downloadable archives that you can extract to your local filesystem. To configure Maven to use a locally extracted repository, apply the following XML in your Maven settings or POM file: <repository> <id>red-hat-local</id> <url> ${repository-url} </url> </repository> ${repository-url} must be a file URL containing the local filesystem path of the extracted repository. Table B.1. Example URLs for local Maven repositories Operating system Filesystem path URL Linux or UNIX /home/alice/maven-repository file:/home/alice/maven-repository Windows C:\repos\red-hat file:C:\repos\red-hat
|
[
"/home/ <username> /.m2/settings.xml",
"C:\\Users\\<username>\\.m2\\settings.xml",
"<settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings>",
"<project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project>",
"<repository> <id>red-hat-local</id> <url> USD{repository-url} </url> </repository>"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_core_protocol_jms_client/using_red_hat_maven_repositories
|