Chapter 1. Downloading, converting, and analyzing your SBOM
Chapter 1. Downloading, converting, and analyzing your SBOM The following procedure explains how to inspect your SBOM with TPA. Specifically, it outlines how to download an SBOM, convert the SBOM into a compatible format, and analyze the SBOM with TPA. Prerequisites: Cosign Syft jq Procedure: In your container registry, find the full address of the container image whose SBOM you want to inspect. The address has the format registry/namespace/image:tag. For example, quay.io/app/app-image:ff59e21cc... Note Do not use the address of the SBOM image, which ends with .sbom. Use the address of the image for the actual application. In your CLI, use cosign to download the SBOM. Redirect the output to a file you can reference later. Make sure the new filename ends with .json. (Optional) Your SBOM ultimately appears in the TPA UI with a name listed in this .json file. By default, Syft creates that name based on the filepath of the SBOM. If you want your SBOM to appear in the TPA UI with a more meaningful name, you must manually change it in the .json file you just downloaded. Specifically, you must replace the name in the .metadata.component object. You can optionally add a version field here, if you wish. Run the following command to store the Bombastic API URL as an environment variable. Note In this command and the following commands, after -n, be sure to enter the namespace in which you installed RHTAP. The examples assume you used a namespace called rhtap. In your CLI, create a new token_issuer_url environment variable with the following value. Additionally, you need to set the TPA__OIDC__WALKER_CLIENT_SECRET environment variable. If you have access to the private.env file, which your organization generated while installing RHTAP, you can simply source that file. If you do not have access to that file, ask whoever installed RHTAP to provide you with the TPA OIDC Walker client secret. If you have access to the private.env file: Or, once you have obtained the secret from whoever installed RHTAP: Run the following command to obtain a token for the Bombastic API. The token allows you to upload the SBOM. Try to upload the SBOM. If you receive the error message storage error: invalid storage content, use Syft to convert your SBOM to an earlier CycloneDX version, 1.4. You can disregard warnings about merging packages with different pURLs; they indicate that Syft might discard some data from the original SBOM, but that data is not crucial. Then try to upload the SBOM again: Access the cluster that is running RHTAP through the OpenShift Console. In the rhtap project, navigate to Networking > Routes. Open the URL listed on the same row as the spog-ui service. Use the Register button to create a new account and authenticate to TPA. Select your SBOM (the most recent upload) and see what insights TPA has provided about your application based on that SBOM. Go to the Dependency Analytics Report tab to view vulnerabilities and remediations. Additional resources Parts of this document are based on the Trustification documentation for SBOMs. Revised on 2024-07-15 21:03:29 UTC
[ "cosign download sbom quay.io/redhat/rhtap-app:8d34c03188cf294a77339b2a733b1f6811263a369b309e6b170d9b489abc0334 > /tmp/sbom.json", "vim /tmp/sbom.json \"component\": { \"bom-ref\": \"fdef64df97f1d419\", \"type\": \"file\", \"name\": \"/var/lib/containers/storage/vfs/dir/3b3009adcd335d2b3902c5a7014d22b2beb6392b1958f1d9c7aabe24acab2deb\" #Replace this with a meaningful name }", "bombastic_api_url=\"https://$(oc -n rhtap get route --selector app.kubernetes.io/name=bombastic-api -o jsonpath='{.items[].spec.host}')\"", "token_issuer_url=https://$(oc -n rhtap get route --selector app.kubernetes.io/name=keycloak -o jsonpath='{.items[].spec.host}')/realms/chicken/protocol/openid-connect/token", "source private.env", "TPA__OIDC__WALKER_CLIENT_SECRET=<secret value>", "tpa_token=$(curl -d 'client_id=walker' -d \"client_secret=$TPA__OIDC__WALKER_CLIENT_SECRET\" -d 'grant_type=client_credentials' \"$token_issuer_url\" | jq -r .access_token)", "curl -H \"authorization: Bearer $tpa_token\" -H \"transfer-encoding: chunked\" -H \"content-type: application/json\" --data @/tmp/sbom.json \"$bombastic_api_url/api/v1/sbom?id=my-sbom\"", "syft convert /tmp/sbom.json -o cyclonedx-json@1.4=/tmp/sbom-1-4.json", "curl -H \"authorization: Bearer $tpa_token\" -H \"transfer-encoding: chunked\" -H \"content-type: application/json\" --data @/tmp/sbom-1-4.json \"$bombastic_api_url/api/v1/sbom?id=my-sbom\"" ]
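If you prefer to script the optional renaming step rather than editing the file in vim, the following sketch uses jq, which is already listed as a prerequisite. The name and version values are placeholders to replace with something meaningful for your application, and the final command is a quick sanity check of the CycloneDX format and spec version before you upload.

jq '.metadata.component.name = "my-application" | .metadata.component.version = "1.0.0"' /tmp/sbom.json > /tmp/sbom-named.json
mv /tmp/sbom-named.json /tmp/sbom.json
# Confirm the document format and spec version of the file you are about to upload.
jq -r '.bomFormat, .specVersion' /tmp/sbom.json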
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/inspecting_your_sbom_using_red_hat_trusted_profile_analyzer/proc_inspecting_sbom_default
Chapter 55. Kamelet
Chapter 55. Kamelet Both producer and consumer are supported The Kamelet Component provides support for interacting with the Camel Route Template engine using Endpoint semantic. 55.1. Dependencies When using kamelet with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kamelet-starter</artifactId> </dependency> 55.2. URI format kamelet:templateId/routeId[?options] 55.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 55.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 55.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 55.4. Component Options The Kamelet component supports 9 options, which are listed below. Name Description Default Type location (common) The location(s) of the Kamelets on the file system. Multiple locations can be set separated by comma. classpath:/kamelets String routeProperties (common) Set route local parameters. Map templateProperties (common) Set template local parameters. Map bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean block (producer) If sending a message to a kamelet endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean timeout (producer) The timeout value to use if block is enabled. 30000 long autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean routeTemplateLoaderListener (advanced) Autowired To plugin a custom listener for when the Kamelet component is loading Kamelets from external resources. RouteTemplateLoaderListener 55.5. Endpoint Options The Kamelet endpoint is configured using URI syntax: with the following path and query parameters: 55.5.1. Path Parameters (2 parameters) Name Description Default Type templateId (common) Required The Route Template ID. String routeId (common) The Route ID. Default value notice: The ID will be auto-generated if not provided. String 55.5.2. Query Parameters (8 parameters) Name Description Default Type location (common) Location of the Kamelet to use which can be specified as a resource from file system, classpath etc. The location cannot use wildcards, and must refer to a file including extension, for example file:/etc/foo-kamelet.xml. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern block (producer) If sending a message to a direct endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. true boolean failIfNoConsumers (producer) Whether the producer should fail by throwing an exception, when sending to a kamelet endpoint with no active consumers. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean timeout (producer) The timeout value to use if block is enabled. 30000 long Note The kamelet endpoint is lenient , which means that the endpoint accepts additional parameters that are passed to the engine and consumed upon route materialization. 55.6. 
Discovery If a Route Template is not found, the kamelet endpoint tries to load the related kamelet definition from the file system (by default classpath:/kamelets ). The default resolution mechanism expects kamelet files to have the extension .kamelet.yaml . 55.7. Samples Kamelets can be used as if they were standard Camel components. For example, suppose that we have created a Route Template as follows: routeTemplate("setMyBody") .templateParameter("bodyValue") .from("kamelet:source") .setBody().constant("{{bodyValue}}"); Note To let the Kamelet component wire the materialized route to the caller processor, we need to be able to identify the input and output endpoints of the route, and this is done by using kamelet:source to mark the input endpoint and kamelet:sink for the output endpoint. Then the template can be instantiated and invoked as shown below: from("direct:setMyBody") .to("kamelet:setMyBody?bodyValue=myKamelet"); Behind the scenes, the Kamelet component does the following things: It instantiates a route out of the Route Template identified by the given templateId path parameter (in this case setMyBody ) It will act like the direct component and connect the current route to the materialized one. If you had to do it programmatically, it would have been something like: routeTemplate("setMyBody") .templateParameter("bodyValue") .from("direct:{{foo}}") .setBody().constant("{{bodyValue}}"); TemplatedRouteBuilder.builder(context, "setMyBody") .parameter("foo", "bar") .parameter("bodyValue", "myKamelet") .add(); from("direct:template") .to("direct:bar"); 55.8. Spring Boot Auto-Configuration The component supports 10 options, which are listed below. Name Description Default Type camel.component.kamelet.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kamelet.block If sending a message to a kamelet endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. true Boolean camel.component.kamelet.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kamelet.enabled Whether to enable auto configuration of the kamelet component. This is enabled by default. Boolean camel.component.kamelet.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.
false Boolean camel.component.kamelet.location The location(s) of the Kamelets on the file system. Multiple locations can be set separated by comma. classpath:/kamelets String camel.component.kamelet.route-properties Set route local parameters. Map camel.component.kamelet.route-template-loader-listener To plugin a custom listener for when the Kamelet component is loading Kamelets from external resources. The option is a org.apache.camel.spi.RouteTemplateLoaderListener type. RouteTemplateLoaderListener camel.component.kamelet.template-properties Set template local parameters. Map camel.component.kamelet.timeout The timeout value to use if block is enabled. 30000 Long
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kamelet-starter</artifactId> </dependency>", "kamelet:templateId/routeId[?options]", "kamelet:templateId/routeId", "routeTemplate(\"setMyBody\") .templateParameter(\"bodyValue\") .from(\"kamelet:source\") .setBody().constant(\"{{bodyValue}}\");", "from(\"direct:setMyBody\") .to(\"kamelet:setMyBody?bodyValue=myKamelet\");", "routeTemplate(\"setMyBody\") .templateParameter(\"bodyValue\") .from(\"direct:{{foo}}\") .setBody().constant(\"{{bodyValue}}\"); TemplatedRouteBuilder.builder(context, \"setMyBody\") .parameter(\"foo\", \"bar\") .parameter(\"bodyValue\", \"myKamelet\") .add(); from(\"direct:template\") .to(\"direct:bar\");" ]
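For Spring Boot applications, the auto-configuration options listed above map directly to application.properties entries. The following is a minimal sketch assuming a standard Maven project layout; the extra file-system Kamelet location and the 10-second timeout are illustrative values, not recommended defaults.

cat >> src/main/resources/application.properties <<'EOF'
# Load Kamelets from the classpath plus a local directory (comma-separated locations).
camel.component.kamelet.location=classpath:/kamelets,file:/etc/camel/kamelets
# Block producers while a kamelet consumer is inactive, but give up after 10 seconds.
camel.component.kamelet.block=true
camel.component.kamelet.timeout=10000
EOF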
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kamelet-component-starter
10.2.2.5. Error Documents
10.2.2.5. Error Documents To use a hard-coded message with the ErrorDocument directive, the message should be enclosed in a pair of double quotation marks " , rather than just preceded by a double quotation mark as required in Apache HTTP Server 1.3. For example, the following is a sample Apache HTTP Server 1.3 directive: To migrate an ErrorDocument setting to Apache HTTP Server 2.0, use the following structure: Note the trailing double quote in the ErrorDocument directive example. For more on this topic, refer to the following documentation on the Apache Software Foundation's website: http://httpd.apache.org/docs-2.0/mod/core.html#errordocument
[ "ErrorDocument 404 \"The document was not found", "ErrorDocument 404 \"The document was not found \"" ]
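After updating the directive, it is worth checking the configuration before reloading the server. The following sketch assumes the Apache HTTP Server 2.0 apachectl script is on your PATH.

# Check the syntax of the migrated configuration, including the quoted ErrorDocument message.
apachectl configtest
# If the syntax is OK, apply the change without dropping existing connections.
apachectl graceful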
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-httpd-mig-main-error
2.5. Smart Card Token Management with Certificate System
2.5. Smart Card Token Management with Certificate System A smart card is a hardware cryptographic device containing cryptographic certificates and keys. It can be employed by the user to participate in operations such as secure website access and secure mail. It can also serve as an authentication device to log in to various operating systems such as Red Hat Enterprise Linux. The management of these cards or tokens throughout their entire lifetime in service is accomplished by the Token Management System (TMS). A TMS environment requires a Certificate Authority (CA), Token Key Service (TKS), and Token Processing System (TPS), with an optional Key Recovery Authority (KRA) for server-side key generation and key archival and recovery. Online Certificate Status Protocol (OCSP) can also be used to work with the CA to serve online certificate status requests. This chapter provides an overview of the TKS and TPS systems, which provide the smart card management functions of Red Hat Certificate System, as well as Enterprise Security Client (ESC), that works with TMS from the user end. Figure 2.4. How the TMS Manages Smart Cards 2.5.1. Token Key Service (TKS) The Token Key Service (TKS) is responsible for managing one or more master keys. It maintains the master keys and is the only entity within the TMS that has access to the key materials. In an operational environment, each valid smart card token contains a set of symmetric keys that are derived from both the master key and the ID that is unique to the card (CUID). Initially, a default (unique only per manufacturer master key) set of symmetric keys is initialized on each smart card by the manufacturer. This default set should be changed at the deployment site by going through a Key Changeover operation to generate the new master key on TKS. As the sole owner to the master key, when given the CUID of a smart card, TKS is capable of deriving the set of symmetric keys residing on that particular smart card, which would then allow TKS to establish a session-based Secure Channel for secure communication between TMS and each individual smart card. Note Because of the sensitivity of the data that the TKS manages, the TKS should be set behind the firewall with restricted access. 2.5.1.1. Master Keys and Key Sets The TKS supports multiple smart card key sets. Each smart card vendor creates different default (developer) static key sets for their smart card token stocks, and the TKS is equipped with the static key set (per manufacturer) to kickstart the format process of a blank token. During the format process of a blank smart card token, a Java applet and the uniquely derived symmetric key set are injected into the token. Each master key (in some cases referred to as keySet ) that the TKS supports is to have a set of entries in the TKS configuration file ( CS.cfg ). Each TPS profile contains a configuration to direct its enrollment to the proper TKS keySet for the matching key derivation process that would essentially be responsible for establishing the Secure Channel secured by a set of session-specific keys between TMS and the smart card token. On TKS, master keys are defined by named keySets for references by TPS. On TPS, depending on the enrollment type (internal or external registration), The keySet is either specified in the TPS profile, or determined by the keySet Mapping Resolver. 2.5.1.2. Key Ceremony (Shared Key Transport) A Key Ceremony is a process for transporting highly sensitive keys in a secure way from one location to another. 
In one scenario, in a highly secure deployment environment, the master key can be generated in a secure vault with no network to the outside. Alternatively, an organization might want to have TKS and TPS instances on different physical machines. In either case, under the assumption that no one single person is to be trusted with the key, Red Hat Certificate System TMS provides a utility called tkstool to manage the secure key transportation. 2.5.1.3. Key Update (Key Changeover) When Global Platform-compliant smart cards are created at the factory, the manufacturer will burn a set of default symmetric keys onto the token. The TKS is initially configured to use these symmetric keys (one KeySet entry per vendor in the TKS configuration). However, since these symmetric keys are not unique to the smart cards from the same stock, and because these are well-known keys, it is strongly encouraged to replace these symmetric keys with a set that is unique per token, not shared by the manufacturer, to restrict the set of entities that can manipulate the token. The changing over of the keys takes place with the assistance of the Token Key Service subsystem. One of the functions of the TKS is to oversee the Master Keys from which the previously discussed smart card token keys are derived. There can be more than one master key residing under the control of the TKS. Important When this key changeover process is done on a token, the token may become unusable in the future since it no longer has the default key set enabled. The key is essentially only as good as long as the TPS and TKS system that provisioned the token is valid. Because of this, it is essential to keep all the master keys, even if any of them are outdated. You can disable the old master keys in TKS for better control, but do not delete them unless disabled tokens are part of your plan. There is support to revert the token keys back to the original key set, which is viable if the token is to be reused again in some sort of a testing scenario. 2.5.1.4. APDUs and Secure Channels The Red Hat Certificate System Token Management System (TMS) supports the GlobalPlatform smart card specification, in which the Secure Channel implementation is done with the Token Key System (TKS) managing the master key and the Token Processing System (TPS) communicating with the smart card (tokens) with Application Protocol Data Units (APDUs). There are two types of APDUs: Command APDUs , sent by the TPS to smart cards Response APDUs , sent by smart cards to the TPS as response to command APDUs The initiation of the APDU commands may be triggered when clients take action and connect to the Certificate System server for requests. A secure channel begins with an InitializeUpdate APDU sent from TPS to the smart card token, and is fully established with the ExternalAuthenticate APDU. Then, both the token and TMS would have established a set of shared secrets, called session keys, which are used to encrypt and authenticate the communication. This authenticated and encrypted communication channel is called Secure Channel. Because TKS is the only entity that has access to the master key which is capable of deriving the set of unique symmetric on-token smart card keys, the Secure Channel provides the adequately safeguarded communication between TMS and each individual token. Any disconnection of the channel will require reestablishment of new session keys for a new channel. 2.5.2. 
Token Processing System (TPS) The Token Processing System (TPS) is a registration authority for smart card certificate enrollment. It acts as a conduit between the user-centered Enterprise Security Client (ESC), which interacts with client side smart card tokens, and the Certificate System back end subsystems, such as the Certificate Authority (CA) and the Key Recovery Authority (KRA). In TMS, the TPS is required in order to manage smart cards, as it is the only TMS entity that understands the APDU commands and responses. TPS sends commands to the smart cards to help them generate and store keys and certificates for a specific entity, such as a user or device. Smart card operations go through the TPS and are forwarded to the appropriate subsystem for action, such as the CA to generate certificates or the KRA to generate, archive, or recover keys. 2.5.2.1. Coolkey Applet Red Hat Certificate System includes the Coolkey Java applet, written specifically to run on TMS-supported smart card tokens. The Coolkey applet connects to a PKCS#11 module that handles the certificate and key related operations. During a token format operation, this applet is injected onto the smart card token using the Secure Channel protocol, and can be updated per configuration. 2.5.2.2. Token Operations The TPS in Red Hat Certificate System is available to provision smart cards on the behalf of end users of the smart cards. The Token Processing System provides support for the following major token operations: Token Format - The format operation is responsible for installing the proper Coolkey applet onto the token. The applet provides a platform where subsequent cryptographic keys and certificates can be later placed. Token Enrollment - The enrollment operation results in a smart card populated with required cryptographic keys and cryptographic certificates. This material allows the user of the smart card to participate in operations such as secure web site access and secure mail. Two types of enrollments are supported, which is configured globally: Internal Registration - Enrollment by TPS profiles determined by the profile Mapping Resolver . External Registration - Enrollment by TPS profiles determined by the entries in the user's LDAP record. Token PIN Reset - The token PIN reset operation allows the user of the token to specify a new PIN that is used to log into the token, making it available for performing cryptographic operations. The following other operations can be considered supplementary or inherent operations to the main ones listed above. They can be triggered per relevant configuration or by the state of the token. Key Generation - Each PKI certificate is comprised of a public/private key pair. In Red Hat Certificate System, the generation of the keys can be done in two ways, depending on the TPS profile configuration: Token Side Key Generation - The PKI key pairs are generated on the smart card token. Generating the key pairs on the token side does not allow for key archival. Server Side Key Generation - The PKI key pairs are generated on the TMS server side. The key pairs are then sent back to the token using Secure Channel. Generating the key pairs on the server side allows for key archival. Certificate Renewal - This operation allows a previously enrolled token to have the certificates currently on the token reissued while reusing the same keys. This is useful in situations where the old certificates are due to expire and you want to create new ones but maintain the original key material. 
Certificate Revocation - Certificate revocation can be triggered based on TPS profile configuration or based on token state. Normally, only the CA which issued a certificate can revoke it, which could mean that retiring a CA would make it impossible to revoke certain certificates. However, it is possible to route revocation requests for tokens to the retired CA while still routing all other requests such as enrollment to a new, active CA. This mechanism is called Revocation Routing . Token Key Changeover - The key changeover operation, triggered by a format operation, results in the ability to change the internal keys of the token from the default developer key set to a new key set controlled by the deployer of the Token Processing System. This is usually done in any real deployment scenario since the developer key set is better suited to testing situations. Applet Update - During the course of a TMS deployment, the Coolkey smart card applet can be updated or downgraded if required. 2.5.2.3. TPS Profiles The Certificate System Token Processing System subsystem facilitates the management of smart card tokens. Tokens are provisioned by the TPS such that they are taken from a blank state to either a Formatted or Enrolled condition. A Formatted token is one that contains the CoolKey applet supported by TPS, while an Enrolled token is personalized (a process called binding ) to an individual with the requisite certificates and cryptographic keys. This fully provisioned token is ready to use for cryptographic operations. The TPS can also manage Profiles . The notion of a token Profile is related to: The steps taken to Format or Enroll a token. The attributes contained within the finished token after the operation has been successfully completed. The following list contains some of the quantities that make up a unique token profile: How does the TPS connect to the user's authentication LDAP database? Will user authentication be required for this token operation? If so, what authentication manager will be used? How does the TPS connect to a Certificate System CA from which it will obtain certificates? How are the private and public keys generated on this token? Are they generated on the token side or on the server side? What key size (in bits) is to be used when generating private and public keys? Which certificate enrollment profile (provisioned by the CA) is to be used to generate the certificates on this token? Note This setting will determine the final structure of the certificates to be written to the token. Different certificates can be created for different uses, based on extensions included in the certificate. For example, one certificate can specialize in data encryption, and another one can be used for signature operations. What version of the Coolkey applet will be required on the token? How many certificates will be placed on this token for an enrollment operation? These parameters and many others can be configured for each token type or profile. A full list of available configuration options is available in the Red Hat Certificate System Administration Guide . Another question to consider is how a given token being provisioned by a user will be mapped to an individual token profile. There are two types of registration: Internal Registration - In this case, the TPS profile ( tokenType ) is determined by the profile Mapping Resolver . This filter-based resolver can be configured to take any of the data provided by the token into account and determine the target profile.
External Registration - When using external registration, the profile (in name only - actual profiles are still defined in the TPS in the same fashion as those used by the internal registration) is specified in each user's LDAP record, which is obtained during authentication. This allows the TPS to obtain key enrollment and recovery information from an external registration Directory Server where user information is stored. This gives you the control to override the enrollment, revocation, and recovery policies that are inherent to the TPS internal registration mechanism. The user LDAP record attribute names relevant to external registration are configurable. External registration can be useful when the concept of a "group certificate" is required. In that case, all users within a group can have a special record configured in their LDAP profiles for downloading a shared certificate and keys. The registration to be used is configured globally per TPS instance. 2.5.2.4. Token Database The Token Processing System makes use of the LDAP token database store, which is used to keep a list of active tokens and their respective certificates, and to keep track of the current state of each token. A brand new token is considered Uninitialized , while a fully enrolled token is Enrolled . This data store is constantly updated and consulted by the TPS when processing tokens. 2.5.2.4.1. Token States and Transitions The Token Processing System stores states in its internal database in order to determine the current token status as well as actions which can be performed on the token. 2.5.2.4.1.1. Token States The following table lists all possible token states: Table 2.9. Possible Token States Name Code Label FORMATTED 0 Formatted (uninitialized) DAMAGED 1 Physically damaged PERM_LOST 2 Permanently lost SUSPENDED 3 Suspended (temporarily lost) ACTIVE 4 Active TERMINATED 6 Terminated UNFORMATTED 7 Unformatted The command line interface displays token states using the Name listed above. The graphical interface uses the Label instead. Note The above table contains no state with code 5 , which previously belonged to a state that was removed. 2.5.2.4.1.2. Token State Transitions Done Using the Graphical or Command Line Interface Each token state has a limited amount of states it can transition into. For example, a token can change state from FORMATTED to ACTIVE or DAMAGED , but it can never transition from FORMATTED to UNFORMATTED . Furthermore, the list of states a token can transition into is different depending on whether the transition is triggered manually using a command line or the graphical interface, or automatically using a token operation. The list of allowed manual transitions is stored in the tokendb.allowedTransitions property, and the tps.operations.allowedTransitions property controls allowed transitions triggered by token operations. The default configurations for both manual and token operation-based transitions are stored in the /usr/share/pki/tps/conf/CS.cfg configuration file. 2.5.2.4.1.2.1. Token State Transitions Using the Command Line or Graphical Interface All possible transitions allowed in the command line or graphical interface are described in the TPS configuration file using the tokendb.allowedTransitions property: The property contains a comma-separated list of transitions. Each transition is written in the format of <current code> : <new code> . The codes are described in Table 2.9, "Possible Token States" . 
The default configuration is preserved in /usr/share/pki/tps/conf/CS.cfg . The following table describes each possible transition in more detail: Table 2.10. Possible Manual Token State Transitions Transition Current State Next State Description 0:1 FORMATTED DAMAGED This token has been physically damaged. 0:2 FORMATTED PERM_LOST This token has been permanently lost. 0:3 FORMATTED SUSPENDED This token has been suspended (temporarily lost). 0:6 FORMATTED TERMINATED This token has been terminated. 3:2 SUSPENDED PERM_LOST This suspended token has been permanently lost. 3:6 SUSPENDED TERMINATED This suspended token has been terminated. 4:1 ACTIVE DAMAGED This token has been physically damaged. 4:2 ACTIVE PERM_LOST This token has been permanently lost. 4:3 ACTIVE SUSPENDED This token has been suspended (temporarily lost). 4:6 ACTIVE TERMINATED This token has been terminated. 6:7 TERMINATED UNFORMATTED Reuse this token. The following transitions are generated automatically depending on the token's original state. If a token was originally FORMATTED and then became SUSPENDED , it can only return to the FORMATTED state. If a token was originally ACTIVE and then became SUSPENDED , it can only return to the ACTIVE state. Table 2.11. Token State Transitions Triggered Automatically Transition Current State Next State Description 3:0 SUSPENDED FORMATTED This suspended (temporarily lost) token has been found. 3:4 SUSPENDED ACTIVE This suspended (temporarily lost) token has been found. 2.5.2.4.1.3. Token State Transitions using Token Operations All possible transitions that can be done using token operations are described in the TPS configuration file using the tps.operations.allowedTransitions property: The property contains a comma-separated list of transitions. Each transition is written in the format of <current code> : <new code> . The codes are described in Table 2.9, "Possible Token States" . The default configuration is preserved in /usr/share/pki/tps/conf/CS.cfg . The following table describes each possible transition in more detail: Table 2.12. Possible Token State Transitions using Token Operations Transition Current State Next State Description 0:0 FORMATTED FORMATTED This allows reformatting a token or upgrading applet/key in a token. 0:4 FORMATTED ACTIVE This allows enrolling a token. 4:4 ACTIVE ACTIVE This allows re-enrolling an active token. May be useful for external registration. 4:0 ACTIVE FORMATTED This allows formatting an active token. 7:0 UNFORMATTED FORMATTED This allows formatting a blank or previously used token. 2.5.2.4.1.4. Token State and Transition Labels The default labels for token states and transitions are stored in the /usr/share/pki/tps/conf/token-states.properties configuration file. By default, the file has the following contents: 2.5.2.4.1.5. Customizing Allowed Token State Transitions To customize the list of token state transitions, edit the following properties in /var/lib/pki/instance_name/tps/conf/CS.cfg : tokendb.allowedTransitions to customize the list of allowed transitions performed using the command line or graphical interface tps.operations.allowedTransitions to customize the list of allowed transitions using token operations Transitions can be removed from the default list if necessary, but new transitions cannot be added unless they were in the default list. The defaults are stored in /usr/share/pki/tps/conf/CS.cfg .
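A short sketch of inspecting and trimming these lists follows; instance_name is a placeholder for your TPS instance directory, and the edited value shown only illustrates removing the 6:7 (TERMINATED to UNFORMATTED) manual transition from the default list.

# Compare the shipped defaults with the values used by the running instance.
grep allowedTransitions /usr/share/pki/tps/conf/CS.cfg
grep allowedTransitions /var/lib/pki/instance_name/tps/conf/CS.cfg
# A trimmed manual-transition list in the instance CS.cfg could then look like this
# (transitions can only be removed from the default list, never added):
# tokendb.allowedTransitions=0:1,0:2,0:3,0:6,3:2,3:6,4:1,4:2,4:3,4:6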
2.5.2.4.1.6. Customizing Token State and Transition Labels To customize token state and transition labels, copy the default /usr/share/pki/tps/conf/token-states.properties file into your instance folder ( /var/lib/pki/instance_name/tps/conf/ ), and change the labels listed inside as needed. Changes take effect immediately; the server does not need to be restarted. The TPS user interface may require a reload. To revert to the default state and label names, delete the edited token-states.properties file from your instance folder. 2.5.2.4.1.7. Token Activity Log Certain TPS activities are logged. Possible events in the log file are listed in the table below. Table 2.13. TPS Activity Log Events Activity Description add A token was added. format A token was formatted. enrollment A token was enrolled. recovery A token was recovered. renewal A token was renewed. pin_reset A token PIN was reset. token_status_change A token status was changed using the command line or graphical interface. token_modify A token was modified. delete A token was deleted. cert_revocation A token certificate was revoked. cert_unrevocation A token certificate was unrevoked. 2.5.2.4.2. Token Policies In case of internal registration, each token can be governed by a set of token policies. The default policies are: All TPS operations under internal registration are subject to the policies specified in the token's record. If no policies are specified for a token, the TPS uses the default set of policies. 2.5.2.5. Mapping Resolver The Mapping Resolver is an extensible mechanism used by the TPS to determine which token profile to assign to a specific token based on configurable criteria. Each mapping resolver instance can be uniquely defined in the configuration, and each operation can point to any of the defined mapping resolver instances. Note The mapping resolver framework provides a platform for writing custom plug-ins. However, instructions on how to write a plug-in are outside the scope of this document. FilterMappingResolver is the only mapping resolver implementation provided with the TPS by default. It allows you to define a set of mappings and a target result for each mapping. Each mapping contains a set of filters, where: If the input filter parameters pass all filters within a mapping, the target value is assigned. If the input parameters fail a filter, that mapping is skipped and the next one in order is tried. If a filter has no specified value, it always passes. If a filter does have a specified value, then the input parameters must match exactly. The order in which mappings are defined is important. The first mapping which passes is considered resolved and is returned to the caller. The input filter parameters are information received from the smart card token with or without extensions. They are run against the FilterMappingResolver according to the above rules. The following input filter parameters are supported by FilterMappingResolver : appletMajorVersion - The major version of the Coolkey applet on the token. appletMinorVersion - The minor version of the Coolkey applet on the token. keySet or tokenType keySet - can be set as an extension in the client request. Must match the value in the filter if the extension is specified. The keySet mapping resolver is meant for determining the keySet value when using external registration. The Key Set Mapping Resolver is necessary in the external registration environment when multiple key sets are supported (for example, different smart card token vendors).
The keySet value is needed for identifying the master key on TKS, which is crucial for establishing a Secure Channel. When a user's LDAP record is populated with a preset tokenType (TPS profile), it is not yet known which card will end up doing the enrollment, and therefore the keySet cannot be predetermined. The keySetMappingResolver helps solve the issue by allowing the keySet to be resolved before authentication. tokenType - tokenType can be set as an extension in the client request. It must match the value in the filter if the extension is specified. tokenType (also referred to as TPS Profile) is determined at this time for the internal registration environment. tokenATR - The token's Answer to Reset (ATR). tokenCUID - "start" and "end" define the range within which the Card Unique ID (CUID) of the token must fall to pass this filter. 2.5.2.6. TPS Roles The TPS supports the following roles by default: TPS Administrator - this role is allowed to: Manage TPS tokens View TPS certificates and activities Manage TPS users and groups Change general TPS configuration Manage TPS authenticators and connectors Configure TPS profiles and profile mappings Configure TPS audit logging TPS Agent - this role is allowed to: Configure TPS tokens View TPS certificates and activities Change the status of TPS profiles TPS Operator - this role is allowed to: View TPS tokens, certificates, and activities 2.5.3. TKS/TPS Shared Secret During TMS installation, a shared symmetric key is established between the Token Key Service and the Token Processing System. The purpose of this key is to wrap and unwrap session keys which are essential to Secure Channels. Note The shared secret key is currently only kept in a software cryptographic database. There are plans to support keeping the key on a Hardware Security Module (HSM) device in a future release of Red Hat Certificate System. Once this functionality is implemented, you will be instructed to run a Key Ceremony using tkstool to transfer the key to the HSM. 2.5.4. Enterprise Security Client (ESC) The Enterprise Security Client is an HTTP client application, similar to a web browser, that communicates with the TPS and handles smart card tokens from the client side. While an HTTPS connection is established between the ESC and the TPS, an underlying Secure Channel is also established between the token and the TMS within each TLS session.
[ "tokendb.allowedTransitions=0:1,0:2,0:3,0:6,3:2,3:6,4:1,4:2,4:3,4:6,6:7", "tps.operations.allowedTransitions=0:0,0:4,4:4,4:0,7:0", "Token states UNFORMATTED = Unformatted FORMATTED = Formatted (uninitialized) ACTIVE = Active SUSPENDED = Suspended (temporarily lost) PERM_LOST = Permanently lost DAMAGED = Physically damaged TEMP_LOST_PERM_LOST = Temporarily lost then permanently lost TERMINATED = Terminated Token state transitions FORMATTED.DAMAGED = This token has been physically damaged. FORMATTED.PERM_LOST = This token has been permanently lost. FORMATTED.SUSPENDED = This token has been suspended (temporarily lost). FORMATTED.TERMINATED = This token has been terminated. SUSPENDED.ACTIVE = This suspended (temporarily lost) token has been found. SUSPENDED.PERM_LOST = This suspended (temporarily lost) token has become permanently lost. SUSPENDED.TERMINATED = This suspended (temporarily lost) token has been terminated. SUSPENDED.FORMATTED = This suspended (temporarily lost) token has been found. ACTIVE.DAMAGED = This token has been physically damaged. ACTIVE.PERM_LOST = This token has been permanently lost. ACTIVE.SUSPENDED = This token has been suspended (temporarily lost). ACTIVE.TERMINATED = This token has been terminated. TERMINATED.UNFORMATTED = Reuse this token.", "RE_ENROLL=YES;RENEW=NO;FORCE_FORMAT=NO;PIN_RESET=NO;RESET_PIN_RESET_TO_NO=NO;RENEW_KEEP_OLD_ENC_CERTS=YES" ]
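As described in the section on customizing labels, the defaults shown above can be copied into the instance configuration and edited in place; a minimal sketch, where instance_name is a placeholder for your TPS instance:

# Copy the default labels into the instance configuration directory, then edit the copy.
cp /usr/share/pki/tps/conf/token-states.properties /var/lib/pki/instance_name/tps/conf/
# To revert to the default state and transition labels later, remove the edited copy.
rm /var/lib/pki/instance_name/tps/conf/token-states.properties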
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/manages-tokens
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/troubleshooting_openshift_data_foundation/making-open-source-more-inclusive
Chapter 23. Apache CXF Binding IDs
Chapter 23. Apache CXF Binding IDs
Table of Binding IDs
Table 23.1. Binding IDs for Message Bindings
CORBA: http://cxf.apache.org/bindings/corba
HTTP/REST: http://apache.org/cxf/binding/http
SOAP 1.1: http://schemas.xmlsoap.org/wsdl/soap/http
SOAP 1.1 w/ MTOM: http://schemas.xmlsoap.org/wsdl/soap/http?mtom=true
SOAP 1.2: http://www.w3.org/2003/05/soap/bindings/HTTP/
SOAP 1.2 w/ MTOM: http://www.w3.org/2003/05/soap/bindings/HTTP/?mtom=true
XML: http://cxf.apache.org/bindings/xformat
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/CXFDeployBindingAppx
30.5. Deployment Scenarios
30.5. Deployment Scenarios VDO can be deployed in a variety of ways to provide deduplicated storage for both block and file access and for both local and remote storage. Because VDO exposes its deduplicated storage as a standard Linux block device, it can be used with standard file systems, iSCSI and FC target drivers, or as unified storage. 30.5.1. iSCSI Target As a simple example, the entirety of the VDO storage target can be exported as an iSCSI Target to remote iSCSI initiators. Figure 30.3. Deduplicated Block Storage Target For more information on iSCSI Target, see Section 25.1, "Target Setup" . 30.5.2. File Systems If file access is desired instead, file systems can be created on top of VDO and exposed to NFS or CIFS users via either the Linux NFS server or Samba. Figure 30.4. Deduplicated NAS 30.5.3. LVM More feature-rich systems may make further use of LVM to provide multiple LUNs that are all backed by the same deduplicated storage pool. In Figure 30.5, "Deduplicated Unified Storage" , the VDO target is registered as a physical volume so that it can be managed by LVM. Multiple logical volumes ( LV1 to LV4 ) are created out of the deduplicated storage pool. In this way, VDO can support multiprotocol unified block/file access to the underlying deduplicated storage pool. Figure 30.5. Deduplicated Unified Storage Deduplicated unified storage design allows for multiple file systems to collectively use the same deduplication domain through the LVM tools. Also, file systems can take advantage of LVM snapshot, copy-on-write, and shrink or grow features, all on top of VDO. 30.5.4. Encryption Data security is critical today. More and more companies have internal policies regarding data encryption. Linux Device Mapper mechanisms such as DM-Crypt are compatible with VDO. Encrypting VDO volumes will help ensure data security, and any file systems above VDO still gain the deduplication feature for disk optimization. Note that applying encryption above VDO results in little if any data deduplication; encryption renders duplicate blocks different before VDO can deduplicate them. Figure 30.6. Using VDO with Encryption
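To make the file-system and LVM scenarios concrete, here is a minimal sketch; the two scenarios are alternatives for the same volume, and the backing device /dev/sdb, the volume, mount point, and volume group names, and the sizes are all placeholder assumptions.

# Create a deduplicated VDO volume on a backing device.
vdo create --name=vdo0 --device=/dev/sdb --vdoLogicalSize=10T
# File system scenario: format and mount the VDO volume, then export it over NFS or Samba.
mkfs.xfs -K /dev/mapper/vdo0
mount -o discard /dev/mapper/vdo0 /mnt/vdo0
# LVM scenario: register the VDO volume as a physical volume and carve out logical volumes.
pvcreate /dev/mapper/vdo0
vgcreate vdo_vg /dev/mapper/vdo0
lvcreate -n lv1 -L 2T vdo_vg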
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/vdo-ig-deployment
Chapter 2. Major differences between Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11
Chapter 2. Major differences between Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11 If you are migrating your Java applications from Red Hat build of OpenJDK 8, first ensure that you familiarize yourself with the changes that were introduced in Red Hat build of OpenJDK 11. These changes might require that you reconfigure your existing Red Hat build of OpenJDK installation before you migrate to Red Hat build of OpenJDK 17. Note This chapter is relevant only if you currently use Red Hat build of OpenJDK 8. You can ignore this chapter if you already use Red Hat build of OpenJDK 11. One of the major differences between Red Hat build of OpenJDK 8 and later versions is the inclusion of a module system in Red Hat build of OpenJDK 11 or later. If you are migrating from Red Hat build of OpenJDK 8, consider moving your application's libraries and modules from the Red Hat build of OpenJDK 8 class path to the module path in Red Hat build of OpenJDK 11 or later. This change can improve the class-loading capabilities of your application. Red Hat build of OpenJDK 11 and later versions include new features and enhancements that can improve the performance of your application, such as enhanced memory usage, improved startup speed, and increased container integration. Note Some features might differ between Red Hat build of OpenJDK and other upstream community or third-party versions of OpenJDK. For example: The Shenandoah garbage collector is available in all versions of Red Hat build of OpenJDK, but this feature might not be available by default in other builds of OpenJDK. JDK Flight Recorder (JFR) support in OpenJDK 8 has been available from version 8u262 onward and enabled by default from version 8u272 onward, but JFR might be disabled in certain builds. Because JFR functionality was backported from the open source version of JFR in OpenJDK 11, the JFR implementation in Red Hat build of OpenJDK 8 is largely similar to JFR in Red Hat build of OpenJDK 11 or later. This JFR implementation is different from JFR in Oracle JDK 8, so users who want to migrate from Oracle JDK to Red Hat build of OpenJDK 8 or later need to be aware of the command-line options for using JFR. 32-bit builds of OpenJDK are generally unsupported in OpenJDK 8 or later, and they might not be available in later versions. 32-bit builds are unsupported in all versions of Red Hat build of OpenJDK. 2.1. Cryptography and security Certain minor cryptography and security differences exist between Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11. However, both versions of Red Hat build of OpenJDK have many similar cryptography and security behaviors. Red Hat builds of OpenJDK use system-wide certificates, and each build obtains its list of disabled cryptographic algorithms from a system's global configuration settings. These settings are common to all versions of Red Hat build of OpenJDK, so you can easily change from Red Hat build of OpenJDK 8 to Red Hat build of OpenJDK 11 or later. In FIPS mode, Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11 releases are self-configured, so that either release uses the same security providers at startup. The TLS stacks in Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11 are identical, because the SunJSSE engine from Red Hat build of OpenJDK 11 was backported to Red Hat build of OpenJDK 8. Both Red Hat build of OpenJDK versions support the TLS 1.3 protocol.
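For example, an OpenJDK 8 TLS client can be opted in to TLSv1.3 explicitly through the system property covered in the comparison that follows; a sketch, where the JAR name is a placeholder:

# OpenJDK 8: enable TLSv1.3 for client connections explicitly (OpenJDK 11 clients use it by default).
java -Djdk.tls.client.protocols=TLSv1.3 -jar my-tls-client.jar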
The following minor cryptography and security differences exist between Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11: Red Hat build of OpenJDK 8 Red Hat build of OpenJDK 11 TLS clients do not use TLSv1.3 for communication with the target server by default. You can change this behavior by setting the jdk.tls.client.protocols system property to -Djdk.tls.client.protocols=TLSv1.3 . TLS clients use TLSv1.3 by default. This release does not support the use of the X25519 and X448 elliptic curves in the Diffie-Hellman key exchange. This release supports the use of the X25519 and X448 elliptic curves in the Diffie-Hellman key exchange. This release still supports the legacy KRB5-based cipher suites, which are disabled for security reasons. You can enable these cipher suites by changing the jdk.tls.client.cipherSuites and jdk.tls.server.cipherSuites system properties. This release does not support the legacy KRB5-based cipher suites. This release does not support the Datagram Transport Layer Security (DTLS) protocol. This release supports the DTLS protocol. The max_fragment_length extension, which is used by DTLS, is not available for TLS clients. The max_fragment_length extension is available for both clients and servers. 2.2. Garbage collector For garbage collection, Red Hat build of OpenJDK 8 uses the Parallel collector by default, whereas Red Hat build of OpenJDK 11 uses the Garbage-First (G1) collector by default. Before you choose a garbage collector, consider the following details: If you want to improve throughput, use the Parallel collector. The Parallel collector maximizes throughput but ignores latency, which means that garbage collection pauses could become an issue if you want your application to have reasonable response times. However, if your application is performing batch processing and you are not concerned about pause times, the Parallel collector is the best choice. You can switch to the Parallel collector by setting the -XX:+UseParallelGC JVM option. If you want a balance between throughput and latency, use the G1 collector. The G1 collector can achieve great throughput while providing reasonable latencies with pause times of a few hundred milliseconds. If you notice throughput issues when migrating applications from Red Hat build of OpenJDK 8 to Red Hat build of OpenJDK 11, you can switch to the Parallel collector as described above. If you want low-latency garbage collection, use the Shenandoah collector. You can select the garbage collector type that you want to use by specifying the -XX:+<gc_type> JVM option at startup. For example, the -XX:+UseParallelGC option switches to the Parallel collector. 2.3. Garbage collector logging options Red Hat build of OpenJDK 11 includes a new and more powerful logging framework that works more effectively than the old logging framework. Red Hat build of OpenJDK 11 also includes unified JVM logging options and unified GC logging options. The logging system for Red Hat build of OpenJDK 11 activates the -XX:+PrintGCTimeStamps and -XX:+PrintGCDateStamps options by default. Because the logging format in Red Hat build of OpenJDK 11 is different from Red Hat build of OpenJDK 8, you might need to update any of your code that parses garbage collector logs. Modified options in Red Hat build of OpenJDK 11 The old logging framework options are deprecated in Red Hat build of OpenJDK 11. These old options are still available only as aliases for the new logging framework options.
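A quick way to see this aliasing in action is the following sketch; -version stands in for any application here:

# OpenJDK 11 accepts the legacy flag, prints a deprecation warning, and maps it to unified logging.
java -XX:+PrintGCDetails -version
# Preferred unified-logging equivalent (quoted so the shell does not expand the asterisk).
java '-Xlog:gc*' -version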
If you want to work more effectively with Red Hat build of OpenJDK 11 or later, use the new logging framework options. The following table outlines the changes in garbage collector logging options between Red Hat build of OpenJDK versions 8 and 11: Options in Red Hat build of OpenJDK 8 Options in Red Hat build of OpenJDK 11 -verbose:gc -Xlog:gc -XX:+PrintGC -Xlog:gc -XX:+PrintGCDetails -Xlog:gc* or -Xlog:gc+$tags -Xloggc:$FILE -Xlog:gc:file=$FILE When using the -XX:+PrintGCDetails option, pass the -Xlog:gc* flag, where the asterisk ( * ) activates more detailed logging. Alternatively, you can pass the -Xlog:gc+$tags flag. When using the -Xloggc option, append the :file=$FILE suffix to redirect log output to the specified file. For example -Xlog:gc:file=$FILE . Removed options in Red Hat build of OpenJDK 11 Red Hat build of OpenJDK 11 does not include the following options, which were deprecated in Red Hat build of OpenJDK 8: -Xincgc -XX:+CMSIncrementalMode -XX:+UseCMSCompactAtFullCollection -XX:+CMSFullGCsBeforeCompaction -XX:+UseCMSCollectionPassing Red Hat build of OpenJDK 11 also removes the following options because the printing of timestamps and datestamps is automatically enabled: -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps Note In Red Hat build of OpenJDK 11, unless you specify the -XX:+IgnoreUnrecognizedVMOptions option, the use of any of the preceding removed options results in a startup failure. Additional resources For more information about the common framework for unified JVM logging and the format of Xlog options, see JEP 158: Unified JVM Logging . For more information about deprecated and removed options, see JEP 214: Remove GC Combinations Deprecated in JDK 8 . For more information about unified GC logging, see JEP 271: Unified GC Logging . 2.4. OpenJDK graphics Before version 8u252, Red Hat build of OpenJDK 8 used Pisces as the default rendering engine. From version 8u252 onward, Red Hat build of OpenJDK 8 uses Marlin as the new default rendering engine. Red Hat build of OpenJDK 11 and later releases also use Marlin by default. Marlin improves the handling of intensive application graphics. Because the rendering engines produce the same results, users should not observe any changes apart from improved performance. 2.5. Webstart and applets You can use Java WebStart by using the IcedTea-Web plug-in with Red Hat build of OpenJDK 8 or Red Hat build of OpenJDK 11 on RHEL 7, RHEL 8, and Microsoft Windows operating systems. The IcedTea-Web plug-in requires that Red Hat build of OpenJDK 8 is installed as a dependency on the system. Applets are not supported on any version of Red Hat build of OpenJDK. Even though some applets can be run on RHEL 7 by using the IcedTea-web plug-in with OpenJDK 8 on a Netscape Plugin Application Programming Interface (NPAPI) browser, Red Hat build of OpenJDK does not support this behavior. Note The upstream community version of OpenJDK does not support applets or Java Webstart. Support for these technologies is deprecated and they are not recommended for use. 2.6. JPMS The Java Platform Module System (JPMS), which was introduced in OpenJDK 9, limits or prevents access to non-public APIs. JPMS also impacts how you can start and compile your Java application (for example, whether you use a class path or a module path). Internal modules By default, Red Hat build of OpenJDK 11 restricts but still permits access to JDK internal modules.
This means that most applications can continue to work without requiring changes, but these applications will emit a warning. As a workaround for this restriction, you can enable your application to access an internal package by passing a ‐‐add-opens <module-name>/<package-in-module>=ALL-UNNAMED option to the java command. For example: Additionally, you can check illegal access cases by passing the ‐‐illegal-access=warn option to the java command. This option changes the default behavior of Red Hat build of OpenJDK. ClassLoader The JPMS refactoring changes the ClassLoader hierarchy in Red Hat build of OpenJDK 11. In Red Hat build of OpenJDK 11, the system class loader is no longer an instance of URLClassLoader . Existing application code that invokes ClassLoader::getSystemClassLoader and casts the result to a URLClassLoader in Red Hat build of OpenJDK 11 will result in a runtime exception. In Red Hat build of OpenJDK 8, when you create a class loader, you can pass null as the parent of this class loader instance. However, in Red Hat build of OpenJDK 11, applications that pass null as the parent of a class loader might prevent the class loader from locating platform classes. Red Hat build of OpenJDK 11 includes a new class loader that can control the loading of certain classes. This improves the way that a class loader can locate all of its required classes. In Red Hat build of OpenJDK 11, when you create a class loader instance, you can set the platform class loader as its parent by using the ClassLoader.getPlatformClassLoader() API. Additional resources For more information about JPMS, see JEP 261: Module System . 2.7. Extension and endorsed override mechanisms In Red Hat build of OpenJDK 11, both the extension mechanism, which supported optional packages, and the endorsed standards override mechanism are no longer available. These changes mean that any libraries that are added to the <JAVA_HOME>/lib/ext or <JAVA_HOME>/lib/endorsed directory are no longer used, and Red Hat build of OpenJDK 11 generates an error if these directories exist. Additional resources For more information about the removed mechanisms, see JEP 220: Modular Run-Time Images . 2.8. JFR functionality JDK Flight Recorder (JFR) support was backported to Red Hat build of OpenJDK 8 starting from version 8u262. JFR support was subsequently enabled by default from Red Hat build of OpenJDK 8u272 onward. Note The term backporting describes when Red Hat takes an update from a more recent version of upstream software and applies that update to an older version of the software that Red Hat distributes. Backported JFR features The JFR backport to Red Hat build of OpenJDK 8 included all of the following features: A large number of events that are also available in Red Hat build of OpenJDK 11 Command-line tools such as jfr and the Java diagnostic command ( jcmd ) that behave consistently across Red Hat build of OpenJDK versions 8 and 11 The Java Management Extensions (JMX) API that you can use to enable JFR by using the JMX beans interfaces either programmatically or through jcmd The jdk.jfr namespace Note The JFR APIs in the jdk.jfr namespace are not considered part of the Java specification in Red Hat build of OpenJDK 8, but these APIs are part of the Java specification in Red Hat build of OpenJDK 11. Because the JFR API is available in all supported Red Hat build of OpenJDK versions, applications that use JFR do not require any special configuration to use the JFR APIs in Red Hat build of OpenJDK 8 and later versions. 
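To illustrate the JFR tooling mentioned above, the following sketch starts a recording both at JVM startup and on an already running process. The jar name, process ID, and file paths are placeholders, not values from the document.

```bash
# Start a 60-second recording when the JVM launches (OpenJDK 8u262+ or 11).
java -XX:StartFlightRecording=duration=60s,filename=startup.jfr -jar app.jar

# Or attach to a running JVM; replace 12345 with the target process ID.
jcmd 12345 JFR.start name=adhoc duration=60s filename=/tmp/adhoc.jfr

# Summarize the captured events with the jfr command-line tool.
jfr summary /tmp/adhoc.jfr
```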
JDK Mission Control, which is distributed separately, was also updated to be compatible with Red Hat build of OpenJDK 8. Applications that need to be compatible with other OpenJDK versions If your applications need to be compatible with any of the following OpenJDK versions, you might need to adapt these applications: OpenJDK versions earlier than 8u262 OpenJDK versions from other vendors that do not support JFR Oracle JDK To aid this effort, Red Hat has developed a special compatibility layer that provides an empty implementation of JFR, which behaves as if JFR was disabled at runtime. For more information about the JFR compatibility API, see openjdk8-jfr-compat . You can install the resulting .jar file in the jre/lib/ext directory of an OpenJDK 8 distribution. Some applications might need to be updated if these applications were filtering out OpenJDK 8 by checking only for the version number instead of querying the MBeans interface. 2.9. JRE and headless packages All Red Hat build of OpenJDK versions for RHEL platforms are separated into the following types of packages. The following list of package types is sorted in order of minimality, starting with the most minimal. Java Runtime Environment (JRE) headless Provides the library only without support for graphical user interface but supports offline rendering of images JRE Adds the necessary libraries to run for full graphical clients JDK Includes tooling and compilers Red Hat build of OpenJDK versions for Windows platforms do not support headless packages. However, the Red Hat build of OpenJDK packages for Windows platforms are also divided into JRE and JDK components, similar to the packages for RHEL platforms. Note The upstream community version of OpenJDK 11 or later does not separate packages in this way and instead provides one monolithic JDK installation. OpenJDK 9 introduced a modularised version of the JDK class libraries divided by their namespaces. From Red Hat build of OpenJDK 11 onward, these libraries are packaged into jmods modules. For more information, see Jmods . 2.10. Jmods OpenJDK 9 introduced jmods , which is a modularized version of the JDK class libraries, where each module groups classes from a set of related packages. You can use the jlink tool to create derivative runtimes that include only some subset of the modules that are needed to run selected applications. From Red Hat build of OpenJDK 11 onward, Red Hat build of OpenJDK versions for RHEL platforms place the jmods files into a separate RPM package that is not installed by default. If you want to create standalone OpenJDK images for your applications by using jlink , you must manually install the jmods package (for example, java-11-openjdk-jmods ). Note On RHEL platforms, OpenJDK is dynamically linked against system libraries, which means the resulting jlink images are not portable across different versions of RHEL or other systems. If you want to ensure portability, you must use the portable builds of Red Hat build of OpenJDK that are released through the Red Hat Customer Portal. For more information, see Installing Red Hat build of OpenJDK on RHEL by using an archive . 2.11. Deprecated and removed functionality in Red Hat build of OpenJDK 11 Red Hat build of OpenJDK 11 has either deprecated or removed some features that Red Hat build of OpenJDK 8 supports. 
CORBA Red Hat build of OpenJDK 11 does not support the following Common Object Request Broker Architecture (CORBA) tools: Idlj orbd servertool tnamesrv Logging framework Red Hat build of OpenJDK 11 does not support the following APIs: java.util.logging.LogManager.addPropertyChangeListener java.util.logging.LogManager.removePropertyChangeListener java.util.jar.Pack200.Packer.addPropertyChangeListener java.util.jar.Pack200.Packer.removePropertyChangeListener java.util.jar.Pack200.Unpacker.addPropertyChangeListener java.util.jar.Pack200.Unpacker.removePropertyChangeListener Java EE modules Red Hat build of OpenJDK 11 does not support the following APIs: java.activation java.corba java.se.ee (aggregator) java.transaction java.xml.bind java.xml.ws java.xml.ws.annotation java.awt.peer Red Hat build of OpenJDK 11 sets the java.awt.peer package as internal, which means that applications cannot automatically access this package by default. Because of this change, Red Hat build of OpenJDK 11 removed a number of classes and methods that refer to the peer API, such as the Component.getPeer method. The following list outlines the most common use cases for the peer API: Writing of new graphics ports Checking if a component can be displayed Checking if a component is either lightweight or backed by an operating system native UI component resource such as an Xlib XWindow From Java 1.1 onward, the Component.isDisplayable() method provides the functionality to check whether a component can be displayed. From Java 1.2 onward, the Component.isLightweight() method provides the functionality to check whether a component is lightweight. javax.security and java.lang APIs Red Hat build of OpenJDK 11 does not support the following APIs: javax.security.auth.Policy java.lang.Runtime.runFinalizersOnExit(boolean) java.lang.SecurityManager.checkAwtEventQueueAccess() java.lang.SecurityManager.checkMemberAccess(java.lang.Class,int) java.lang.SecurityManager.checkSystemClipboardAccess() java.lang.SecurityManager.checkTopLevelWindow(java.lang.Object) java.lang.System.runFinalizersOnExit(boolean) java.lang.Thread.destroy() java.lang.Thread.stop(java.lang.Throwable) Sun.misc The sun.misc package has always been considered internal and unsupported. In Red Hat build of OpenJDK 11, the following packages are deprecated or removed: sun.misc.BASE64Encoder sun.misc.BASE64Decoder sun.misc.Unsafe sun.reflect.Reflection Consider the following information: Red Hat build of OpenJDK 8 added the java.util.Base64 package as a replacement for the sun.misc.BASE64Encoder and sun.misc.BASE64Decoder APIs. You can use the java.util.Base64 package rather than these APIs, which have been removed from Red Hat build of OpenJDK 11. Red Hat build of OpenJDK 11 deprecates the sun.misc.Unsafe package, which is scheduled for removal. For more information about a new set of APIs that you can use as a replacement for sun.misc.Unsafe , see JEP 193 . Red Hat build of OpenJDK 11 removes the sun.reflect.Reflection package. For more information about new functionality for stack walking that replaces the sun.reflect.Reflection.getCallerClass method, see JEP 259 . Additional resources For more information about the removed Java EE modules and COBRA modules and potential replacements for these modules, see JEP 320: Remove the Java EE and CORBA Modules . 2.12. Additional resources (or steps) For more information about Red Hat build of OpenJDK 8 features, see JDK 8 Features . 
For more information about OpenJDK 9 features inherited by Red Hat build of OpenJDK 11, see JDK 9 . For more information about OpenJDK 10 features inherited by Red Hat build of OpenJDK 11, see JDK 10 . For more information about Red Hat build of OpenJDK 11 features, see JDK 11 . For more information about a list of all available JEPs, see JEP 0: JEP Index . For more information about the changes introduced in version 17, see Major differences between Red Hat build of OpenJDK 11 and Red Hat build of OpenJDK 17 .
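Following up on the jmods packaging described in section 2.10, the sketch below shows one way to install the jmods package on RHEL and build a trimmed runtime with jlink. The module list and output path are illustrative assumptions, and the package manager may be yum on older RHEL releases.

```bash
# Install the separately packaged jmods files (use yum instead of dnf on older RHEL releases).
dnf install java-11-openjdk-jmods

# Build a reduced runtime image containing only the modules the application needs.
jlink --add-modules java.base,java.logging --output /opt/myapp-runtime

# Confirm which modules ended up in the image.
/opt/myapp-runtime/bin/java --list-modules
```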
[ "--add-opens java.base/jdk.internal.math=ALL-UNNAMED" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/migrating_to_red_hat_build_of_openjdk_17_from_earlier_versions/differences_8_11
5.4.5. Creating Snapshot Volumes
5.4.5. Creating Snapshot Volumes Note As of the Red Hat Enterprise Linux 6.4 release, LVM supports thinly-provisioned snapshots. For information on creating thinly provisioned snapshot volumes, see Section 5.4.6, "Creating Thinly-Provisioned Snapshot Volumes" . Use the -s argument of the lvcreate command to create a snapshot volume. A snapshot volume is writable. Note LVM snapshots are not supported across the nodes in a cluster. You cannot create a snapshot volume in a clustered volume group. As of the Red Hat Enterprise Linux 6.1 release, however, if you need to create a consistent backup of data on a clustered logical volume you can activate the volume exclusively and then create the snapshot. For information on activating logical volumes exclusively on one node, see Section 5.7, "Activating Logical Volumes on Individual Nodes in a Cluster" . Note As of the Red Hat Enterprise Linux 6.1 release, LVM snapshots are supported for mirrored logical volumes. As of the Red Hat Enterprise Linux 6.3 release, snapshots are supported for RAID logical volumes. For information on RAID logical volumes, see Section 5.4.16, "RAID Logical Volumes" . As of the Red Hat Enterprise Linux 6.5 release, LVM does not allow you to create a snapshot volume that is larger than the size of the origin volume plus needed metadata for the volume. If you specify a snapshot volume that is larger than this, the system will create a snapshot volume that is only as large as will be needed for the size of the origin. By default, a snapshot volume is skipped during normal activation commands. For information on controlling the activation of a snapshot volume, see Section 5.4.17, "Controlling Logical Volume Activation" . The following command creates a snapshot logical volume that is 100 MB in size named /dev/vg00/snap . This creates a snapshot of the origin logical volume named /dev/vg00/lvol1 . If the original logical volume contains a file system, you can mount the snapshot logical volume on an arbitrary directory in order to access the contents of the file system to run a backup while the original file system continues to get updated. After you create a snapshot logical volume, specifying the origin volume on the lvdisplay command yields output that includes a list of all snapshot logical volumes and their status (active or inactive). The following example shows the status of the logical volume /dev/new_vg/lvol0 , for which a snapshot volume /dev/new_vg/newvgsnap has been created. The lvs command, by default, displays the origin volume and the current percentage of the snapshot volume being used for each snapshot volume. The following example shows the default output for the lvs command for a system that includes the logical volume /dev/new_vg/lvol0 , for which a snapshot volume /dev/new_vg/newvgsnap has been created. Warning Because the snapshot increases in size as the origin volume changes, it is important to monitor the percentage of the snapshot volume regularly with the lvs command to be sure it does not fill. A snapshot that is 100% full is lost completely, as a write to unchanged parts of the origin would be unable to succeed without corrupting the snapshot. As of the Red Hat Enterprise Linux 6.2 release, there are two new features related to snapshots. First, in addition to the snapshot itself being invalidated when full, any mounted file systems on that snapshot device are forcibly unmounted, avoiding the inevitable file system errors upon access to the mount point. 
Second, you can specify the snapshot_autoextend_threshold option in the lvm.conf file. This option allows automatic extension of a snapshot whenever the remaining snapshot space drops below the threshold you set. This feature requires that there be unallocated space in the volume group. As of the Red Hat Enterprise Linux 6.5 release, LVM does not allow you to create a snapshot volume that is larger than the size of the origin volume plus needed metadata for the volume. Similarly, automatic extension of a snapshot will not increase the size of a snapshot volume beyond the maximum calculated size that is necessary for the snapshot. Once a snapshot has grown large enough to cover the origin, it is no longer monitored for automatic extension. Information on setting snapshot_autoextend_threshold and snapshot_autoextend_percent is provided in the lvm.conf file itself. For information about the lvm.conf file, see Appendix B, The LVM Configuration Files .
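The sketch below pulls the earlier points together: mounting the snapshot for a backup, watching its usage with lvs, and checking the autoextend settings in lvm.conf. The mount point and the threshold values shown in the comments are illustrative, not values taken from this section.

```bash
# Mount the snapshot read-only so a backup can run while the origin stays in use.
mkdir -p /mnt/snapbackup
mount -o ro /dev/vg00/snap /mnt/snapbackup

# Monitor the Snap% column regularly so the snapshot does not reach 100%.
lvs vg00

# Check the automatic extension settings; typical illustrative values are
# snapshot_autoextend_threshold = 70 and snapshot_autoextend_percent = 20.
grep -E 'snapshot_autoextend_(threshold|percent)' /etc/lvm/lvm.conf
```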
[ "lvcreate --size 100M --snapshot --name snap /dev/vg00/lvol1", "lvdisplay /dev/new_vg/lvol0 --- Logical volume --- LV Name /dev/new_vg/lvol0 VG Name new_vg LV UUID LBy1Tz-sr23-OjsI-LT03-nHLC-y8XW-EhCl78 LV Write Access read/write LV snapshot status source of /dev/new_vg/newvgsnap1 [active] LV Status available # open 0 LV Size 52.00 MB Current LE 13 Segments 1 Allocation inherit Read ahead sectors 0 Block device 253:2", "lvs LV VG Attr LSize Origin Snap% Move Log Copy% lvol0 new_vg owi-a- 52.00M newvgsnap1 new_vg swi-a- 8.00M lvol0 0.20" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/snapshot_command
Chapter 4. Resolved issues
Chapter 4. Resolved issues There are no resolved issues for this release. For details of any security fixes in this release, see the errata links in Advisories related to this release .
null
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_4_release_notes/resolved_issues
Preface
Preface Learn how to configure Red Hat Developer Hub for production to work in your IT ecosystem by adding custom config maps and secrets.
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/configuring/pr01
Chapter 15. Importing and exporting the database
Chapter 15. Importing and exporting the database Red Hat Single Sign-On includes the ability to export and import its entire database. You can migrate the whole Red Hat Single Sign-On database from one environment to another or migrate to another database. The export/import triggers at server boot time, and its parameters pass through Java properties. Note Because import and export trigger at server startup, take no actions on the server or the database during export/import. You can export/import your database to: A directory on the filesystem. A single JSON file on your filesystem. When importing from a directory, the filenames must follow this naming convention: <REALM_NAME>-realm.json. For example, "acme-roadrunner-affairs-realm.json" for the realm named "acme-roadrunner-affairs". <REALM_NAME>-users-<INDEX>.json. For example, "acme-roadrunner-affairs-users-0.json" for the first user's file of the realm named "acme-roadrunner-affairs" If you export to a directory, you can specify the number of users stored in each JSON file. Note Exporting into single files can produce large files, so if your database contains more than 500 users, export to a directory and not a single file. Exporting many users into a directory performs optimally as the directory provider uses a separate transaction for each "page" (a file of users). The default count of users per file and per transaction is fifty, but you can override this number. See keycloak.migration.usersPerFile for more information. Exporting to or importing from a single file uses one transaction, which can impair performance if the database contains many users. To export into an unencrypted directory: To export into single JSON file: Similarly, for importing,use -Dkeycloak.migration.action=import rather than export . For example: Other command line options include: -Dkeycloak.migration.realmName Use this property to export one specifically named realm. If this parameter is not specified, all realms export. -Dkeycloak.migration.usersExportStrategy This property specifies where users export to. Possible values include: DIFFERENT_FILES - Users export into different files subject to the maximum number of users per file . DIFFERENT_FILES is the default value for this property. SKIP - Red Hat Single Sign-On skips exporting users. REALM_FILE - Users export to the same file with the realm settings. The file is similar to "foo-realm.json" with realm data and users. SAME_FILE - Users export to the same file but different from the realm file. The result is similar to "foo-realm.json" with realm data and "foo-users.json" with users. -Dkeycloak.migration.usersPerFile This property specifies the number of users per file and database transaction. By default, its value is fifty. Red Hat Single Sign-On uses this property if keycloak.migration.usersExportStrategy is DIFFERENT_FILES. -Dkeycloak.migration.strategy Red Hat Single Sign-On uses this property when importing. It specifies how to proceed when a realm with the same name already exists in the database. Possible values are: IGNORE_EXISTING - Do not import a realm if a realm with the same name already exists. OVERWRITE_EXISTING - Remove the existing realm and import the realm again with new data from the JSON file. Use this value to migrate from one environment to another fully. If you are importing files that are not from a Red Hat Single Sign-On export, use the keycloak.import option. If you are importing more than one realm file, specify a comma-separated list of filenames. 
Importing a comma-separated list of filenames in this way works because the import happens after Red Hat Single Sign-On initializes the master realm. Examples: -Dkeycloak.import=/tmp/realm1.json -Dkeycloak.import=/tmp/realm1.json,/tmp/realm2.json Note You cannot use the keycloak.import parameter with keycloak.migration.X parameters. If you use these parameters together, Red Hat Single Sign-On ignores the keycloak.import parameter. The keycloak.import mechanism ignores the realms which already exist in Red Hat Single Sign-On. The keycloak.import mechanism is convenient for development purposes, but if more flexibility is needed, use the keycloak.migration.X parameters. 15.1. Admin console export/import Red Hat Single Sign-On can import and export most resources from the Admin Console. Red Hat Single Sign-On does not support the export of users. Note Red Hat Single Sign-On masks attributes containing secrets or private information in the export file. Export files from the Admin Console are not suitable for backups or data transfer between servers. Only boot-time exports are suitable for backups or data transfer between servers. You can use the files created during an export to import from the Admin Console. You can export from one realm and import to another realm, or you can export from one server and import to another. Note The admin console export/import permits one realm per file only. Warning The Admin Console import can overwrite resources. Use this feature with caution, especially on a production server. JSON files from the Admin Console Export operation are not appropriate for data import because they contain invalid values for secrets. Warning You can use the Admin Console to export clients, groups, and roles. If the database in your realm contains many clients, groups, and roles, the export may take a long time to conclude, and the Red Hat Single Sign-On server may not respond to user requests. Use this feature with caution, especially on a production server.
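As a combined illustration of the boot-time options above, the following sketch exports a single realm to a directory with a custom user count per file, and then imports standalone realm files. The realm name and paths are placeholders chosen for this example.

```bash
# Export only "myrealm" to a directory, 500 users per file.
bin/standalone.sh \
  -Dkeycloak.migration.action=export \
  -Dkeycloak.migration.provider=dir \
  -Dkeycloak.migration.dir=/tmp/keycloak-export \
  -Dkeycloak.migration.realmName=myrealm \
  -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES \
  -Dkeycloak.migration.usersPerFile=500

# Import realm files that were not produced by an export; existing realms are ignored.
bin/standalone.sh -Dkeycloak.import=/tmp/realm1.json,/tmp/realm2.json
```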
[ "bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=<DIR TO EXPORT TO>", "bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=singleFile -Dkeycloak.migration.file=<FILE TO EXPORT TO>", "bin/standalone.sh -Dkeycloak.migration.action=import -Dkeycloak.migration.provider=singleFile -Dkeycloak.migration.file=<FILE TO IMPORT> -Dkeycloak.migration.strategy=OVERWRITE_EXISTING" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_administration_guide/assembly-exporting-importing_server_administration_guide
4.2. Logging Into the Piranha Configuration Tool
4.2. Logging Into the Piranha Configuration Tool When configuring LVS, you should always begin by configuring the primary router with the Piranha Configuration Tool . To do this, verify that the piranha-gui service is running and an administrative password has been set, as described in Section 2.2, "Setting a Password for the Piranha Configuration Tool " . If you are accessing the machine locally, you can open http://localhost:3636 in a Web browser to access the Piranha Configuration Tool . Otherwise, enter the hostname or real IP address of the server followed by :3636 . Once the browser connects, you will see the screen shown in Figure 4.1, "The Welcome Panel" . Figure 4.1. The Welcome Panel Click the Login button and enter piranha for the Username and the administrative password you created in the Password field. The Piranha Configuration Tool is made of four main screens or panels . In addition, the Virtual Servers panel contains four subsections . The CONTROL/MONITORING panel is the first panel after the login screen.
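A brief sketch of the command-line preparation implied above, before opening the web interface on port 3636. The commands follow the init-script conventions of that release and are shown as an assumption rather than a verbatim procedure from this section.

```bash
# Set the administrative password for the Piranha Configuration Tool.
/usr/sbin/piranha-passwd

# Make sure the web interface is running, then browse to http://localhost:3636.
service piranha-gui start
service piranha-gui status
```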
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-piranha-login-vsa
Appendix B. Converting a File System from GFS to GFS2
Appendix B. Converting a File System from GFS to GFS2 Since the Red Hat Enterprise Linux 6 release does not support GFS file systems, you must upgrade any existing GFS file systems to GFS2 file systems with the gfs2_convert command. Note that you must perform this conversion procedure on a Red Hat Enterprise Linux 5 system before upgrading to Red Hat Enterprise Linux 6. Note For information on upgrading Red Hat Enterprise Linux 5 with a GFS file system to Red Hat Enterprise Linux 7, see How to upgrade from RHEL 5 with a gfs or gfs2 filesystem to RHEL6 or RHEL 7? . Warning Before converting the GFS file system, you must back up the file system, since the conversion process is irreversible and any errors encountered during the conversion can result in the abrupt termination of the program and consequently an unusable file system. Before converting the GFS file system, you must use the gfs_fsck command to check the file system and fix any errors. If the conversion from GFS to GFS2 is interrupted by a power failure or any other issue, restart the conversion tool. Do not attempt to execute the fsck.gfs2 command on the file system until the conversion is complete. When converting full or nearly full file systems, it is possible that there will not be enough space available to fit all the GFS2 file system data structures. In such cases, the size of all the journals is reduced uniformly such that everything fits in the available space. B.1. Conversion of Context-Dependent Path Names GFS2 file systems do not provide support for Context-Dependent Path Names (CDPNs), which allow you to create symbolic links that point to variable destination files or directories. To achieve the same functionality as CDPNs in GFS2 file systems, you can use the bind option of the mount command. The gfs2_convert command identifies CDPNs and replaces them with empty directories with the same name. In order to configure bind mounts to replace the CDPNs, however, you need to know the full paths of the link targets of the CDPNs you are replacing. Before converting your file system, you can use the find command to identify the links. The following command lists the symlinks that point to a hostname CDPN: Similarly, you can execute the find command for other CDPNs ( mach , os , sys , uid , gid , jid ). Note that since CDPN names can be of the form @hostname or {hostname} , you will need to run the find command for each variant. For more information on bind mounts and context-dependent pathnames in GFS2, see Section 4.12, "Bind Mounts and Context-Dependent Path Names" .
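The overall conversion flow described above might look like the following sketch on a Red Hat Enterprise Linux 5 system. The device name and directory paths are placeholders, not values from this appendix.

```bash
# Logical volume holding the GFS file system (placeholder name). Back it up first:
# the conversion is irreversible.
DEV=/dev/vg01/gfs_lv

# Check and repair the unmounted GFS file system, then convert it to GFS2.
gfs_fsck "$DEV"
gfs2_convert "$DEV"

# After conversion, replace a former @hostname CDPN with a bind mount to its real target.
mount --bind /gfs2_data/log_for_this_host /mnt/gfs2/log
```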
[ "find /mnt/gfs -lname @hostname /mnt/gfs/log" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/gfs_upgrade
16.4. Configuration Examples
16.4. Configuration Examples 16.4.1. Enabling SELinux Labeled NFS Support The following example demonstrates how to enable SELinux labeled NFS support. This example assumes that the nfs-utils package is installed, that the SELinux targeted policy is used, and that SELinux is running in enforcing mode. Note Steps 1-3 must be performed on the NFS server, nfs-srv . If the NFS server is running, stop it: Confirm that the server is stopped: Edit the /etc/sysconfig/nfs file to set the RPCNFSDARGS flag to "-V 4.2" : Start the server again and confirm that it is running. The output will contain the information below; only the time stamp will differ: On the client side, mount the NFS server: All SELinux labels are now successfully passed from the server to the client: Note If you enable labeled NFS support for home directories or other content, the content will be labeled the same as it was on an EXT file system. Also note that mounting systems with different versions of NFS, or attempting to mount a server that does not support labeled NFS, could cause errors to be returned.
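To make the label pass-through easier to see, the sketch below creates a distinctively labeled file on the server and checks it from a client mounted with NFS version 4.2. The export path and mount point are placeholders introduced for this example.

```bash
# On the NFS server (nfs-srv): create a test file and give it a distinctive label.
touch /export/testfile
chcon -t svirt_image_t /export/testfile

# On the client: mount with NFS version 4.2 and confirm the label was preserved.
mount -o v4.2 nfs-srv:/export /mnt/nfs
ls -Z /mnt/nfs/testfile
```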
[ "systemctl stop nfs", "systemctl status nfs nfs-server.service - NFS Server Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled) Active: inactive (dead)", "Optional arguments passed to rpc.nfsd. See rpc.nfsd(8) RPCNFSDARGS=\"-V 4.2\"", "systemctl start nfs", "systemctl status nfs nfs-server.service - NFS Server Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled) Active: active (exited) since Wed 2013-08-28 14:07:11 CEST; 4s ago", "mount -o v4.2 server:mntpoint localmountpoint", "[nfs-srv]USD ls -Z file -rw-rw-r--. user user unconfined_u:object_r:svirt_image_t:s0 file [nfs-client]USD ls -Z file -rw-rw-r--. user user unconfined_u:object_r:svirt_image_t:s0 file" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-nfs-configuration_examples
33.6. Printing a Test Page
33.6. Printing a Test Page After you have configured your printer, you should print a test page to make sure the printer is functioning properly. To print a test page, select the printer that you want to test from the printer list, then click Print Test Page on the printer's Settings tab. If you change the print driver or modify the driver options, you should print another test page to confirm that the new configuration works correctly.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-printing-test-page
4.2. Disk
4.2. Disk The following sections showcase scripts that monitor disk and I/O activity. 4.2.1. Summarizing Disk Read/Write Traffic This section describes how to identify which processes are performing the heaviest disk reads/writes to the system. Example 4.9. disktop.stp
[ "#!/usr/bin/stap # Copyright (C) 2007 Oracle Corp. # Get the status of reading/writing disk every 5 seconds, output top ten entries # This is free software,GNU General Public License (GPL); either version 2, or (at your option) any later version. # Usage: ./disktop.stp # global io_stat,device global read_bytes,write_bytes probe vfs.read.return { if (USDreturn>0) { if (devname!=\"N/A\") {/*skip read from cache*/ io_stat[pid(),execname(),uid(),ppid(),\"R\"] += USDreturn device[pid(),execname(),uid(),ppid(),\"R\"] = devname read_bytes += USDreturn } } } probe vfs.write.return { if (USDreturn>0) { if (devname!=\"N/A\") { /*skip update cache*/ io_stat[pid(),execname(),uid(),ppid(),\"W\"] += USDreturn device[pid(),execname(),uid(),ppid(),\"W\"] = devname write_bytes += USDreturn } } } probe timer.ms(5000) { /* skip non-read/write disk */ if (read_bytes+write_bytes) { printf(\"\\n%-25s, %-8s%4dKb/sec, %-7s%6dKb, %-7s%6dKb\\n\\n\", ctime(gettimeofday_s()), \"Average:\", ((read_bytes+write_bytes)/1024)/5, \"Read:\",read_bytes/1024, \"Write:\",write_bytes/1024) /* print header */ printf(\"%8s %8s %8s %25s %8s %4s %12s\\n\", \"UID\",\"PID\",\"PPID\",\"CMD\",\"DEVICE\",\"T\",\"BYTES\") } /* print top ten I/O */ foreach ([process,cmd,userid,parent,action] in io_stat- limit 10) printf(\"%8d %8d %8d %25s %8s %4s %12d\\n\", userid,process,parent,cmd, device[process,cmd,userid,parent,action], action,io_stat[process,cmd,userid,parent,action]) /* clear data */ delete io_stat delete device read_bytes = 0 write_bytes = 0 } probe end{ delete io_stat delete device delete read_bytes delete write_bytes }", "[...] Mon Sep 29 03:38:28 2008 , Average: 19Kb/sec, Read: 7Kb, Write: 89Kb UID PID PPID CMD DEVICE T BYTES 0 26319 26294 firefox sda5 W 90229 0 2758 2757 pam_timestamp_c sda5 R 8064 0 2885 1 cupsd sda5 W 1678 Mon Sep 29 03:38:38 2008 , Average: 1Kb/sec, Read: 7Kb, Write: 1Kb UID PID PPID CMD DEVICE T BYTES 0 2758 2757 pam_timestamp_c sda5 R 8064 0 2885 1 cupsd sda5 W 1678", "global start global entry_io global fd_io global time_io function timestamp:long() { return gettimeofday_us() - start } function proc:string() { return sprintf(\"%d (%s)\", pid(), execname()) } probe begin { start = gettimeofday_us() } global filenames global filehandles global fileread global filewrite probe syscall.open { filenames[pid()] = user_string(USDfilename) } probe syscall.open.return { if (USDreturn != -1) { filehandles[pid(), USDreturn] = filenames[pid()] fileread[pid(), USDreturn] = 0 filewrite[pid(), USDreturn] = 0 } else { printf(\"%d %s access %s fail\\n\", timestamp(), proc(), filenames[pid()]) } delete filenames[pid()] } probe syscall.read { if (USDcount > 0) { fileread[pid(), USDfd] += USDcount } t = gettimeofday_us(); p = pid() entry_io[p] = t fd_io[p] = USDfd } probe syscall.read.return { t = gettimeofday_us(); p = pid() fd = fd_io[p] time_io[p,fd] <<< t - entry_io[p] } probe syscall.write { if (USDcount > 0) { filewrite[pid(), USDfd] += USDcount } t = gettimeofday_us(); p = pid() entry_io[p] = t fd_io[p] = USDfd } probe syscall.write.return { t = gettimeofday_us(); p = pid() fd = fd_io[p] time_io[p,fd] <<< t - entry_io[p] } probe syscall.close { if (filehandles[pid(), USDfd] != \"\") { printf(\"%d %s access %s read: %d write: %d\\n\", timestamp(), proc(), filehandles[pid(), USDfd], fileread[pid(), USDfd], filewrite[pid(), USDfd]) if (@count(time_io[pid(), USDfd])) printf(\"%d %s iotime %s time: %d\\n\", timestamp(), proc(), filehandles[pid(), USDfd], @sum(time_io[pid(), USDfd])) } delete fileread[pid(), USDfd] delete 
filewrite[pid(), USDfd] delete filehandles[pid(), USDfd] delete fd_io[pid()] delete entry_io[pid()] delete time_io[pid(),USDfd] }", "[...] 825946 3364 (NetworkManager) access /sys/class/net/eth0/carrier read: 8190 write: 0 825955 3364 (NetworkManager) iotime /sys/class/net/eth0/carrier time: 9 [...] 117061 2460 (pcscd) access /dev/bus/usb/003/001 read: 43 write: 0 117065 2460 (pcscd) iotime /dev/bus/usb/003/001 time: 7 [...] 3973737 2886 (sendmail) access /proc/loadavg read: 4096 write: 0 3973744 2886 (sendmail) iotime /proc/loadavg time: 11 [...]", "#! /usr/bin/env stap traceio.stp Copyright (C) 2007 Red Hat, Inc., Eugene Teo <[email protected]> Copyright (C) 2009 Kai Meyer <[email protected]> Fixed a bug that allows this to run longer And added the humanreadable function # This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License version 2 as published by the Free Software Foundation. # global reads, writes, total_io probe vfs.read.return { reads[pid(),execname()] += USDreturn total_io[pid(),execname()] += USDreturn } probe vfs.write.return { writes[pid(),execname()] += USDreturn total_io[pid(),execname()] += USDreturn } function humanreadable(bytes) { if (bytes > 1024*1024*1024) { return sprintf(\"%d GiB\", bytes/1024/1024/1024) } else if (bytes > 1024*1024) { return sprintf(\"%d MiB\", bytes/1024/1024) } else if (bytes > 1024) { return sprintf(\"%d KiB\", bytes/1024) } else { return sprintf(\"%d B\", bytes) } } probe timer.s(1) { foreach([p,e] in total_io- limit 10) printf(\"%8d %15s r: %12s w: %12s\\n\", p, e, humanreadable(reads[p,e]), humanreadable(writes[p,e])) printf(\"\\n\") # Note we don't zero out reads, writes and total_io, # so the values are cumulative since the script started. }", "[...] Xorg r: 583401 KiB w: 0 KiB floaters r: 96 KiB w: 7130 KiB multiload-apple r: 538 KiB w: 537 KiB sshd r: 71 KiB w: 72 KiB pam_timestamp_c r: 138 KiB w: 0 KiB staprun r: 51 KiB w: 51 KiB snmpd r: 46 KiB w: 0 KiB pcscd r: 28 KiB w: 0 KiB irqbalance r: 27 KiB w: 4 KiB cupsd r: 4 KiB w: 18 KiB Xorg r: 588140 KiB w: 0 KiB floaters r: 97 KiB w: 7143 KiB multiload-apple r: 543 KiB w: 542 KiB sshd r: 72 KiB w: 72 KiB pam_timestamp_c r: 138 KiB w: 0 KiB staprun r: 51 KiB w: 51 KiB snmpd r: 46 KiB w: 0 KiB pcscd r: 28 KiB w: 0 KiB irqbalance r: 27 KiB w: 4 KiB cupsd r: 4 KiB w: 18 KiB", "#! /usr/bin/env stap global device_of_interest probe begin { /* The following is not the most efficient way to do this. One could directly put the result of usrdev2kerndev() into device_of_interest. However, want to test out the other device functions */ dev = usrdev2kerndev(USD1) device_of_interest = MKDEV(MAJOR(dev), MINOR(dev)) } probe vfs.write, vfs.read { if (dev == device_of_interest) printf (\"%s(%d) %s 0x%x\\n\", execname(), pid(), probefunc(), dev) }", "[...] synergyc(3722) vfs_read 0x800005 synergyc(3722) vfs_read 0x800005 cupsd(2889) vfs_write 0x800005 cupsd(2889) vfs_write 0x800005 cupsd(2889) vfs_write 0x800005 [...]", "#! 
/usr/bin/env stap probe vfs.write, vfs.read { # dev and ino are defined by vfs.write and vfs.read if (dev == MKDEV(USD1,USD2) # major/minor device && ino == USD3) printf (\"%s(%d) %s 0x%x/%u\\n\", execname(), pid(), probefunc(), dev, ino) }", "805 1078319", "cat(16437) vfs_read 0x800005/1078319 cat(16437) vfs_read 0x800005/1078319", "global ATTR_MODE = 1 probe kernel.function(\"inode_setattr\") { dev_nr = USDinode->i_sb->s_dev inode_nr = USDinode->i_ino if (dev_nr == (USD1 << 20 | USD2) # major/minor device && inode_nr == USD3 && USDattr->ia_valid & ATTR_MODE) printf (\"%s(%d) %s 0x%x/%u %o %d\\n\", execname(), pid(), probefunc(), dev_nr, inode_nr, USDattr->ia_mode, uid()) }", "chmod(17448) inode_setattr 0x800005/6011835 100777 500 chmod(17449) inode_setattr 0x800005/6011835 100666 500" ]
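The scripts above are normally saved to files and run directly with the stap command once SystemTap and the matching kernel debug information are installed. The following is a minimal sketch; the package names are the commonly documented ones rather than values taken from this chapter.

```bash
# Install the SystemTap tooling (the kernel-devel and kernel-debuginfo packages
# matching the running kernel are also required to compile the probes).
yum install systemtap systemtap-runtime

# Run the disk monitoring script; press Ctrl+C to stop it.
stap disktop.stp

# Scripts that probe specific devices or inodes take those values as
# command-line arguments, as shown in each script's usage notes.
```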
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_beginners_guide/mainsect-disk
Chapter 10. Provisioning virtual machines in VMware vSphere
Chapter 10. Provisioning virtual machines in VMware vSphere VMware vSphere is an enterprise-level virtualization platform from VMware. Red Hat Satellite can interact with the vSphere platform, including creating new virtual machines and controlling their power management states. 10.1. Prerequisites for VMware provisioning The requirements for VMware vSphere provisioning include: A supported version of VMware vCenter Server. The following versions have been fully tested with Satellite: vCenter Server 8.0 vCenter Server 7.0 A Capsule Server managing a network on the vSphere environment. Ensure no other DHCP services run on this network to avoid conflicts with Capsule Server. For more information, see Chapter 3, Configuring networking . An existing VMware template if you want to use image-based provisioning. You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in Managing content . Provide an activation key for host registration. For more information, see Creating An Activation Key in Managing content . 10.2. Creating a VMware user The VMware vSphere server requires an administration-like user for Satellite Server communication. For security reasons, do not use the administrator user for such communication. Instead, create a user with the following permissions: For VMware vCenter Server version 8.0 or 7.0, set the following permissions: All Privileges Datastore Allocate Space, Browse datastore, Update Virtual Machine files, Low level file operations All Privileges Network Assign Network All Privileges Resource Assign virtual machine to resource pool All Privileges Virtual Machine Change Config (All) All Privileges Virtual Machine Interaction (All) All Privileges Virtual Machine Edit Inventory (All) All Privileges Virtual Machine Provisioning (All) All Privileges Virtual Machine Guest Operations (All) 10.3. Adding a VMware connection to Satellite Server Use this procedure to add a VMware vSphere connection in Satellite Server's compute resources. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Ensure that the host and network-based firewalls are configured to allow communication from Satellite Server to vCenter on TCP port 443. Verify that Satellite Server and vCenter can resolve each other's host names. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources , and in the Compute Resources window, click Create Compute Resource . In the Name field, enter a name for the resource. From the Provider list, select VMware . In the Description field, enter a description for the resource. In the VCenter/Server field, enter the IP address or host name of the vCenter server. In the User field, enter the user name with permission to access the vCenter's resources. In the Password field, enter the password for the user. Click Load Datacenters to populate the list of data centers from your VMware vSphere environment. From the Datacenter list, select a specific data center to manage from this list. In the Fingerprint field, ensure that this field is populated with the fingerprint from the data center. From the Display Type list, select a console type, for example, VNC or VMRC . Note that VNC consoles are unsupported on VMware ESXi 6.5 and later. Optional: In the VNC Console Passwords field, select the Set a randomly generated password on the display connection checkbox to secure console access for new hosts with a randomly generated password. 
You can retrieve the password for the VNC console to access guest virtual machine console from the libvirtd host from the output of the following command: The password randomly generates every time the console for the virtual machine opens, for example, with virt-manager. From the Enable Caching list, you can select whether to enable caching of compute resources. For more information, see Section 10.10, "Caching of compute resources" . Click the Locations and Organizations tabs and verify that the values are automatically set to your current context. You can also add additional contexts. Click Submit to save the connection. CLI procedure Create the connection with the hammer compute-resource create command. Select Vmware as the --provider and set the instance UUID of the data center as the --uuid : 10.4. Adding VMware images to Satellite Server VMware vSphere uses templates as images for creating new virtual machines. If using image-based provisioning to create new hosts, you need to add VMware template details to your Satellite Server. This includes access details and the template name. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your VMware compute resource. Click Create Image . In the Name field, enter a name for the image. From the Operating System list, select the base operating system of the image. From the Architecture list, select the operating system architecture. In the Username field, enter the SSH user name for image access. By default, this is set to root . If your image supports user data input such as cloud-init data, click the User data checkbox. Optional: In the Password field, enter the SSH password to access the image. From the Image list, select an image from VMware. Click Submit to save the image details. CLI procedure Create the image with the hammer compute-resource image create command. Use the --uuid field to store the relative template path on the vSphere environment: 10.5. Adding VMware details to a compute profile You can predefine certain hardware settings for virtual machines on VMware vSphere. You achieve this through adding these hardware settings to a compute profile. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Profiles . Select a compute profile. Select a VMware compute resource. In the CPUs field, enter the number of CPUs to allocate to the host. In the Cores per socket field, enter the number of cores to allocate to each CPU. In the Memory field, enter the amount of memory in MiB to allocate to the host. In the Firmware checkbox, select either BIOS or UEFI as firmware for the host. By default, this is set to automatic . In the Cluster list, select the name of the target host cluster on the VMware environment. From the Resource pool list, select an available resource allocations for the host. In the Folder list, select the folder to organize the host. From the Guest OS list, select the operating system you want to use in VMware vSphere. From the Virtual H/W version list, select the underlying VMware hardware abstraction to use for virtual machines. If you want to add more memory while the virtual machine is powered on, select the Memory hot add checkbox. If you want to add more CPUs while the virtual machine is powered on, select the CPU hot add checkbox. If you want to add a CD-ROM drive, select the CD-ROM drive checkbox. 
From the Boot order list, define the order in which the virtual machines tried to boot. Optional: In the Annotation Notes field, enter an arbitrary description. If you use image-based provisioning, select the image from the Image list. From the SCSI controller list, select the disk access method for the host. If you want to use eager zero thick provisioning, select the Eager zero checkbox. By default, the disk uses lazy zero thick provisioning. From the Network Interfaces list, select the network parameters for the host's network interface. At least one interface must point to a Capsule-managed network. Optional: Click Add Interface to create another network interfaces. Click Submit to save the compute profile. CLI procedure Create a compute profile: Set VMware details to a compute profile: 10.6. Creating hosts on VMware The VMware vSphere provisioning process provides the option to create hosts over a network connection or using an existing image. For network-based provisioning, you must create a host to access either Satellite Server's integrated Capsule or an external Capsule Server on a VMware vSphere virtual network, so that the host has access to PXE provisioning services. The new host entry triggers the VMware vSphere server to create the virtual machine. If the virtual machine detects the defined Capsule Server through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system. DHCP conflicts If you use a virtual network on the VMware vSphere server for provisioning, ensure that you select a virtual network that does not provide DHCP assignments. This causes DHCP conflicts with Satellite Server when booting new hosts. For image-based provisioning, use the pre-existing image as a basis for the new volume. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Optional: Click the Organization tab and change the organization context to match your requirement. Optional: Click the Location tab and change the location context to match your requirement. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form. From the Deploy on list, select the VMware vSphere connection. From the Compute Profile list, select a profile to use to automatically populate virtual machine-based settings. Click the Interfaces tab, and on the interface of the host, click Edit . Verify that the fields are populated with values. Note in particular: Satellite automatically assigns an IP address for the new host. Ensure that the MAC address field is blank. VMware assigns a MAC address to the host during provisioning. The Name from the Host tab becomes the DNS name . Ensure that Satellite automatically selects the Managed , Primary , and Provision options for the first interface on the host. If not, select them. In the interface window, review the VMware-specific fields that are populated with settings from our compute profile. Modify these settings to suit your needs. Click OK to save. To add another interface, click Add Interface . You can select only one interface for Provision and Primary . Click the Operating System tab, and confirm that all fields automatically contain values. Select the Provisioning Method that you want: For network-based provisioning, click Network Based . For image-based provisioning, click Image Based . 
For boot-disk provisioning, click Boot disk based . Click Resolve in Provisioning templates to check the new host can identify the right provisioning templates to use. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your requirements. Click the Parameters tab and ensure that a parameter exists that provides an activation key. If a parameter does not exist, click + Add Parameter . In the field Name , enter kt_activation_keys . In the field Value , enter the name of the activation key used to register the Content Hosts. Click Submit to provision your host on VMware. CLI procedure Create the host from a network with the hammer host create command and include --provision-method build to use network-based provisioning: Create the host from an image with the hammer host create command and include --provision-method image to use image-based provisioning: For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command. 10.7. Using VMware cloud-init and userdata templates for provisioning You can use VMware with the Cloud-init and Userdata templates to insert user data into the new virtual machine, to make further VMware customization, and to enable the VMware-hosted virtual machine to call back to Satellite. You can use the same procedures to set up a VMware compute resource within Satellite, with a few modifications to the workflow. Figure 10.1. VMware cloud-init provisioning overview When you set up the compute resource and images for VMware provisioning in Satellite, the following sequence of provisioning events occurs: The user provisions one or more virtual machines using the Satellite web UI, API, or hammer Satellite calls the VMware vCenter to clone the virtual machine template Satellite userdata provisioning template adds customized identity information When provisioning completes, the Cloud-init provisioning template instructs the virtual machine to call back to Capsule when cloud-init runs VMware vCenter clones the template to the virtual machine VMware vCenter applies customization for the virtual machine's identity, including the host name, IP, and DNS The virtual machine builds, cloud-init is invoked and calls back Satellite on port 80 , which then redirects to 443 Prerequisites Configure port and firewall settings to open any necessary connections. Because of the cloud-init service, the virtual machine always calls back to Satellite even if you register the virtual machine to Capsule. For more information, see Port and firewall requirements in Installing Satellite Server in a connected network environment and Port and firewall requirements in Installing Capsule Server . If you want to use Capsule Servers instead of your Satellite Server, ensure that you have configured your Capsule Servers accordingly. For more information, see Configuring Capsule for Host Registration and Provisioning in Installing Capsule Server . Back up the following configuration files: /etc/cloud/cloud.cfg.d/01_network.cfg /etc/cloud/cloud.cfg.d/10_datasource.cfg /etc/cloud/cloud.cfg Associating the Userdata and Cloud-init templates with the operating system In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates . Search for the CloudInit default template and click its name. Click the Association tab. Select all operating systems to which the template applies and click Submit . 
Repeat the steps above for the UserData open-vm-tools template. Navigate to Hosts > Provisioning Setup > Operating Systems . Select the operating system that you want to use for provisioning. Click the Templates tab. From the Cloud-init template list, select CloudInit default . From the User data template list, select UserData open-vm-tools . Click Submit to save the changes. Preparing an image to use the cloud-init template To prepare an image, you must first configure the settings that you require on a virtual machine that you can then save as an image to use in Satellite. To use the cloud-init template for provisioning, you must configure a virtual machine so that cloud-init is installed, enabled, and configured to call back to Satellite Server. For security purposes, you must install a CA certificate to use HTTPS for all communication. This procedure includes steps to clean the virtual machine so that no unwanted information transfers to the image you use for provisioning. If you have an image with cloud-init , you must still follow this procedure to enable cloud-init to communicate with Satellite because cloud-init is disabled by default. Procedure On the virtual machine that you use to create the image, install the required packages: Disable network configuration by cloud-init : Configure cloud-init to fetch data from Satellite: If you intend to provision through Capsule Server, use the URL of your Capsule Server in the seedfrom option, such as https:// capsule.example.com :9090/userdata/ . Configure modules to use in cloud-init : Enable the CA certificates for the image: Download the katello-server-ca.crt file from Satellite Server: If you intend to provision through Capsule Server, download the file from your Capsule Server, such as https:// capsule.example.com /pub/katello-server-ca.crt . Update the record of certificates: Stop the rsyslog and auditd services: Clean packages on the image: On Red Hat Enterprise Linux 8 and later: On Red Hat Enterprise Linux 7 and earlier: Reduce logspace, remove old logs, and truncate logs: Remove udev hardware rules: Remove the ifcfg scripts related to existing network configurations: Remove the SSH host keys: Remove root user's SSH history: Remove root user's shell history: Create an image from this virtual machine. Add your image to Satellite . 10.8. Deleting a VM on VMware You can delete VMs running on VMware from within Satellite. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your VMware provider. On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from the VMware compute resource while retaining any associated hosts within Satellite. If you want to delete the orphaned host, navigate to Hosts > All Hosts and delete the host manually. Additional resources You can configure Satellite to remove the associated virtual machine when you delete a host. For more information, see Section 2.22, "Removing a virtual machine upon host deletion" . 10.9. Importing a virtual machine from VMware into Satellite You can import existing virtual machines running on VMware into Satellite. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your VMware compute resource. On the Virtual Machines tab, click Import as managed Host or Import as unmanaged Host from the Actions menu. The following page looks identical to creating a host with the compute resource being already selected. 
For more information, see Creating a host in Satellite in Managing hosts . Click Submit to import the virtual machine into Satellite. 10.10. Caching of compute resources Caching of compute resources speeds up rendering of VMware information. 10.10.1. Enabling caching of compute resources To enable or disable caching of compute resources: Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Click the Edit button to the right of the VMware server you want to update. Select the Enable caching checkbox. 10.10.2. Refreshing the compute resources cache Refresh the cache of compute resources to update compute resources information. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select a VMware server you want to refresh the compute resources cache for and click Refresh Cache . CLI procedure Use this API call to refresh the compute resources cache: Use hammer compute-resource list to determine the ID of the VMware server you want to refresh the compute resources cache for.
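As a sketch of the cache-refresh flow just described, the commands below list the compute resources to find the ID and then call the refresh endpoint. The credentials, host name, and ID are placeholders; substitute your own values.

```bash
# Find the ID of the VMware compute resource.
hammer compute-resource list

# Refresh its cache through the API; replace the credentials, host name, and ID.
curl -H "Accept:application/json" -H "Content-Type:application/json" \
     -X PUT -u admin:changeme -k \
     https://satellite.example.com/api/compute_resources/1/refresh_cache
```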
[ "virsh edit your_VM_name <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd=' your_randomly_generated_password '>", "hammer compute-resource create --datacenter \" My_Datacenter \" --description \"vSphere server at vsphere.example.com \" --locations \" My_Location \" --name \"My_vSphere\" --organizations \" My_Organization \" --password \" My_Password \" --provider \"Vmware\" --server \" vsphere.example.com \" --user \" My_User \"", "hammer compute-resource image create --architecture \" My_Architecture \" --compute-resource \" My_VMware \" --name \" My_Image \" --operatingsystem \" My_Operating_System \" --username root --uuid \" My_UUID \"", "hammer compute-profile create --name \" My_Compute_Profile \"", "hammer compute-profile values create --compute-attributes \"cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true\" --compute-profile \" My_Compute_Profile \" --compute-resource \" My_VMware \" --interface \"compute_type=VirtualE1000,compute_network=mynetwork --volume \"size_gb=20G,datastore=Data,name=myharddisk,thin=true\"", "hammer host create --build true --compute-attributes=\"cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true\" --compute-resource \" My_VMware \" --enabled true --hostgroup \" My_Host_Group \" --interface \"managed=true,primary=true,provision=true,compute_type=VirtualE1000,compute_network=mynetwork\" --location \" My_Location \" --managed true --name \" My_Host \" --organization \" My_Organization \" --provision-method build --volume=\"size_gb=20G,datastore=Data,name=myharddisk,thin=true\"", "hammer host create --compute-attributes=\"cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true\" --compute-resource \" My_VMware \" --enabled true --hostgroup \" My_Host_Group \" --image \" My_VMware_Image \" --interface \"managed=true,primary=true,provision=true,compute_type=VirtualE1000,compute_network=mynetwork\" --location \" My_Location \" --managed true --name \" My_Host \" --organization \" My_Organization \" --provision-method image --volume=\"size_gb=20G,datastore=Data,name=myharddisk,thin=true\"", "dnf install cloud-init open-vm-tools perl-interpreter perl-File-Temp", "cat << EOM > /etc/cloud/cloud.cfg.d/01_network.cfg network: config: disabled EOM", "cat << EOM > /etc/cloud/cloud.cfg.d/10_datasource.cfg datasource_list: [NoCloud] datasource: NoCloud: seedfrom: https://satellite.example.com/userdata/ EOM", "cat << EOM > /etc/cloud/cloud.cfg cloud_init_modules: - bootcmd - ssh cloud_config_modules: - runcmd cloud_final_modules: - scripts-per-once - scripts-per-boot - scripts-per-instance - scripts-user - phone-home system_info: distro: rhel paths: cloud_dir: /var/lib/cloud templates_dir: /etc/cloud/templates ssh_svcname: sshd EOM", "update-ca-trust enable", "wget -O /etc/pki/ca-trust/source/anchors/cloud-init-ca.crt https:// satellite.example.com /pub/katello-server-ca.crt", "update-ca-trust extract", "systemctl stop rsyslog systemctl stop auditd", "dnf remove --oldinstallonly", "package-cleanup --oldkernels --count=1 dnf clean all", "logrotate -f /etc/logrotate.conf rm -f /var/log/*-???????? 
/var/log/*.gz rm -f /var/log/dmesg.old rm -rf /var/log/anaconda cat /dev/null > /var/log/audit/audit.log cat /dev/null > /var/log/wtmp cat /dev/null > /var/log/lastlog cat /dev/null > /var/log/grubby", "rm -f /etc/udev/rules.d/70*", "rm -f /etc/sysconfig/network-scripts/ifcfg-ens* rm -f /etc/sysconfig/network-scripts/ifcfg-eth*", "rm -f /etc/ssh/ssh_host_*", "rm -rf ~root/.ssh/known_hosts", "rm -f ~root/.bash_history unset HISTFILE", "curl -H \"Accept:application/json\" -H \"Content-Type:application/json\" -X PUT -u username : password -k https:// satellite.example.com /api/compute_resources/ compute_resource_id /refresh_cache" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/provisioning_hosts/provisioning_virtual_machines_in_vmware_vmware-provisioning
Chapter 7. Infrastructure services
Chapter 7. Infrastructure services 7.1. Time synchronization Accurate timekeeping is important for a number of reasons. In Linux systems, the Network Time Protocol (NTP) protocol is implemented by a daemon running in user space. 7.1.1. Implementation of NTP RHEL 7 supported two implementations of the NTP protocol: ntp and chrony . In RHEL 8, the NTP protocol is implemented only by the chronyd daemon, provided by the chrony package. The ntp daemon is no longer available. If you used ntp on your RHEL 7 system, you might need to migrate to chrony . Possible replacements for ntp features that are not supported by chrony are documented in Achieving some settings previously supported by ntp in chrony . 7.1.2. Introduction to chrony suite chrony is an implementation of NTP , which performs well in a wide range of conditions, including intermittent network connections, heavily congested networks, changing temperatures (ordinary computer clocks are sensitive to temperature), and systems that do not run continuously, or run on a virtual machine. You can use chrony : To synchronize the system clock with NTP servers To synchronize the system clock with a reference clock, for example a GPS receiver To synchronize the system clock with a manual time input As an NTPv4(RFC 5905) server or peer to provide a time service to other computers in the network For more information about chrony , see Configuring basic system settings . 7.1.2.1. Differences between chrony and ntp See the following resources for information about differences between chrony and ntp : Differences Between ntpd and chronyd Comparison of NTP implementations 7.1.2.1.1. Chrony applies leap second correction by default In RHEL 8, the default chrony configuration file, /etc/chrony.conf , includes the leapsectz directive. The leapsectz directive enables chronyd to: Get information about leap seconds from the system tz database ( tzdata ) Set the TAI-UTC offset of the system clock in order that the system provides an accurate International Atomic Time (TAI) clock (CLOCK_TAI) The directive is not compatible with servers that hide leap seconds from their clients using a leap smear , such as chronyd servers configured with the leapsecmode and smoothtime directives. If a client chronyd is configured to synchronize to such servers, remove leapsectz from the configuration file. 7.1.3. Additional information For more information about how to configure NTP using the chrony suite, see Configuring time synchronization . 7.2. BIND - Implementation of DNS RHEL 8 includes BIND (Berkeley Internet Name Domain) in version 9.11. This version of the DNS server introduces multiple new features and feature changes compared to version 9.10. New features: A new method of provisioning secondary servers called Catalog Zones has been added. Domain Name System Cookies are now sent by the named service and the dig utility. The Response Rate Limiting feature can now help with mitigation of DNS amplification attacks. Performance of response-policy zone (RPZ) has been improved. A new zone file format called map has been added. Zone data stored in this format can be mapped directly into memory, which enables zones to load significantly faster. A new tool called delv (domain entity lookup and validation) has been added, with dig-like semantics for looking up DNS data and performing internal DNS Security Extensions (DNSSEC) validation. A new mdig command is now available. 
This command is a version of the dig command that sends multiple pipelined queries and then waits for responses, instead of sending one query and waiting for the response before sending the next query. A new prefetch option, which improves the recursive resolver performance, has been added. A new in-view zone option, which allows zone data to be shared between views, has been added. When this option is used, multiple views can serve the same zones authoritatively without storing multiple copies in memory. A new max-zone-ttl option, which enforces maximum TTLs for zones, has been added. When a zone containing a higher TTL is loaded, the load fails. Dynamic DNS (DDNS) updates with higher TTLs are accepted but the TTL is truncated. New quotas have been added to limit queries that are sent by recursive resolvers to authoritative servers experiencing denial-of-service attacks. The nslookup utility now looks up both IPv6 and IPv4 addresses by default. The named service now checks whether other name server processes are running before starting up. When loading a signed zone, named now checks whether a Resource Record Signature's (RRSIG) inception time is in the future, and if so, it regenerates the RRSIG immediately. Zone transfers now use smaller message sizes to improve message compression, which reduces network usage. Feature changes: The version 3 XML schema for the statistics channel, including new statistics and a flattened XML tree for faster parsing, is provided by the HTTP interface. The legacy version 2 XML schema is no longer supported. The named service now listens on both IPv6 and IPv4 interfaces by default. The named service no longer supports GeoIP databases. Access control lists (ACLs) defined by presumed location of query sender are unavailable. Since RHEL 8.2, the named service supports GeoIP2, which is provided in the libmaxminddb data format. 7.3. DNS resolution In RHEL 7, the nslookup and host utilities were able to accept any reply without the recursion available flag from any name server listed. In RHEL 8, nslookup and host ignore replies from name servers with recursion not available unless it is the name server that is last configured. In the case of the last configured name server, the answer is accepted even without the recursion available flag. However, if the last configured name server is not responding or unreachable, name resolution fails. To prevent such a failure, you can use one of the following approaches: Ensure that configured name servers always reply with the recursion available flag set. Allow recursion for all internal clients. Optionally, you can also use the dig utility to detect whether recursion is available or not. 7.4. Postfix By default in RHEL 8, Postfix uses MD5 fingerprints with TLS for backward compatibility. But in FIPS mode, the MD5 hashing function is not available, which may cause TLS to function incorrectly in the default Postfix configuration. As a workaround, the hashing function needs to be changed to SHA-256 in the Postfix configuration file. For more details, see the related link: https://access.redhat.com/articles/5824391 7.5. Printing 7.5.1. Print settings tools The Print Settings configuration tool, which was used in RHEL 7, is no longer available. To achieve various tasks related to printing, you can choose one of the following tools: CUPS web user interface (UI) GNOME Control center For more information about print setting tools in RHEL 8, see Configuring printing . 7.5.2.
Location of CUPs logs CUPS provides three kinds of logs: Error log Access log Page log In RHEL 8, the logs are no longer stored in specific files within the /var/log/cups directory, which was used in RHEL 7. Instead, all three types are logged centrally in systemd-journald together with logs from other programs. For more information about how to use CUPS logs in RHEL 8, see Accessing the CUPS logs in the systemd journal . 7.5.3. Additional information For more information about how to configure printing in RHEL 8, see Configuring printing . 7.6. Performance and power management options 7.6.1. Notable changes in the recommended TuneD profile In RHEL 8, the recommended TuneD profile, reported by the tuned-adm recommend command, is selected based on the following rules: If the syspurpose role (reported by the syspurpose show command) contains atomic , and at the same time: if TuneD is running on bare metal, the atomic-host profile is selected if TuneD is running in a virtual machine, the atomic-guest profile is selected If TuneD is running in a virtual machine, the virtual-guest profile is selected If the syspurpose role contains desktop or workstation and the chassis type (reported by dmidecode ) is Notebook , Laptop , or Portable , then the balanced profile is selected If none of the above rules matches, the throughput-performance profile is selected Note that the first rule that matches takes effect. 7.7. Other changes to infrastructure services components The summary of other notable changes to particular infrastructure services components follows. Table 7.1. Notable changes to infrastructure services components Name Type of change Additional information acpid Option change -d (debug) no longer implies -f (foreground) bind Configuration option removal dnssec-lookaside auto removed; use no instead brltty Configuration option change --message-delay brltty renamed to --message-timeout brltty Configuration option removal -U [--update-interval=] removed brltty Configuration option change A Bluetooth device address may now contain dashes (-) instead of colons (:). The bth: and bluez: device qualifier aliases are no longer supported. cups Functionality removal Upstream removed support of interface scripts because of security reasons. Use ppds and drivers provided by OS or proprietary ones. cups Directive options removal Removed Digest and BasicDigest authentication types for AuthType and DefaultAuthType directives in /etc/cups/cupsd.conf . Migrate to Basic . cups Directive options removal Removed Include from cupsd.conf cups Directive options removal Removed ServerCertificate and ServerKey from cups-files.conf use Serverkeychain instead cups Directives moved between conf files SetEnv and PassEnv moved from cupsd.conf to cups-files.conf cups Directives moved between conf files PrintcapFormat moved from cupsd.conf to cups-files.conf cups-filters Default configuration change Names of remote print queues discovered by cups-browsed are now created based on device ID of printer, not on the name of remote print queue. cups-filters Default configuration change CreateIPPPrinterQueues must be set to All for automatic creation of queues of IPP printers cyrus-imapd Data format change Cyrus-imapd 3.0.7 has different data format. dhcp Behavior change dhclient sends the hardware address as a client identifier by default. The client-id option is configurable. For more information, see the /etc/dhcp/dhclient.conf file. dhcp Options incompatibility The -I option is now used for standard-ddns-updates. 
For the functionality (dhcp-client-identifier), use the new -C option. dosfstools Behavior change Data structures are now automatically aligned to cluster size. To disable the alignment, use the -a option. fsck.fat now defaults to interactive repair mode which previously had to be selected with the -r option. finger Functionality removal GeoIP Functionality removal grep Behavior change grep now treats files containing data improperly encoded for the current locale as binary. grep Behavior change grep -P no longer reports an error and exits when given invalid UTF-8 data grep Behavior change grep now warns if the GREP_OPTIONS environment variable is used. Use an alias or script instead. grep Behavior change grep -P reports an error and exits in locales with multibyte character encodings other than UTF-8 grep Behavior change When searching binary data, grep may treat non-text bytes as line terminators, which impacts performance significantly. grep Behavior change grep -z no longer automatically treats the byte '\200' as binary data. grep Behavior change Context no longer excludes selected lines omitted because of -m . irssi Behavior change SSLv2 and SSLv3 no longer supported lftp Change of options xfer:log and xfer:log-file deprecated; now available under log:enabled and log:file commands ntp Functionality removal ntp has been removed; use chrony instead postfix Configuration change 3.x versions have a compatibility safety net that runs Postfix programs with backwards-compatible default settings after an upgrade. postfix Configuration change In the Postfix MySQL database client, the default option_group value has changed to client ; set it to an empty value for backward-compatible behavior. postfix Configuration change The postqueue command no longer forces all message arrival times to be reported in UTC. To get the old behavior, set TZ=UTC in main.cf:import_environment . For example, import_environment = MAIL_CONFIG MAIL_DEBUG MAIL_LOGTAG TZ=UTC XAUTHORITY DISPLAY LANG=C. postfix Configuration change ECDHE - smtpd_tls_eecdh_grade defaults to auto ; new parameter tls_eecdh_auto_curves with the names of curves that may be negotiated postfix Configuration change Changed defaults for append_dot_mydomain (new: no, old: yes), master.cf chroot (new: n, old: y), smtputf8 (new: yes, old: no). postfix Configuration change Changed defaults for relay_domains (new: empty, old: $mydestination). postfix Configuration change The mynetworks_style default value has changed from subnet to host . powertop Option removal -d removed powertop Option change -h is no longer an alias for --html . It is now an alias for --help . powertop Option removal -u removed quagga Functionality removal sendmail Configuration change sendmail uses uncompressed IPv6 addresses by default, which permits a zero subnet to have a more specific match. Configuration data must use the same format, so make sure patterns such as IPv6:[0-9a-fA-F:]*:: and IPv6:: are updated before using sendmail 8.15. spamassassin Command line option removal Removed --ssl-version in spamd. spamassassin Command line option change In spamc, the command line option -S/--ssl can no longer be used to specify SSL/TLS version. The option can now only be used without an argument to enable TLS. spamassassin Change in supported SSL versions In spamc and spamd, SSLv3 is no longer supported. spamassassin Functionality removal sa-update no longer supports SHA1 validation of filtering rules, and uses SHA256/SHA512 validation instead.
vim Default settings change Vim runs the default.vim script if no ~/.vimrc file is available. vim Default settings change Vim now supports bracketed paste from terminal. Include 'set t_BE=' in vimrc to disable the behavior. vsftpd Default configuration change anonymous_enable disabled vsftpd Default configuration change strict_ssl_read_eof now defaults to YES vsftpd Functionality removal tcp_wrappers no longer supported vsftpd Default configuration change TLSv1 and TLSv1.1 are disabled by default wireshark Python bindings removal Dissectors can no longer be written in Python; use C instead. wireshark Option removal -C suboption for -N option for asynchronous DNS name resolution removed wireshark Output change With the -H option, the output no longer shows SHA1, RIPEMD160 and MD5 hashes. It now shows SHA256, RIPEMD160 and SHA1 hashes. wvdial Functionality removal
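Section 7.4 states that the hashing function must be changed to SHA-256 in the Postfix configuration when FIPS mode is enabled, but does not show the commands. The following is a minimal sketch that assumes the standard Postfix fingerprint digest parameters are the ones that need to change; confirm the exact settings against the Red Hat article linked in that section before applying them:
# Switch the TLS fingerprint digests from MD5 to SHA-256 (assumed parameter names)
postconf -e "smtp_tls_fingerprint_digest = sha256"
postconf -e "smtpd_tls_fingerprint_digest = sha256"
# Reload Postfix to pick up the change
systemctl reload postfix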
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/infrastructure-services_considerations-in-adopting-rhel-8
Chapter 7. Creating infrastructure machine sets
Chapter 7. Creating infrastructure machine sets Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 7.1. OpenShift Container Platform infrastructure components The following infrastructure workloads do not incur OpenShift Container Platform worker subscriptions: Kubernetes and OpenShift Container Platform control plane services that run on masters The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Service brokers Red Hat Quay Red Hat OpenShift Data Foundation Red Hat Advanced Cluster Manager Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Any node that runs any other container, pod, or component is a worker node that your subscription must cover. For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. To create an infrastructure node, you can use a machine set , label the node , or use a machine config pool . 7.2. Creating infrastructure machine sets for production environments In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 7.2.1. 
Creating machine sets for different clouds Use the sample machine set for your cloud. 7.2.1.1. Sample YAML for a machine set custom resource on Alibaba Cloud This sample YAML defines a machine set that runs in a specified Alibaba Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<zone> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 10 spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: alibabacloud-credentials imageId: <image_id> 11 instanceType: <instance_type> 12 kind: AlibabaCloudMachineProviderConfig ramRoleName: <infrastructure_id>-role-worker 13 regionId: <region> 14 resourceGroup: 15 id: <resource_group_id> type: ID securityGroups: - tags: 16 - Key: Name Value: <infrastructure_id>-sg-<role> type: Tags systemDisk: 17 category: cloud_essd size: <disk_size> tag: 18 - Key: kubernetes.io/cluster/<infrastructure_id> Value: owned userDataSecret: name: <user_data_secret> 19 vSwitch: tags: 20 - Key: Name Value: <infrastructure_id>-vswitch-<zone> type: Tags vpcId: "" zoneId: <zone> 21 taints: 22 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 5 7 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID, <infra> node label, and zone. 11 Specify the image to use. Use an image from an existing default machine set for the cluster. 12 Specify the instance type you want to use for the machine set. 13 Specify the name of the RAM role to use for the machine set. Use the value that the installer populates in the default machine set. 14 Specify the region to place machines on. 15 Specify the resource group and type for the cluster. You can use the value that the installer populates in the default machine set, or specify a different one. 16 18 20 Specify the tags to use for the machine set. Minimally, you must include the tags shown in this example, with appropriate values for your cluster. You can include additional tags, including the tags that the installer populates in the default machine set it creates, as needed. 17 Specify the type and size of the root disk. Use the category value that the installer populates in the default machine set it creates. If required, specify a different value in gigabytes for size . 
19 Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installer populates in the default machine set. 21 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 22 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine set parameters for Alibaba Cloud usage statistics The default machine sets that the installer creates for Alibaba Cloud clusters include nonessential tag values that Alibaba Cloud uses internally to track usage statistics. These tags are populated in the securityGroups , tag , and vSwitch parameters of the spec.template.spec.providerSpec.value list. When creating machine sets to deploy additional machines, you must include the required Kubernetes tags. The usage statistics tags are applied by default, even if they are not specified in the machine sets you create. You can also include additional tags as needed. The following YAML snippets indicate which tags in the default machine sets are optional and which are required. Tags in spec.template.spec.providerSpec.value.securityGroups spec: template: spec: providerSpec: value: securityGroups: - tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2 Value: ocp - Key: Name Value: <infrastructure_id>-sg-<role> 3 type: Tags 1 2 Optional: This tag is applied even when not specified in the machine set. 3 Required. where: <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. <role> is the node label to add. Tags in spec.template.spec.providerSpec.value.tag spec: template: spec: providerSpec: value: tag: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp 2 3 Optional: This tag is applied even when not specified in the machine set. 1 Required. where <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. Tags in spec.template.spec.providerSpec.value.vSwitch spec: template: spec: providerSpec: value: vSwitch: tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp - Key: Name Value: <infrastructure_id>-vswitch-<zone> 4 type: Tags 1 2 3 Optional: This tag is applied even when not specified in the machine set. 4 Required. where: <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. <zone> is the zone within your region to place machines on. 7.2.1.2. Sample YAML for a machine set custom resource on AWS This sample YAML defines a machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: "" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: ami: id: ami-046fe691f52a953f9 11 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 12 instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 13 region: <region> 14 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 15 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 16 tags: - name: kubernetes.io/cluster/<infrastructure_id> 17 value: owned userDataSecret: name: worker-user-data 1 3 5 12 15 17 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID, <infra> node label, and zone. 6 7 9 Specify the <infra> node label. 10 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 11 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) AMI for your AWS zone for your OpenShift Container Platform nodes. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \ get machineset/<infrastructure_id>-worker-<zone> 13 Specify the zone, for example, us-east-1a . 14 Specify the region, for example, us-east-1 . 16 Specify the infrastructure ID and zone. Machine sets running on AWS support non-guaranteed Spot Instances . You can save on costs by using Spot Instances at a lower price compared to On-Demand Instances on AWS. Configure Spot Instances by adding spotMarketOptions to the MachineSet YAML file. 7.2.1.3. Sample YAML for a machine set custom resource on Azure This sample YAML defines a machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> 11 node-role.kubernetes.io/infra: "" 12 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 13 offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: "" sshPublicKey: "" tags: - name: <custom_tag_name> 18 value: <custom_tag_value> 19 subnet: <infrastructure_id>-<role>-subnet 20 21 userDataSecret: name: worker-user-data 22 vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet 23 zone: "1" 24 taints: 25 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 5 7 16 17 20 23 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 3 8 9 12 21 22 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID, <infra> node label, and region. 11 Optional: Specify the machine set name to enable the use of availability sets. This setting only applies to new compute machines. 13 Specify the image details for your machine set. If you want to use an Azure Marketplace image, see "Selecting an Azure Marketplace image". 14 Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 15 Specify the region to place machines on. 24 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 
18 19 Optional: Specify custom tags in your machine set. Provide the tag name in <custom_tag_name> field and the corresponding tag value in <custom_tag_value> field. 25 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on Azure support non-guaranteed Spot VMs . You can save on costs by using Spot VMs at a lower price compared to standard VMs on Azure. You can configure Spot VMs by adding spotVMOptions to the MachineSet YAML file. Additional resources Selecting an Azure Marketplace image 7.2.1.4. Sample YAML for a machine set custom resource on Azure Stack Hub This sample YAML defines a machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: "" sshPublicKey: "" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: "1" 22 1 5 7 14 16 17 18 21 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 3 8 9 11 19 20 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID, <infra> node label, and region. 12 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 15 Specify the region to place machines on. 13 Specify the availability set for the cluster. 22 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. Note Machine sets running on Azure Stack Hub do not support non-guaranteed Spot VMs. 7.2.1.5. Sample YAML for a machine set custom resource on IBM Cloud This sample YAML defines a machine set that runs in a specified IBM Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 5 7 The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 16 The <infra> node label. 
4 6 10 The infrastructure ID, <infra> node label, and region. 11 The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation. 12 The infrastructure ID and zone within your region to place machines on. Be sure that your region supports the zone that you specify. 13 Specify the IBM Cloud instance profile . 14 Specify the region to place machines on. 15 The resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID. 17 The VPC name. 18 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 19 The taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 7.2.1.6. Sample YAML for a machine set custom resource on GCP This sample YAML defines a machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 6 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 For <infra> , specify the <infra> node label. 3 Specify the path to the image that is used in current compute machine sets. 
If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \ get machineset/<infrastructure_id>-worker-a To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-48-x86-64-202206140145 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-48-x86-64-202206140145 4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 5 For <project_name> , specify the name of the GCP project that you use for your cluster. 6 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on GCP support non-guaranteed preemptible VM instances . You can save on costs by using preemptible VM instances at a lower price compared to normal instances on GCP. You can configure preemptible VM instances by adding preemptible to the MachineSet YAML file. 7.2.1.7. Sample YAML for a machine set custom resource on RHOSP This sample YAML defines a machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone> 1 5 7 14 16 17 18 19 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 20 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID and <infra> node label. 11 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 12 To set a server group policy for the MachineSet, enter the value that is returned from creating a server group . For most deployments, anti-affinity or soft-anti-affinity policies are recommended. 13 Required for deployments to multiple networks. If deploying to multiple networks, this list must include the network that is used as the primarySubnet value. 15 Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file. 7.2.1.8. Sample YAML for a machine set custom resource on RHV This sample YAML defines a machine set that runs on RHV and creates nodes that are labeled with node-role.kubernetes.io/<node_role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: "" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 instance_type_id: <instance_type_id> 16 cpu: 17 sockets: <number_of_sockets> 18 cores: <number_of_cores> 19 threads: <number_of_threads> 20 memory_mb: <memory_size> 21 guaranteed_memory_mb: <memory_size> 22 os_disk: 23 size_gb: <disk_size> 24 network_interfaces: 25 vnic_profile_id: <vnic_profile_id> 26 credentialsSecret: name: ovirt-credentials 27 kind: OvirtMachineProviderSpec type: <workload_type> 28 auto_pinning_policy: <auto_pinning_policy> 29 hugepages: <hugepages> 30 affinityGroupsNames: - compute 31 userDataSecret: name: worker-user-data 1 7 9 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 10 11 13 Specify the node label to add. 4 8 12 Specify the infrastructure ID and node label. These two strings together cannot be longer than 35 characters. 5 Specify the number of machines to create. 6 Selector for the machines. 14 Specify the UUID for the RHV cluster to which this VM instance belongs. 15 Specify the RHV VM template to use to create the machine. 16 Optional: Specify the VM instance type. Warning The instance_type_id field is deprecated and will be removed in a future release. If you include this parameter, you do not need to specify the hardware parameters of the VM including CPU and memory because this parameter overrides all hardware parameters. 17 Optional: The CPU field contains the CPU's configuration, including sockets, cores, and threads. 18 Optional: Specify the number of sockets for a VM. 19 Optional: Specify the number of cores per socket. 20 Optional: Specify the number of threads per core. 21 Optional: Specify the size of a VM's memory in MiB. 22 Optional: Specify the size of a virtual machine's guaranteed memory in MiB. This is the amount of memory that is guaranteed not to be drained by the ballooning mechanism. For more information, see Memory Ballooning and Optimization Settings Explained . Note If you are using a version earlier than RHV 4.4.8, see Guaranteed memory requirements for OpenShift on Red Hat Virtualization clusters . 23 Optional: Root disk of the node. 24 Optional: Specify the size of the bootable disk in GiB. 25 Optional: List of the network interfaces of the VM. 
If you include this parameter, OpenShift Container Platform discards all network interfaces from the template and creates new ones. 26 Optional: Specify the vNIC profile ID. 27 Specify the name of the secret that holds the RHV credentials. 28 Optional: Specify the workload type for which the instance is optimized. This value affects the RHV VM parameter. Supported values: desktop , server (default), high_performance . high_performance improves performance on the VM, but there are limitations. For example, you cannot access the VM with a graphical console. For more information, see Configuring High Performance Virtual Machines, Templates, and Pools in the Virtual Machine Management Guide . 29 Optional: AutoPinningPolicy defines the policy that automatically sets CPU and NUMA settings, including pinning to the host for this instance. Supported values: none , resize_and_pin . For more information, see Setting NUMA Nodes in the Virtual Machine Management Guide . 30 Optional: Hugepages is the size in KiB for defining hugepages in a VM. Supported values: 2048 or 1048576 . For more information, see Configuring Huge Pages in the Virtual Machine Management Guide . 31 Optional: A list of affinity group names that should be applied to the VMs. The affinity groups must exist in oVirt. Note Because RHV uses a template when creating a VM, if you do not specify a value for an optional parameter, RHV uses the value for that parameter that is specified in the template. 7.2.1.9. Sample YAML for a machine set custom resource on vSphere This sample YAML defines a machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: "<vm_network_name>" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and <infra> node label. 6 7 9 Specify the <infra> node label. 10 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 11 Specify the vSphere VM network to deploy the machine set to. This VM network must be where other compute machines reside in the cluster. 12 Specify the vSphere VM template to use, such as user-5ddjd-rhcos . 13 Specify the vCenter Datacenter to deploy the machine set on. 14 Specify the vCenter Datastore to deploy the machine set on. 15 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 16 Specify the vSphere resource pool for your VMs. 17 Specify the vCenter server IP or fully qualified domain name. 7.2.2. Creating a machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. 
For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again. 7.2.3. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Cluster requirements dictate that infrastructure nodes, also called infra nodes, be provisioned. The installer provisions only control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application nodes, also called app nodes, through labeling. Procedure Add a label to the worker node that you want to act as an application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check that the applicable nodes now have the infra and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1 # ... 1 This example node selector deploys pods on nodes in the us-east-1 region by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources Moving resources to infrastructure machine sets 7.2.4. Creating a machine config pool for infrastructure machines If you need infrastructure machines to have dedicated configurations, you must create an infra pool.
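Before you create the custom pool, it can be useful to see which machine config pools already exist; a freshly installed cluster typically has only the master and worker pools. A minimal sketch, assuming you are logged in with cluster-admin permissions:
# List the existing machine config pools; the infra pool you create below should appear here afterward.
oc get machineconfigpools
# Optionally, confirm which nodes currently carry the worker role before relabeling any of them.
oc get nodes -l node-role.kubernetes.io/worker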
Procedure Add a label to the node you want to assign as the infra node with a specific label: USD oc label node <node_name> <label> USD oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra= Create a machine config pool that contains both the worker role and your custom role as machine config selector: USD cat infra.mcp.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" 2 1 Add the worker role and your custom role. 2 Add the label you added to the node as a nodeSelector . Note Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool. After you have the YAML file, you can create the machine config pool: USD oc create -f infra.mcp.yaml Check the machine configs to ensure that the infrastructure configuration rendered successfully: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d You should see a new machine config, with the rendered-infra-* prefix. 
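You can also confirm which rendered configuration the new pool has picked up once the Machine Config Operator finishes rendering it. A small sketch, assuming the pool is named infra as in the example above:
# Show the rendered machine config that the infra pool currently targets.
oc get machineconfigpool infra -o jsonpath='{.status.configuration.name}{"\n"}'
# Alternatively, filter the machine config list for infra-rendered entries.
oc get machineconfig | grep rendered-infra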
Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra . Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes. Note After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration. Create a machine config: USD cat infra.mc.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra 1 Add the label you added to the node as a nodeSelector . Apply the machine config to the infra-labeled nodes: USD oc create -f infra.mc.yaml Confirm that your new machine config pool is available: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m In this example, a worker node was changed to an infra node. Additional resources See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool. 7.3. Assigning machine set resources to infrastructure nodes After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role applied are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied. However, with an infra node being assigned as a worker, there is a chance user workloads could get inadvertently assigned to an infra node. To avoid this, you can apply a taint to the infra node and tolerations for the pods you want to control. 7.3.1. Binding infrastructure node workloads using taints and tolerations If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it. Important It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exists. The infra label communicates to the cluster that it does not count toward the total number of subscriptions. Prerequisites Configure additional MachineSet objects in your OpenShift Container Platform cluster. Procedure Add a taint to the infra node to prevent scheduling user workloads on it: Determine if the node has the taint: USD oc describe nodes <node_name> Sample output oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker ... Taints: node-role.kubernetes.io/infra:NoSchedule ... 
This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step. If you have not configured a taint to prevent scheduling user workloads on it: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved ... This example places a taint on node1 that has the key node-role.kubernetes.io/infra, the value reserved, and the taint effect NoExecute, matching the command above. Nodes with the NoExecute effect schedule only pods that tolerate the taint and evict existing pods that do not tolerate it. Note If a descheduler is used, pods violating node taints could be evicted from the cluster. Add tolerations for the pod configurations you want to schedule on the infra node, such as router, registry, and monitoring workloads. Add the following code to the Pod object specification: tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3 value: reserved 4 1 Specify the effect that you added to the node. 2 Specify the key that you added to the node. 3 Specify the Exists Operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node. 4 Specify the value of the key-value pair taint that you added to the node. This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node. Note Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details. Additional resources See Controlling pod placement using the scheduler for general information on scheduling a pod to a node. See Moving resources to infrastructure machine sets for instructions on scheduling pods to infra nodes. 7.4. Moving resources to infrastructure machine sets Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created by adding the infrastructure node selector, as shown: spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Applying a specific node selector to all infrastructure components causes OpenShift Container Platform to schedule those workloads on nodes with that label. 7.4.1. Moving the router You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster.
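Before moving the router, you can confirm that at least one node carries the infra role and, if you followed the earlier steps, the matching taint. A hedged sketch; <infra_node_name> is a placeholder for a node name from your own cluster:
# List the nodes that carry the infra role.
oc get nodes -l node-role.kubernetes.io/infra
# Check that the expected node-role.kubernetes.io/infra taint is present on one of them.
oc describe node <infra_node_name> | grep -A1 Taints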
Procedure View the IngressController custom resource for the router Operator: USD oc get ingresscontroller default -n openshift-ingress-operator -o yaml The command output resembles the following text: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: "11341" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: "True" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default Edit the ingresscontroller resource and change the nodeSelector to use the infra label: USD oc edit ingresscontroller default -n openshift-ingress-operator spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Confirm that the router pod is running on the infra node. View the list of router pods and note the node name of the running pod: USD oc get pod -n openshift-ingress -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none> In this example, the running pod is on the ip-10-0-217-226.ec2.internal node. View the node status of the running pod: USD oc get node <node_name> 1 1 Specify the <node_name> that you obtained from the pod list. Example output NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.23.0 Because the role list includes infra , the pod is running on the correct node. 7.4.2. Moving the default registry You configure the registry Operator to deploy its pods to different nodes. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster. Procedure View the config/instance object: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: "56174" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status: ... 
Edit the config/instance object: USD oc edit configs.imageregistry.operator.openshift.io/cluster spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrasructure node, also add a matching toleration. Verify the registry pod has been moved to the infrastructure node. Run the following command to identify the node where the registry pod is located: USD oc get pods -o wide -n openshift-image-registry Confirm the node has the label you specified: USD oc describe node <node_name> Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list. 7.4.3. Moving the monitoring solution The monitoring stack includes multiple components, including Prometheus, Grafana, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map. Procedure Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute grafana: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: 
reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Watch the monitoring pods move to the new machines: USD watch 'oc get pod -n openshift-monitoring -o wide' If a component has not moved to the infra node, delete the pod with this component: USD oc delete pod -n openshift-monitoring <pod> The component from the deleted pod is re-created on the infra node. 7.4.4. Moving OpenShift Logging resources You can configure the Cluster Logging Operator to deploy the pods for logging subsystem components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location. For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. These features are not installed by default. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: logging.openshift.io/v1 kind: ClusterLogging ... spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana ... 1 2 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Verification To verify that a component has moved, you can use the oc get pod -o wide command.
For example: You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node: USD oc get pod kibana-5b8bdf44f9-ccpq9 -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none> You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.23.0 Note that the node has a node-role.kubernetes.io/infra: '' label: USD oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml Example output kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: '' ... To move the Kibana pod, edit the ClusterLogging CR to add a node selector: apiVersion: logging.openshift.io/v1 kind: ClusterLogging ... spec: ... visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana 1 Add a node selector to match the label in the node specification. After you save the CR, the current Kibana pod is terminated and new pod is deployed: USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node: USD oc get pod kibana-7d85dcffc8-bfpfp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none> After a few moments, the original Kibana pod is removed. USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s Additional resources See the monitoring documentation for the general instructions on moving OpenShift Container Platform components.
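Beyond those resources, a final cross-check you can run after moving the router, registry, monitoring, and logging components is to list everything scheduled on a given infra node in a single command. This is a sketch; <infra_node_name> is a placeholder for an actual node name in your cluster:
# List all pods, across every namespace, that are running on a specific infra node.
oc get pods --all-namespaces -o wide --field-selector spec.nodeName=<infra_node_name>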
[ "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<zone> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: alibabacloud-credentials imageId: <image_id> 11 instanceType: <instance_type> 12 kind: AlibabaCloudMachineProviderConfig ramRoleName: <infrastructure_id>-role-worker 13 regionId: <region> 14 resourceGroup: 15 id: <resource_group_id> type: ID securityGroups: - tags: 16 - Key: Name Value: <infrastructure_id>-sg-<role> type: Tags systemDisk: 17 category: cloud_essd size: <disk_size> tag: 18 - Key: kubernetes.io/cluster/<infrastructure_id> Value: owned userDataSecret: name: <user_data_secret> 19 vSwitch: tags: 20 - Key: Name Value: <infrastructure_id>-vswitch-<zone> type: Tags vpcId: \"\" zoneId: <zone> 21 taints: 22 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "spec: template: spec: providerSpec: value: securityGroups: - tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2 Value: ocp - Key: Name Value: <infrastructure_id>-sg-<role> 3 type: Tags", "spec: template: spec: providerSpec: value: tag: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp", "spec: template: spec: providerSpec: value: vSwitch: tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp - Key: Name Value: <infrastructure_id>-vswitch-<zone> 4 type: Tags", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: ami: id: ami-046fe691f52a953f9 11 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: 
aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 12 instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 13 region: <region> 14 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 15 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 16 tags: - name: kubernetes.io/cluster/<infrastructure_id> 17 value: owned userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-worker-<zone>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> 11 node-role.kubernetes.io/infra: \"\" 12 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 13 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 18 value: <custom_tag_value> 19 subnet: <infrastructure_id>-<role>-subnet 20 21 userDataSecret: name: worker-user-data 22 vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet 23 zone: \"1\" 24 taints: 25 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: 
<infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: \"1\" 22", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: 
machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 6 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone>", "oc get -o 
jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 instance_type_id: <instance_type_id> 16 cpu: 17 sockets: <number_of_sockets> 18 cores: <number_of_cores> 19 threads: <number_of_threads> 20 memory_mb: <memory_size> 21 guaranteed_memory_mb: <memory_size> 22 os_disk: 23 size_gb: <disk_size> 24 network_interfaces: 25 vnic_profile_id: <vnic_profile_id> 26 credentialsSecret: name: ovirt-credentials 27 kind: OvirtMachineProviderSpec type: <workload_type> 28 auto_pinning_policy: <auto_pinning_policy> 29 hugepages: <hugepages> 30 affinityGroupsNames: - compute 31 userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m 
agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1", "oc label node <node_name> <label>", "oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=", "cat infra.mcp.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2", "oc create -f infra.mcp.yaml", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d 
rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d", "cat infra.mc.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra", "oc create -f infra.mc.yaml", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m", "oc describe nodes <node_name>", "describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved", "tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3 value: reserved 4", "spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 
10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.23.0", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "oc edit configmap cluster-monitoring-config -n openshift-monitoring", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute grafana: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: 
reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "oc edit ClusterLogging instance", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana", "oc get pod kibana-5b8bdf44f9-ccpq9 -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.23.0", "oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml", "kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s", "oc get pod kibana-7d85dcffc8-bfpfp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 
0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/machine_management/creating-infrastructure-machinesets
17.7. Viewing complete volume state with statedump
17.7. Viewing complete volume state with statedump The statedump subcommand writes out details of the current state of a specified process, including internal variables and other information that is useful for troubleshooting. The command is used as follows: 17.7.1. Gathering information from the server You can output all available state information, or limit statedump output to specific details, by using the statedump command with one of the following parameters. all Dumps all available state information. mem Dumps the memory usage and memory pool details of the bricks. iobuf Dumps iobuf details of the bricks. priv Dumps private information of loaded translators. callpool Dumps the pending calls of the volume. fd Dumps the open file descriptor tables of the volume. inode Dumps the inode tables of the volume. history Dumps the event history of the volume. For example, to write out all available information about the data volume, run the following command on the server: If you only want to see details about the event history, run the following: The nfs parameter is required to gather details about volumes shared via NFS. It can be combined with any of the above parameters to filter output. The quotad parameter is required to gather details about the quota daemon. The following command writes out the state of the quota daemon across all nodes. If you need to see the state of a different process, such as the self-heal daemon, you can do so by running the following command using the process identifier of that process. 17.7.2. Gathering information from the client The statedump subcommand writes out details of the current state of a specified process, including internal variables and other information that is useful for troubleshooting. To generate a statedump for client-side processes using libgfapi, run the following command on a gluster node that is connected to the libgfapi application. Important If you are using either the NFS Ganesha or Samba service and you need to see the state of its clients, ensure that you use localhost instead of hostname . For example: If you need to get the state of a glusterfs fuse mount process, you can do so by running the following command using the process identifier of that process. Important If you have a gfapi-based application and you need to see the state of its clients, ensure that the user running the gfapi application is a member of the gluster group. For example, if your gfapi application is run by the user qemu, ensure that qemu is added to the gluster group by running the following command: 17.7.3. Controlling statedump output location Information is saved to the /var/run/gluster directory by default. Output files are named according to the following conventions: For brick processes, brick_path . brick_pid .dump For volume processes and kill command results, glusterdump- glusterd_pid .dump. timestamp To change where the output files of a particular volume are saved, use the server.statedump-path parameter, like so:
[ "gluster volume statedump VOLNAME [[nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history] | [client hostname : pid ]]", "gluster volume statedump data all", "gluster volume statedump data history", "gluster volume statedump VOLNAME nfs all", "gluster volume statedump VOLNAME quotad", "kill -SIGUSR1 pid", "gluster volume statedump VOLNAME client hostname : pid", "gluster volume statedump VOLNAME client localhost: pid", "kill -SIGUSR1 pid", "usermod -a -G gluster qemu", "gluster volume set VOLNAME server.statedump-path PATH" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/viewing_complete_volume_state_statedump
Chapter 4. New features
Chapter 4. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 8.5. 4.1. Installer and image creation RHEL for Edge now supports a Simplified Installer This enhancement enables Image Builder to build the RHEL for Edge Simplified Installer ( edge-simplified-installer ) and RHEL for Edge Raw Images ( edge-raw-image ). The RHEL for Edge Simplified Installer enables you to specify a new blueprint option, installation_device , and thus perform an unattended installation to a device. To create the raw image, you must provide an existing OSTree commit. It results in a raw image with the existing commit deployed in it. The installer then writes this raw image to the specified installation device. Additionally, you can also use Image Builder to build RHEL for Edge Raw Images. These are compressed raw images that contain a partition layout with an existing deployed OSTree commit in it. You can flash the RHEL for Edge Raw Images onto a hard drive or boot them in a virtual machine. ( BZ#1937854 ) Warnings for deprecated kernel boot arguments Anaconda boot arguments without the inst. prefix (for example, ks , stage2 , repo and so on) are deprecated starting with RHEL 7. These arguments will be removed in the next major RHEL release. With this release, appropriate warning messages are displayed when the boot arguments are used without the inst. prefix. The warning messages are displayed in dracut when booting the installation and also when the installation program is started on a terminal. Following is a sample warning message that is displayed on a terminal: Deprecated boot argument ks must be used with the inst. prefix. Please use inst.ks instead. Anaconda boot arguments without inst. prefix have been deprecated and will be removed in a future major release. Following is a sample warning message that is displayed in dracut : ks has been deprecated. All usage of Anaconda boot arguments without the inst. prefix have been deprecated and will be removed in a future major release. Please use inst.ks instead. ( BZ#1897657 ) Red Hat Connector is now fully supported You can connect the system using Red Hat Connector ( rhc ). Red Hat Connector consists of a command-line interface and a daemon that allow users to execute Insights remediation playbooks directly on their host within the web user interface of Insights (console.redhat.com). Red Hat Connector was available as a Technology Preview in RHEL 8.4, and as of RHEL 8.5 it is fully supported. ( BZ#1957316 ) Ability to override official repositories available By default, the osbuild-composer backend has its own set of official repositories defined in the /usr/share/osbuild-composer/repositories directory. Consequently, it does not inherit the system repositories located in the /etc/yum.repos.d/ directory. You can now override the official repositories. To do that, define overrides in the /etc/osbuild-composer/repositories directory and, as a result, the files located there take precedence over those in the /usr directory. ( BZ#1915351 ) Image Builder now supports filesystem configuration With this enhancement, you can specify custom filesystem configuration in your blueprints and you can create images with the desired disk layout. As a result, by having non-default layouts, you can benefit from security benchmarks, consistency with existing setups, performance, and protection against out-of-disk errors.
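To illustrate, a blueprint could give /var a dedicated 2 GiB filesystem with an entry like the following sketch; the blueprint file name, mount point, and size are example values only:
# append a hypothetical filesystem customization to an existing blueprint
cat >> my-edge-blueprint.toml <<'EOF'
[[customizations.filesystem]]
mountpoint = "/var"
size = 2147483648
EOF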
To customize the filesystem configuration in your blueprint, set the following customization: ( BZ#2011448 ) Image Builder now supports creating bootable installer images With this enhancement, you can use Image Builder to create bootable ISO images that consist of a tarball file, which contains a root file system. As a result, you can use the bootable ISO image to install the tarball file system to a bare metal system. ( BZ#2019318 ) 4.2. RHEL for Edge Greenboot services now enabled by default Previously, the greenboot services were not present in the default presets, so when the greenboot package was installed, users had to enable these greenboot services manually. With this update, the greenboot services are now present in the default presets configuration and users are no longer required to enable them manually. ( BZ#1935177 ) 4.3. Software management RPM now has read-only support for the sqlite database backend The ability to query an RPM database based on sqlite may be desired when inspecting other root directories, such as containers. This update adds read-only support for the RPM sqlite database backend. As a result, it is now possible to query packages installed in a UBI 9 or Fedora container from the host RHEL 8. To do that with Podman: Mount the container's file system with the podman mount command. Run the rpm -qa command with the --root option pointing to the mounted location. Note that RPM on RHEL 8 still uses the BerkeleyDB database ( bdb ) backend. ( BZ#1938928 ) libmodulemd rebased to version 2.12.1 The libmodulemd packages have been rebased to version 2.12.1. Notable changes include: Added support for version 1 of the modulemd-obsoletes document type, which provides information about a stream obsoleting another one, or a stream reaching its end of life. Added support for version 3 of the modulemd-packager document type, which provides a packager description of module stream content for a module build system. Added support for the static_context attribute of the version 2 modulemd document type. With that, a module context is now defined by a packager instead of being generated by a module build system. Now, a module stream value is always serialized as a quoted string. ( BZ#1894573 ) libmodulemd rebased to version 2.13.0 The libmodulemd packages have been rebased to version 2.13.0, which provides the following notable changes over the previous version: Added support for delisting demodularized packages from a module. Added support for validating modulemd-packager-v3 documents with a new --type option of the modulemd-validator tool. Fortified parsing of integers. Fixed various modulemd-validator issues. ( BZ#1984402 ) sslverifystatus has been added to dnf configuration With this update, when the sslverifystatus option is enabled, dnf checks the revocation status of each server certificate using the Certificate Status Request TLS extension (OCSP stapling). As a result, when a revoked certificate is encountered, dnf refuses to download from its server. ( BZ#1814383 ) 4.4. Shells and command-line tools ReaR has been updated to version 2.6 Relax-and-Recover (ReaR) has been updated to version 2.6. Notable bug fixes and enhancements include: Added support for eMMC devices. By default, all kernel modules are included in the rescue system.
To include specific modules, set the MODULES array variable in the configuration file as: MODULES=( mod1 mod2 ) On the AMD and Intel 64-bit architectures and on IBM Power Systems, Little Endian, a new configuration variable GRUB2_INSTALL_DEVICES is introduced to control the location of the bootloader installation. See the description in /usr/share/rear/conf/default.conf for more details. Improved backup of multipath devices. Files under /media , /run , /mnt , /tmp are automatically excluded from backups as these directories are known to contain removable media or temporary files. See the description of the AUTOEXCLUDE_PATH variable in /usr/share/rear/conf/default.conf . CLONE_ALL_USERS_GROUPS=true is now the default. See the description in /usr/share/rear/conf/default.conf for more details. ( BZ#1988493 ) The modulemd-tools package is now available With this update, the modulemd-tools package has been introduced which provides tools for parsing and generating modulemd YAML files. To install modulemd-tools , use: (BZ#1924850) opencryptoki rebased to version 3.16.0 opencryptoki has been upgraded to version 3.16.0. Notable bug fixes and enhancements include: Improved the protected-key option and support for the attribute-bound keys in the EP11 core processor. Improved the import and export of secure key objects in the cycle-count-accurate (CCA) processor. (BZ#1919223) lsvpd rebased to version 1.7.12 lsvpd has been upgraded to version 1.7.12. Notable bug fixes and enhancements include: Added the UUID property in sysvpd . Improved the NVMe firmware version. Fixed PCI device manufacturer parsing logic. Added recommends clause to the lsvpd configuration file. (BZ#1844428) ppc64-diag rebased to version 2.7.7 ppc64-diag has been upgraded to version 2.7.7. Notable bug fixes and enhancements include: Improved unit test cases. Added the UUID property in sysvpd . The rtas_errd service does not run in the Linux containers. The obsolete logging options are no longer available in the systemd service files. (BZ#1779206) The ipmi_power and ipmi_boot modules are available in the redhat.rhel_mgmt Collection This update provides support to the Intelligent Platform Management Interface ( IPMI ) Ansible modules. IPMI is a specification for a set of management interfaces to communicate with baseboard management controller (BMC) devices. The IPMI modules - ipmi_power and ipmi_boot - are available in the redhat.rhel_mgmt Collection, which you can obtain by installing the ansible-collection-redhat-rhel_mgmt package. (BZ#1843859) udftools 2.3 are now added to RHEL The udftools packages provide user-space utilities for manipulating Universal Disk Format (UDF) file systems. With this enhancement, udftools provides the following set of tools: cdrwtool - It performs actions like blank, format, quick setup, and write to the DVD-R/CD-R/CD-RW media. mkfs.udf , mkudffs - It creates a Universal Disk Format (UDF) filesystem. pktsetup - It sets up and tears down the packet device. udfinfo - It shows information about the Universal Disk Format (UDF) file system. udflabel - It shows or changes the Universal Disk Format (UDF) file system label. wrudf - It provides an interactive shell with cp , rm , mkdir , rmdir , ls , and cd operations on the existing Universal Disk Format (UDF) file system. 
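For instance, a disk could be formatted, labeled, and inspected with these tools along the following lines; the device node and label are placeholders, and the options shown are an assumed typical usage rather than the only supported invocation:
# create a UDF file system suitable for a hard drive and give it a label
mkudffs --media-type=hd --label=backup /dev/sdX
# show information about the newly created file system
udfinfo /dev/sdX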
(BZ#1882531) Tesseract 4.1.1 is now present in RHEL 8.5 Tesseract is an open-source OCR (optical character recognition) engine and has the following features: Starting with tesseract version 4, character recognition is based on Long Short-Term Memory (LSTM) neural networks. Supports UTF-8. Supports plain text, hOCR (HTML), PDF, and TSV output formats. ( BZ#1826085 ) Errors when restoring LVM with thin pools do not happen anymore With this enhancement, ReaR now detects when thin pools and other logical volume types with kernel metadata (for example, RAIDs and caches) are used in a volume group (VG) and switches to a mode where it recreates all the logical volumes (LVs) in the VG using lvcreate commands. Therefore, LVM with thin pools is restored without any errors. Note This new method does not preserve all the LV properties, for example LVM UUIDs. A restore from the backup should be tested before using ReaR in a production environment in order to determine whether the recreated storage layout matches the requirements. ( BZ#1747468 ) Net-SNMP now detects RSA and ECC certificates Previously, Net-Simple Network Management Protocol (Net-SNMP) detected only Rivest, Shamir, Adleman (RSA) certificates. This enhancement adds support for Elliptic Curve Cryptography (ECC). As a result, Net-SNMP now detects RSA and ECC certificates. ( BZ#1919714 ) FCoE option is changed to rd.fcoe Previously, the man page for dracut.cmdline documented rd.nofcoe=0 as the command to turn off Fibre Channel over Ethernet (FCoE). With this update, the option is changed to rd.fcoe . To disable FCoE, set the option rd.fcoe=0 . For further information on FCoE, see Configuring Fibre Channel over Ethernet ( BZ#1929201 ) 4.5. Infrastructure services linuxptp rebased to version 3.1 The linuxptp package has been updated to version 3.1. Notable bug fixes and enhancements include: Added the ts2phc program for synchronization of the Precision Time Protocol (PTP) hardware clock to the Pulse Per Second (PPS) signal. Added support for the automotive profile. Added support for client event monitoring. ( BZ#1895005 ) chrony rebased to version 4.1 chrony has been updated to version 4.1. Notable bug fixes and enhancements include: Added support for Network Time Security (NTS) authentication. For more information, see Overview of Network Time Security (NTS) in chrony . By default, the Authenticated Network Time Protocol (NTP) sources are trusted over non-authenticated NTP sources. Add the authselectmode ignore argument in the chrony.conf file to restore the original behavior. The support for authentication with RIPEMD keys - RMD128 , RMD160 , RMD256 , RMD320 is no longer available. The support for long non-standard MACs in NTPv4 packets is no longer available. If you are using chrony 2.x with non-MD5/SHA1 keys, you need to configure chrony with the version 3 option. ( BZ#1895003 ) PowerTop rebased to version 2.14 PowerTop has been upgraded to version 2.14. This update adds support for the Alder Lake, Sapphire Rapids, and Rocket Lake platforms. (BZ#1834722) TuneD now moves unnecessary IRQs to housekeeping CPUs Network device drivers like i40e , iavf , and mlx5 evaluate the online CPUs to determine the number of queues and hence the MSIX vectors to be created. In low-latency environments with a large number of isolated and very few housekeeping CPUs, when TuneD tries to move these device IRQs to the housekeeping CPUs, it fails due to the per-CPU vector limit.
With this enhancement, TuneD explicitly adjusts the numbers of network device channels (and hence MSIX vectors) as per the housekeeping CPUs. Therefore, all the device IRQs can now be moved on the housekeeping CPUs to achieve low latency. (BZ#1951992) 4.6. Security libreswan rebased to 4.4 The libreswan packages have been upgraded to upstream version 4.4, which introduces many enhancements and bug fixes. Most notably: The IKEv2 protocol: Introduced fixes for TCP encapsulation in Transport Mode and host-to-host connections. Added the --globalstatus option to the ipsec whack command for displaying redirect statistics. The vhost and vnet values in the ipsec.conf configuration file are no longer allowed for IKEv2 connections. The pluto IKE daemon: Introduced fixes for host-to-host connections that use non-standard IKE ports. Added peer ID ( IKEv2 IDr or IKEv1 Aggr ) to select the best initial connection. Disabled the interface-ip= option because Libreswan does not provide the corresponding functionality yet. Fixed the PLUTO_PEER_CLIENT variable in the ipsec__updown script for NAT in Transport Mode . Set the PLUTO_CONNECTION_TYPE variable to transport or tunnel . Non-templated wildcard ID connections can now match. (BZ#1958968) GnuTLS rebased to 3.6.16 The gnutls packages have been updated to version 3.6.16. Notable bug fixes and enhancements include: The gnutls_x509_crt_export2() function now returns 0 instead of the size of the internal base64 blob in case of success. This aligns with the documentation in the gnutls_x509_crt_export2(3) man page. Certificate verification failures due to the Online Certificate Status Protocol (OCSP) must-stapling not being followed are now correctly marked with the GNUTLS_CERT_INVALID flag. Previously, even when TLS 1.2 was explicitly disabled through the -VERS-TLS1.2 option, the server still offered TLS 1.2 if TLS 1.3 was enabled. The version negotiation has been fixed, and TLS 1.2 can now be correctly disabled. (BZ#1956783) socat rebased to 1.7.4 The socat packages have been upgraded from version 1.7.3 to 1.7.4, which provides many bug fixes and improvements. Most notably: GOPEN and UNIX-CLIENT addresses now support SEQPACKET sockets. The generic setsockopt-int and related options are, in the case of listening or accepting addresses, applied to the connected sockets. To enable setting options on a listening socket, the setsockopt-listen option is now available. Added the -r and -R options for a raw dump of transferred data to a file. Added the ip-transparent option and the IP_TRANSPARENT socket option. OPENSSL-CONNECT now automatically uses the SNI feature and the openssl-no-sni option turns SNI off. The openssl-snihost option overrides the value of the openssl-commonname option or the server name. Added the accept-timeout and listen-timeout options. Added the ip-add-source-membership option. UDP-DATAGRAM address now does not check peer port of replies as it did in 1.7.3. Use the sourceport optioon if your scenario requires the behavior. New proxy-authorization-file option reads PROXY-CONNECT credentials from a file and enables to hide this data from the process table. Added AF_VSOCK support for VSOCK-CONNECT and VSOCK-LISTEN addresses. ( BZ#1947338 ) crypto-policies rebased to 20210617 The crypto-policies packages have been upgraded to upstream version 20210617, which provides a number of enhancements and bug fixes over the version, most notably: You can now use scoped policies to enable different sets of algorithms for different back ends. 
Each configuration directive can now be limited to specific protocols, libraries, or services. For a complete list of available scopes and details on the new syntax, see the crypto-policies(7) man page. For example, the following directive allows using AES-256-CBC cipher with the SSH protocol, impacting both the libssh library and the OpenSSH suite: Directives can now use asterisks for specifying multiple values using wildcards. For example, the following directive disables all CBC mode ciphers for applications using libssh : Note that future updates can introduce new algorithms matched by the current wildcards. ( BZ#1960266 ) crypto-policies now support AES-192 ciphers in custom policies The system-wide cryptographic policies now support the following values for the cipher option in custom policies and subpolicies: AES-192-GCM , AES-192-CCM , AES-192-CTR , and AES-192-CBC . As a result, you can enable the AES-192-GCM and AES-192-CBC ciphers for the Libreswan application and the AES-192-CTR and AES-192-CBC ciphers for the libssh library and the OpenSSH suite through crypto-policies . (BZ#1876846) CBC ciphers disabled in the FUTURE cryptographic policy This update of the crypto-policies packages disables ciphers that use cipher block chaining (CBC) mode in the FUTURE policy. The settings in FUTURE should withstand near-term future attacks, and this change reflects the current progress. As a result, system components respecting crypto-policies cannot use CBC mode when the FUTURE policy is active. (BZ#1933016) Adding new kernel AVC tracepoint With this enhancement, a new avc:selinux_audited kernel tracepoint is added that triggers when an SELinux denial is to be audited. This feature allows for more convenient low-level debugging of SELinux denials. The new tracepoint is available for tools such as perf . (BZ#1954024) New ACSC ISM profile in the SCAP Security Guide The scap-security-guide packages now provide the Australian Cyber Security Centre (ACSC) Information Security Manual (ISM) compliance profile and a corresponding Kickstart file. With this enhancement, you can install a system that conforms with this security baseline and use the OpenSCAP suite for checking security compliance and remediation using the risk-based approach for security controls defined by ACSC. (BZ#1955373) SCAP Security Guide rebased to 0.1.57 The scap-security-guide packages have been rebased to upstream version 0.1.57, which provides several bug fixes and improvements. Most notably: The Australian Cyber Security Centre ( ACSC ) Information Security Manual ( ISM ) profile has been introduced. The profile extends the Essential Eight profile and adds more security controls defined in the ISM. The Center for Internet Security ( CIS ) profile has been restructured into four different profiles respecting levels of hardening and system type (server and workstation) as defined in the official CIS benchmarks. The Security Technical Implementation Guide ( STIG ) security profile has been updated, and implements rules from the recently-released version V1R3. The Security Technical Implementation Guide with GUI ( STIG with GUI ) security profile has been introduced. The profile derives from the STIG profile and is compatible with RHEL installations that select the Server with GUI package selection. The ANSSI High level profile, which is based on the ANSSI BP-028 recommendations from the French National Security Agency (ANSSI), has been introduced. This contains a profile implementing rules of High hardening levels. 
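For illustration, a system could be evaluated against this profile with a command along the following lines; the report path is arbitrary, and the profile ID shown is assumed to be the one shipped in the RHEL 8 scap-security-guide data stream:
# scan against the ANSSI BP-028 High profile and write an HTML report
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_anssi_bp28_high --report /tmp/anssi-high-report.html /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml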
( BZ#1966577 ) OpenSCAP rebased to 1.3.5 The OpenSCAP packages have been rebased to upstream version 1.3.5. Notable fixes and enhancements include: Enabled Schematron-based validation by default for the validate command of oval and xccdf modules. Added SCAP 1.3 source data stream Schematron. Added XML signature validation. Allowed clamping mtime to SOURCE_DATE_EPOCH . Added severity and role attributes. Support for requires and conflicts elements of the Rule and Group (XCCDF). Kubernetes remediation in the HTML report. Handling gpfs , proc and sysfs file systems as non-local. Fixed handling of common options styled as --arg=val . Fixed behavior of the StateType operator. Namespace ignored in XPath expressions ( xmlfilecontent ) to allow for incomplete XPath queries. Fixed a problem that led to a warning about the presence of obtrusive data. Fixed multiple segfaults and a broken test in the --stig-viewer feature. Fixed the TestResult/benchmark/@href attribute. Fixed many memory management issues. Fixed many memory leaks. ( BZ#1953092 ) Validation of digitally signed SCAP source data streams To conform with the Security Content Automation Protocol (SCAP) 1.3 specifications, OpenSCAP now validates digital signatures of digitally signed SCAP source data streams. As a result, OpenSCAP validates the digital signature when evaluating a digitally signed SCAP source data stream. The signature validation is performed automatically while loading the file. Data streams with invalid signatures are rejected, and OpenSCAP does not evaluate their content. OpenSCAP uses the XML Security Library with the OpenSSL cryptography library to validate the digital signature. You can skip the signature validation by adding the --skip-signature-validation option to the oscap xccdf eval command. Important OpenSCAP does not address the trustworthiness of certificates or public keys that are part of the KeyInfo signature element and that are used to verify the signature. You should verify such keys by yourselves to prevent evaluation of data streams that have been modified and signed by bad actors. ( BZ#1966612 ) New DISA STIG profile compatible with Server with GUI installations A new profile, DISA STIG with GUI , has been added to the SCAP Security Guide . This profile is derived from the DISA STIG profile and is compatible with RHEL installations that selected the Server with GUI package group. The previously existing stig profile was not compatible with Server with GUI because DISA STIG demands uninstalling any Graphical User Interface. However, this can be overridden if properly documented by a Security Officer during evaluation. As a result, the new profile helps when installing a RHEL system as a Server with GUI aligned with the DISA STIG profile. ( BZ#1970137 ) STIG security profile updated to version V1R3 The DISA STIG for Red Hat Enterprise Linux 8 profile in the SCAP Security Guide has been updated to align with the latest version V1R3 . The profile is now also more stable and better aligns with the RHEL 8 STIG (Security Technical Implementation Guide) manual benchmark provided by the Defense Information Systems Agency (DISA). This second iteration brings approximately 90% of coverage with regards to the STIG. You should use only the current version of this profile because older versions are no longer valid. Warning Automatic remediation might render the system non-functional. Run the remediation in a test environment first. 
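One way to honor that warning is to generate the remediation content first and review it on a test system before applying it anywhere. A minimal sketch, assuming the RHEL 8 STIG profile ID shipped in scap-security-guide and an arbitrary output file name, might look like this:
# generate, rather than apply, the remediation so it can be reviewed first
oscap xccdf generate fix --fix-type ansible --profile xccdf_org.ssgproject.content_profile_stig /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml > stig-remediation-playbook.yml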
( BZ#1993056 ) Three new CIS profiles in SCAP Security Guide Three new compliance profiles aligned with the Center for Internet Security (CIS) Red Hat Enterprise Linux 8 Benchmark have been introduced to the SCAP Security Guide. The CIS RHEL 8 Benchmark provides different configuration recommendations for "Server" and "Workstation" deployments, and defines two levels of configuration, "level 1" and "level 2" for each deployment. The CIS profile previously shipped in RHEL8 represented only the "Server Level 2". The three new profiles complete the scope of the CIS RHEL8 Benchmark profiles, and you can now more easily evaluate your system against CIS recommendations. All currently available CIS RHEL 8 profiles are: Workstation Level 1 xccdf_org.ssgproject.content_profile_cis_workstation_l1 Workstation Level 2 xccdf_org.ssgproject.content_profile_cis_workstation_l2 Server Level 1 xccdf_org.ssgproject.content_profile_cis_server_l1 Server Level 2 xccdf_org.ssgproject.content_profile_cis ( BZ#1993197 ) Performance of remediations for Audit improved by grouping similar system calls Previously, Audit remediations generated an individual rule for each system call audited by the profile. This led to large numbers of audit rules, which degraded performance. With this enhancement, remediations for Audit can group rules for similar system calls with identical fields together into a single rule, which improves performance. Examples of system calls grouped together: ( BZ#1876483 ) Added profile for ANSSI-BP-028 High level The ANSSI High level profile, based on the ANSSI BP-028 recommendations from the French National Security Agency (ANSSI), has been introduced. This completes the availability of profiles for all ANSSI-BP-028 v1.2 hardening levels in the SCAP Security Guide . With the new profile, you can harden the system to the recommendations from ANSSI for GNU/Linux Systems at the High hardening level. As a result, you can configure and automate compliance of your RHEL 8 systems to the strictest hardening level by using the ANSSI Ansible Playbooks and the ANSSI SCAP profiles. ( BZ#1955183 ) OpenSSL added for encrypting Rsyslog TCP and RELP traffic The OpenSSL network stream driver has been added to Rsyslog. This driver implements TLS-protected transport using the OpenSSL library. This provides additional functionality compared to the stream driver using the GnuTLS library. As a result, you can now use either OpenSSL or GnuTLS as an Rsyslog network stream driver. ( BZ#1891458 ) Rsyslog rebased to 8.2102.0-5 The rsyslog packages have been rebased to upstream version 8.2102.0-5, which provides the following notable changes over the version: Added the exists() script function to check whether a variable exists or not, for example USD!path!var . Added support for setting OpenSSL configuration commands with a new configuration parameter tls.tlscfgcmd for the omrelp and imrelp modules. Added new rate-limit options to the omfwd module for rate-limiting syslog messages sent to the remote server: ratelimit.interval specifies the rate-limiting interval in seconds. ratelimit.burst specifies the rate-limiting burst in the number of messages. Rewritten the immark module with various improvements. Added the max sessions config parameter to the imptcp module. The maximum is measured per instance, not globally across all instances. Added the rsyslog-openssl subpackage; this network stream driver implements TLS-protected transport using the OpenSSL library. 
Added per-minute rate limiting to the imfile module with the MaxBytesPerMinute and MaxLinesPerMinute options. These options accept integer values and limit the number of bytes or lines that may be sent in a minute. Added support to the imtcp and omfwd module to configure a maximum depth for the certificate chain verification with the streamdriver.TlsVerifyDepth option. ( BZ#1932795 ) 4.7. Networking Support for pause parameter of ethtool in NetworkManager Non auto-pause parameters need to be set explicitly on a specific network interface in certain cases. Previously, NetworkManager could not pause the control flow parameters of ethtool in nmstate . To disable the auto negotiation of the pause parameter and enable RX/TX pause support explicitly, use the following command: ( BZ#1899372 ) New property in NetworkManager for setting physical and virtual interfaces in promiscuous mode With this update the 802-3-ethernet.accept-all-mac-addresses property has been added to NetworkManager for setting physical and virtual interfaces in the accept all MAC addresses mode. With this update, the kernel can accept network packages targeting current interfaces' MAC address in the accept all MAC addresses mode. To enable accept all MAC addresses mode on eth1 , use the following command: ( BZ#1942331 ) NetworkManager rebased to version 1.32.10 The NetworkManager packages have been upgraded to upstream version 1.32.10, which provides a number of enhancements and bug fixes over the version. For further information about notable changes, read the upstream release notes for this version. ( BZ#1934465 ) NetworkManager now supports nftables as firewall back end This enhancement adds support for the nftables firewall framework to NetworkManager. To switch the default back end from iptables to nftables : Create the /etc/NetworkManager/conf.d/99-firewall-backend.conf file with the following content: Reload the NetworkManager service. (BZ#1548825) firewalld rebased to version 0.9.3 The firewalld packages have been upgraded to upstream version 0.9.3, which provides a number of enhancements and bug fixes over the version. For further details, see the upstream release notes: firewalld 0.9.3 Release Notes firewalld 0.9.2 Release Notes firewalld 0.8.6 Release Notes firewalld 0.8.5 Release Notes firewalld 0.8.4 Release Notes ( BZ#1872702 ) The firewalld policy objects feature is now available Previously, you could not use firewalld to filter traffic flowing between virtual machines, containers, and zones. With this update, the firewalld policy objects feature has been introduced, which provides forward and output filtering in firewalld . (BZ#1492722) Multipath TCP is now fully supported Starting with RHEL 8.5, Multipath TCP (MPTCP) is fully supported. MPTCP improves resource usage within the network and resilience to network failure. For example, with Multipath TCP on the RHEL server, smartphones with MPTCP v1 enabled can connect to an application running on the server and switch between Wi-Fi and cellular networks without interrupting the connection to the server. RHEL 8.5 introduced additional features, such as: Multiple concurrent active substreams Active-backup support Improved stream performances Better memory usage, with receive and send buffer auto-tuning SYN cookie support Note that either the applications running on the server must natively support MPTCP or administrators must load an eBPF program into the kernel to dynamically change IPPROTO_TCP to IPPROTO_MPTCP . 
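Whether MPTCP is active on a given host can be checked and, if needed, switched on through sysctl, for example:
# 1 means MPTCP is enabled, 0 means disabled
sysctl net.mptcp.enabled
# enable it for the running kernel
sysctl -w net.mptcp.enabled=1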
For further details see, Getting started with Multipath TCP . (JIRA:RHELPLAN-57712) Alternative network interface naming is now available in RHEL Alternative interface naming is the RHEL kernel configuration, which provides the following networking benefits: Network interface card (NIC) names can have arbitrary length. One NIC can have multiple names at the same time. Usage of alternative names as handles for commands. (BZ#2164986) 4.8. Kernel Kernel version in RHEL 8.5 Red Hat Enterprise Linux 8.5 is distributed with the kernel version 4.18.0-348. ( BZ#1839151 ) EDAC for Intel Sapphire Rapids processors is now supported This enhancement provides Error Detection And Correction (EDAC) device support for Intel Sapphire Rapids processors. EDAC mainly handles Error Code Correction (ECC) memory and detects and reports PCI bus parity errors. (BZ#1837389) The bpftrace package rebased to version 0.12.1 The bpftrace package has been upgraded to version 0.12.1, which provides multiple bug fixes and enhancements. Notable changes over versions include: Added the new builtin path, which is a new reliable method to display the full path from a path structure. Added wildcard support for kfunc probes and tracepoint categories. ( BZ#1944716 ) vmcore capture works as expected after CPU hot-add or hot-removal operations Previously, on IBM POWER systems, after every CPU or memory hot-plug or removal operation, the CPU data on the device tree became stale unless the kdump.service is reloaded. To reload the latest CPU information, the kdump.service parses through the device nodes to fetch the CPU information. However, some of the CPU nodes are already lost during its hot-removal. Consequently, a race condition between the kdump.service reload and a CPU hot-removal happens at the same time and this may cause the dump to fail. A subsequent crash might then not capture the vmcore file. This update eliminates the need to reload the kdump.service after a CPU hot-plug and the vmcore capture works as expected in the described scenario. Note: This enhancement works as expected for firmware-assisted dumps ( fadump ). In the case of standard kdump , the kdump.service reload takes place during the hot-plug operation. (BZ#1922951) The kdumpctl command now supports the new kdumpctl estimate utility The kdumpctl command now supports the kdumpctl estimate utility. Based on the existing kdump configuration, kdumpctl estimate prints a suitable estimated value for kdump memory allocation. The minimum size of the crash kernel may vary depending on the hardware and machine specifications. Hence, previously, it was difficult to estimate an accurate crashkernel= value. With this update, the kdumpctl estimate utility provides an estimated value. This value is a best effort recommended estimate and can serve as a good reference to configure a feasible crashkernel= value. (BZ#1879558) IBM TSS 2.0 package rebased to 1.6.0 The IBM's Trusted Computing Group (TCG) Software Stack (TSS) 2.0 binary package has been upgraded to 1.6.0. This update adds the IBM TSS 2.0 support on AMD64 and Intel 64 architecture. It is a user space TSS for Trusted Platform Modules (TPM) 2.0 and implements the functionality equivalent to (but not API compatible with) the TCG TSS working group's Enhanced System Application Interface (ESAPI), System Application Interface (SAPI), and TPM Command Transmission Interface (TCTI) API with a simpler interface. 
It is a security middleware that allows applications and platforms to share and integrate the TPM into secure applications. This rebase provides many bug fixes and enhancements over the version. The most notable changes include the following new attributes: tsscertifyx509 : validates the x509 certificate tssgetcryptolibrary : displays the current cryptographic library tssprintattr : prints the TPM attributes as text tsspublicname : calculates the public name of an entity tsssetcommandcodeauditstatus : clears or sets code via TPM2_SetCommandCodeAuditStatus tsstpmcmd : sends an in-band TPM simulator signal (BZ#1822073) The schedutil CPU frequency governor is now available on RHEL 8 The schedutil CPU governor uses CPU utilization data available on the CPU scheduler. schedutil is a part of the CPU scheduler and it can access the scheduler's internal data structures directly. schedutil controls how the CPU would raise and lower its frequency in response to system load. You must manually select the schedutil frequency governor as it is not enabled as default. There is one policyX directory per CPU. schedutil is available in the policyX/scaling_governors list of the existing CPUFreq governors in the kernel and is attached to /sys/devices/system/cpu/cpufreq/policyx policy. The policy file can be overwritten to change it. Note that when using intel_pstate scaling drivers, it might be necessary to configure the intel_pstate=passive command line argument for intel_pstate to become available and be listed by the governor. intel_pstate is the default on Intel hardware with any modern CPU. (BZ#1938339) The rt-tests suite rebased to rt-tests-2.1 upstream version The rt-tests suite has been rebased to rt-tests-2.1 version, which provides multiple bug fixes and enhancements. The notable changes over the version include: Fixes to various programs in the rt-tests suite. Fixes to make programs more uniform with the common set of options, for example, the oslat program's option -t --runtime option is renamed to -D to specify the run duration to match the rest of the suite. Implements a new feature to output data in json format. ( BZ#1954387 ) Intel(R) QuickAssist Technology Library (QATlib) was rebased to version 21.05 The qatlib package has been rebased to version 21.05, which provides multiple bug fixes and enhancements. Notable changes include: Adding support for several encryption algorithms: AES-CCM 192/256 ChaCha20-Poly1305 PKE 8K (RSA, DH, ModExp, ModInv) Fixing device enumeration on different nodes Fixing pci_vfio_set_command for 32-bit builds For more information about QATlib installation, check Ensuring that Intel(R) QuickAssist Technology stack is working correctly on RHEL 8 . (BZ#1920237) 4.9. File systems and storage xfs_quota state command now outputs all grace times when multiple quota types are specified The xfs_quota state command now outputs grace times for multiple quota types specified on the command line. Previously, only one was shown even if more than one of -g , -p , or -u was specified. (BZ#1949743) -H option added to the rpc.gssd daemon and the set-home option added to the /etc/nfs.conf file This patch adds the -H option to rpc.gssd and the set-home option into /etc/nfs.conf , but does not change the default behavior. By default, rpc.gssd sets USDHOME to / to avoid possible deadlock that may happen when users' home directories are on an NFS share with Kerberos security. 
If either the -H option is added to rpc.gssd , or set-home=0 is added to /etc/nfs.conf , rpc.gssd does not set USDHOME to / . These options allow you to use Kerberos k5identity files in USDHOME/.k5identity and assume that the NFS home directory is not on an NFS share with Kerberos security. These options are provided for use only in specific environments, such as those that need k5identity files. For more information, see the k5identity man page. (BZ#1868087) The storage RHEL system role now supports LVM VDO volumes Virtual Data Optimizer (VDO) helps to optimize usage of storage volumes. With this enhancement, administrators can use the storage system role to manage compression and deduplication on Logical Volume Manager (LVM) VDO volumes. ( BZ#1882475 ) 4.10. High availability and clusters Local mode version of pcs cluster setup command is now fully supported By default, the pcs cluster setup command automatically synchronizes all configuration files to the cluster nodes. Since RHEL 8.3, the pcs cluster setup command has provided the --corosync-conf option as a Technology Preview. This feature is now fully supported in RHEL 8.5. Specifying this option switches the command to local mode. In this mode, the pcs command-line interface creates a corosync.conf file and saves it to a specified file on the local node only, without communicating with any other node. This allows you to create a corosync.conf file in a script and handle that file by means of the script. ( BZ#1839637 ) Ability to configure watchdog-only SBD for fencing on subset of cluster nodes Previously, to use a watchdog-only SBD configuration, all nodes in the cluster had to use SBD. That prevented using SBD in a cluster where some nodes supported it but other nodes (often remote nodes) required some other form of fencing. Users can now configure a watchdog-only SBD setup using the new fence_watchdog agent, which allows cluster configurations where only some nodes use watchdog-only SBD for fencing and other nodes use other fencing types. A cluster may only have a single such device, and it must be named watchdog . ( BZ#1443666 ) New pcs command to update SCSI fencing device without causing restart of all other resources Updating a SCSI fencing device with the pcs stonith update command causes a restart of all resources running on the same node where the stonith resource was running. The new pcs stonith update-scsi-devices command allows you to update SCSI devices without causing a restart of other cluster resources. ( BZ#1872378 ) New reduced output display option for pcs resource safe-disable command The pcs resource safe-disable and pcs resource disable --safe commands print a lengthy simulation result after an error report. You can now specify the --brief option for those commands to print errors only. The error report now always contains resource IDs of affected resources. ( BZ#1909901 ) pcs now accepts Promoted and Unpromoted as role names The pcs command-line interface now accepts Promoted and Unpromoted anywhere roles are specified in Pacemaker configuration. These role names are the functional equivalent of the Master and Slave Pacemaker roles. Master and Slave remain the names for these roles in configuration displays and help text.
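For example, using hypothetical resource names, the new role name can be passed wherever pcs expects a role, such as in a colocation constraint that keeps an application on the node where a clone is promoted:
pcs constraint colocation add webapp with Promoted database-clone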
( BZ#1885293 ) New pcs resource status display commands The pcs resource status and the pcs stonith status commands now support the following options: You can display the status of resources configured on a specific node with the pcs resource status node= node_id command and the pcs stonith status node= node_id command. You can use these commands to display the status of resources on both cluster and remote nodes. You can display the status of a single resource with the pcs resource status resource_id and the pcs stonith status resource_id commands. You can display the status of all resources with a specified tag with the pcs resource status tag_id and the pcs stonith status tag_id commands. ( BZ#1290830 , BZ#1285269) New LVM volume group flag to control autoactivation LVM volume groups now support a setautoactivation flag which controls whether logical volumes that you create from a volume group will be automatically activated on startup. When creating a volume group that will be managed by Pacemaker in a cluster, set this flag to n with the vgcreate --setautoactivation n command for the volume group to prevent possible data corruption. If you have an existing volume group used in a Pacemaker cluster, set the flag with vgchange --setautoactivation n . ( BZ#1899214 ) 4.11. Dynamic programming languages, web and database servers The nodejs:16 module stream is now fully supported The nodejs:16 module stream, previously available as a Technology preview, is fully supported with the release of the RHSA-2021:5171 advisory. The nodejs:16 module stream now provides Node.js 16.13.1 , which is a Long Term Support (LTS) version. Node.js 16 included in RHEL 8.5 provides numerous new features and bug and security fixes over Node.js 14 available since RHEL 8.3. Notable changes include: The V8 engine has been upgraded to version 9.4. The npm package manager has been upgraded to version 8.1.2. A new Timers Promises API provides an alternative set of timer functions that return Promise objects. Node.js now provides a new experimental Web Streams API. Node.js now includes Corepack , an experimental tool that enables you to use package managers configured in the given project without the need to manually install them. Node.js now provides an experimental ECMAScript modules (ESM) loader hooks API, which consolidates ESM loader hooks. To install the nodejs:16 module stream, use: If you want to upgrade from the nodejs:14 stream, see Switching to a later stream . (BZ#1953991, BZ#2027610) A new module stream: ruby:3.0 RHEL 8.5 introduces Ruby 3.0.2 in a new ruby:3.0 module stream. This version provides a number of performance improvements, bug and security fixes, and new features over Ruby 2.7 distributed with RHEL 8.3. Notable enhancements include: Concurrency and parallelism features: Ractor , an Actor-model abstraction that provides thread-safe parallel execution, is provided as an experimental feature. Fiber Scheduler has been introduced as an experimental feature. Fiber Scheduler intercepts blocking operations, which enables light-weight concurrency without changing existing code. Static analysis features: The RBS language has been introduced, which describes the structure of Ruby programs. The rbs gem has been added to parse type definitions written in RBS . The TypeProf utility has been introduced, which is a type analysis tool for Ruby code. Pattern matching with the case/in expression is no longer experimental. One-line pattern matching, which is an experimental feature, has been redesigned. 
Find pattern has been added as an experimental feature. The following performance improvements have been implemented: Pasting long code to the Interactive Ruby Shell (IRB) is now significantly faster. The measure command has been added to IRB for time measurement. Other notable changes include: Keyword arguments have been separated from other arguments. The default directory for user-installed gems is now USDHOME/.local/share/gem/ unless the USDHOME/.gem/ directory is already present. To install the ruby:3.0 module stream, use: If you want to upgrade from an earlier ruby module stream, see Switching to a later stream . ( BZ#1938942 ) Changes in the default separator for the Python urllib parsing functions To mitigate the Web Cache Poisoning CVE-2021-23336 in the Python urllib library, the default separator for the urllib.parse.parse_qsl and urllib.parse.parse_qs functions is being changed from both ampersand ( & ) and semicolon ( ; ) to only an ampersand. This change was implemented in Python 3.6 with the release of RHEL 8.4, and now is being backported to Python 3.8 and Python 2.7. The change of the default separator is potentially backwards incompatible, therefore Red Hat provides a way to configure the behavior in Python packages where the default separator has been changed. In addition, the affected urllib parsing functions issue a warning if they detect that a customer's application has been affected by the change. For more information, see the Mitigation of Web Cache Poisoning in the Python urllib library (CVE-2021-23336) Knowledgebase article. Python 3.9 is unaffected and already includes the new default separator ( & ), which can be changed only by passing the separator parameter when calling the urllib.parse.parse_qsl and urllib.parse.parse_qs functions in Python code. (BZ#1935686, BZ#1931555, BZ#1969517) The Python ipaddress module no longer allows zeros in IPv4 addresses To mitigate CVE-2021-29921 , the Python ipaddress module now rejects IPv4 addresses with leading zeros with an AddressValueError: Leading zeros are not permitted error. This change has been introduced in the python38 and python39 modules. Earlier Python versions distributed in RHEL are not affected by CVE-2021-29921. Customers who rely on the behavior can pre-process their IPv4 address inputs to strip the leading zeros off. For example: To strip the leading zeros off with an explicit loop for readability, use: (BZ#1986007, BZ#1970504, BZ#1970505) The php:7.4 module stream rebased to version 7.4.19 The PHP scripting language, provided by the php:7.4 module stream, has been upgraded from version 7.4.6 to version 7.4.19. This update provides multiple security and bug fixes. (BZ#1944110) A new package: pg_repack A new pg_repack package has been added to the postgresql:12 and postgresql:13 module streams. The pg_repack package provides a PostgreSQL extension that lets you remove bloat from tables and indexes, and optionally restore physical order of clustered indexes. (BZ#1967193, BZ#1935889) A new module stream: nginx:1.20 The nginx 1.20 web and proxy server is now available as the nginx:1.20 module stream. This update provides a number of bug fixes, security fixes, new features, and enhancements over the previously released version 1.18. New features: nginx now supports client SSL certificate validation with Online Certificate Status Protocol (OCSP). nginx now supports cache clearing based on the minimum amount of free space. This support is implemented as the min_free parameter of the proxy_cache_path directive. 
A new ngx_stream_set_module module has been added, which enables you to set a value for a variable. Enhanced directives: Multiple new directives are now available, such as ssl_conf_command and ssl_reject_handshake . The proxy_cookie_flags directive now supports variables. Improved support for HTTP/2: The ngx_http_v2 module now includes the lingering_close , lingering_time , and lingering_timeout directives. Handling connections in HTTP/2 has been aligned with HTTP/1.x. From nginx 1.20 , use the keepalive_timeout and keepalive_requests directives instead of the removed http2_recv_timeout , http2_idle_timeout , and http2_max_requests directives. To install the nginx:1.20 stream, use: If you want to upgrade from an earlier nginx stream, see Switching to a later stream . (BZ#1945671) The squid:4 module stream rebased to version 4.15 The Squid proxy server, available in the squid:4 module stream, has been upgraded from version 4.11 to version 4.15. This update provides various bug and security fixes. (BZ#1964384) LVM system.devices file feature now available in RHEL 8 RHEL 8.5 introduces the LVM system.devices file feature. By creating a list of devices in the /etc/lvm/devices/system.devices file, you can select specific devices for LVM to recognize and use, and prevent LVM from using unwanted devices. To enable the system.devices file feature, set use_devicesfile=1 in the lvm.conf configuration file and add devices to the system.devices file. LVM ignores any devices filter settings while the system.devices file feature is enabled. To prevent warning messages, remove your filter settings from the lvm.conf file. For more information, see the lvmdevices(8) man page. (BZ#1922312) quota now supports HPE XFS The quota utilities now provide support for the HPE XFS file system. As a result, users of HPE XFS can monitor and manage user and group disk usage through quota utilities. (BZ#1945408) mutt rebased to version 2.0.7 The Mutt email client has been updated to version 2.0.7, which provides a number of enhancements and bug fixes. Notable changes include: Mutt now provides support for the OAuth 2.0 authorization protocol using the XOAUTH2 mechanism. Mutt now also supports the OAUTHBEARER authentication mechanism for the IMAP, POP, and SMTP protocols. The OAuth-based functionality is provided through external scripts. As a result, you can connect Mutt with various cloud email providers, such as Gmail, using authentication tokens. For more information on how to set up Mutt with OAuth support, see How to set up Mutt with Gmail using OAuth2 authentication . Mutt adds support for domain-literal email addresses, for example, user@[IPv6:fcXX:... ] . The new USDssl_use_tlsv1_3 configuration variable allows TLS 1.3 connections if they are supported by the email server. This variable is enabled by default. The new USDimap_deflate variable adds support for the COMPRESS=DEFLATE compression. The variable is disabled by default. The USDssl_starttls variable no longer controls aborting an unencrypted IMAP PREAUTH connection. Use the USDssl_force_tls variable instead if you rely on the STARTTLS process. Note that even after an update to the new Mutt version, the ssl_force_tls configuration variable still defaults to no to prevent RHEL users from encountering problems in their existing environments. In the upstream version of Mutt , ssl_force_tls is now enabled by default. ( BZ#1912614 , BZ#1890084 ) 4.12.
Compilers and development tools Go Toolset rebased to version 1.16.7 Go Toolset has been upgraded to version 1.16.7. Notable changes include: The GO111MODULE environment variable is now set to on by default. To revert this setting, change GO111MODULE to auto . The Go linker now uses less resources and improves code robustness and maintainability. This applies to all supported architectures and operating systems. With the new embed package you can access embedded files while compiling programs. All functions of the io/ioutil package have been moved to the io and os packages. While you can still use io/ioutil , the io and os packages provide better definitions. The Delve debugger has been rebased to 1.6.0 and now supports Go 1.16.7 Toolset. For more information, see Using Go Toolset . (BZ#1938071) Rust Toolset rebased to version 1.54.0 Rust Toolset has been updated to version 1.54.0. Notable changes include: The Rust standard library is now available for the wasm32-unknown-unknown target. With this enhancement, you can generate WebAssembly binaries, including newly stabilized intrinsics. Rust now includes the IntoIterator implementation for arrays. With this enhancement, you can use the IntoIterator trait to iterate over arrays by value and pass arrays to methods. However, array.into_iter() still iterates values by reference until the 2021 edition of Rust. The syntax for or patterns now allows nesting anywhere in the pattern. For example: Pattern(1|2) instead of Pattern(1)|Pattern(2) . Unicode identifiers can now contain all valid identifier characters as defined in the Unicode Standard Annex #31. Methods and trait implementations have been stabilized. Incremental compilation is re-enabled by default. For more information, see Using Rust Toolset . (BZ#1945805) LLVM Toolset rebased to version 12.0.1 LLVM Toolset has been upgraded to version 12.0.1. Notable changes include: The new compiler flag -march=x86-64-v[234] has been added. The compiler flag -fasynchronous-unwind-tables of the Clang compiler is now the default on Linux AArch64/PowerPC. The Clang compiler now supports the C++20 likelihood attributes [[likely]] and [[unlikely]]. The new function attribute tune-cpu has been added. It allows microarchitectural optimizations to be applied independently from the target-cpu attribute or TargetMachine CPU. The new sanitizer -fsanitize=unsigned-shift-base has been added to the integer sanitizer -fsanitize=integer to improve security. Code generation on PowerPC targets has been optimized. The WebAssembly backend is now enabled in LLVM. With this enhancement, you can generate WebAssembly binaries with LLVM and Clang. For more information, see Using LLVM Toolset . (BZ#1927937) CMake rebased to version 3.20.2 CMake has been rebased from 3.18.2 to 3.20.2. To use CMake on a project that requires the version 3.20.2 or less, use the command cmake_minimum_required(version 3.20.2). Notable changes include: C++23 compiler modes can now be specified by using the target properties CXX_STANDARD , CUDA_STANDARD , OBJCXX_STANDARD , or by using the cxx_std_23 meta-feature of the compile features function. CUDA language support now allows the NVIDIA CUDA compiler to be a symbolic link. The Intel oneAPI NextGen LLVM compilers are now supported with the IntelLLVM compiler ID . CMake now facilitates cross compiling for Android by merging with the Android NDK's toolchain file. When running cmake(1) to generate a project build system, unknown command-line arguments starting with a hyphen are now rejected. 
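As a small illustration of the new language-mode support, a build tree can request the C++23 compiler mode from the command line; the source and build directory paths are placeholders:
# configure a build tree with the C++23 standard and build it
cmake -S . -B build -DCMAKE_CXX_STANDARD=23
cmake --build build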
For further information on new features and deprecated functionalities, see the CMake Release Notes . (BZ#1957947) New GCC Toolset 11 GCC Toolset 11 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository. The following components have been rebased since GCC Toolset 10: GCC to version 11.2 GDB to version 10.2 Valgrind to version 3.17.0 SystemTap to version 4.5 binutils to version 2.36 elfutils to version 0.185 dwz to version 0.14 Annobin to version 9.85 For a complete list of components, see GCC Toolset 11 . To install GCC Toolset 11, run the following command as root: To run a tool from GCC Toolset 11: To run a shell session where tool versions from GCC Toolset 11 override system versions of these tools: For more information, see Using GCC Toolset . The GCC Toolset 11 components are also available in the two container images: rhel8/gcc-toolset-11-toolchain , which includes the GCC compiler, the GDB debugger, and the make automation tool. rhel8/gcc-toolset-11-perftools , which includes the performance monitoring tools, such as SystemTap and Valgrind. To pull a container image, run the following command as root: Note that only the GCC Toolset 11 container images are now supported. Container images of earlier GCC Toolset versions are deprecated. (BZ#1953094) .NET updated to version 6.0 Red Hat Enterprise Linux 8.5 is distributed with .NET version 6.0. Notable improvements include: Support for 64-bit Arm (aarch64) Support for IBM Z and LinuxONE (s390x) For more information, see Release Notes for .NET 6.0 RPM packages and Release Notes for .NET 6.0 containers . ( BZ#2022794 ) GCC Toolset 11: GCC rebased to version 11.2 In GCC Toolset 11, the GCC package has been updated to version 11.2. Notable bug fixes and enhancements include: General improvements GCC now defaults to the DWARF Version 5 debugging format. Column numbers shown in diagnostics represent real column numbers by default and respect multicolumn characters. The straight-line code vectorizer considers the whole function when vectorizing. A series of conditional expressions that compare the same variable can be transformed into a switch statement if each of them contains a comparison expression. Interprocedural optimization improvements: A new IPA-modref pass, controlled by the -fipa-modref option, tracks side effects of function calls and improves the precision of points-to analysis. The identical code folding pass, controlled by the -fipa-icf option, was significantly improved to increase the number of unified functions and reduce compile-time memory use. Link-time optimization improvements: Memory allocation during linking was improved to reduce peak memory use. Using a new GCC_EXTRA_DIAGNOSTIC_OUTPUT environment variable in IDEs, you can request machine-readable "fix-it hints" without adjusting build flags. The static analyzer, run by the -fanalyzer option, is improved significantly with numerous bug fixes and enhancements provided. Language-specific improvements C family C and C++ compilers support non-rectangular loop nests in OpenMP constructs and the allocator routines of the OpenMP 5.0 specification. Attributes: The new no_stack_protector attribute marks functions that should not be instrumented with stack protection ( -fstack-protector ). The improved malloc attribute can be used to identify allocator and deallocator API pairs. 
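As a brief, hedged illustration of the improved malloc attribute, the following sketch pairs a hypothetical allocator with its deallocator so that GCC 11 can diagnose mismatched deallocation calls; the function names are examples only. Create a file alloc_demo.c with the following content:

#include <stdlib.h>

void my_free(void *ptr);

/* Declare my_alloc as an allocator whose results must be released with my_free. */
void *my_alloc(size_t n) __attribute__((malloc, malloc(my_free, 1)));

void *my_alloc(size_t n) { return malloc(n); }
void my_free(void *ptr)  { free(ptr); }

int main(void)
{
    char *buf = my_alloc(64);
    free(buf);   /* Mismatched deallocator: GCC 11 is expected to warn here. */
    return 0;
}

Compile it with the GCC Toolset 11 compiler to see the diagnostic:

scl enable gcc-toolset-11 'gcc -Wall -O2 -c alloc_demo.c'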
New warnings: -Wsizeof-array-div , enabled by the -Wall option, warns about divisions of two sizeof operators when the first one is applied to an array and the divisor does not equal the size of the array element. -Wstringop-overread , enabled by default, warns about calls to string functions that try to read past the end of the arrays passed to them as arguments. Enhanced warnings: -Wfree-nonheap-object detects more instances of calls to deallocation functions with pointers not returned from a dynamic memory allocation function. -Wmaybe-uninitialized diagnoses the passing of pointers and references to uninitialized memory to functions that take const -qualified arguments. -Wuninitialized detects reads from uninitialized dynamically allocated memory. C Several new features from the upcoming C2X revision of the ISO C standard are supported with the -std=c2x and -std=gnu2x options. For example: The standard attribute is supported. The __has_c_attribute preprocessor operator is supported. Labels may appear before declarations and at the end of a compound statement. C++ The default mode is changed to -std=gnu++17 . The C++ library libstdc++ has improved C++17 support now. Several new C++20 features are implemented. Note that C++20 support is experimental. For more information about the features, see C++20 Language Features . The C++ front end has experimental support for some of the upcoming C++23 draft features. New warnings: -Wctad-maybe-unsupported , disabled by default, warns about performing class template argument deduction on a type with no deduction guides. -Wrange-loop-construct , enabled by -Wall , warns when a range-based for loop is creating unnecessary and resource inefficient copies. -Wmismatched-new-delete , enabled by -Wall , warns about calls to operator delete with pointers returned from mismatched forms of operator new or from other mismatched allocation functions. -Wvexing-parse , enabled by default, warns about the most vexing parse rule: the cases when a declaration looks like a variable definition, but the C++ language requires it to be interpreted as a function declaration. Architecture-specific improvements The 64-bit ARM architecture The Armv8-R architecture is supported through the -march=armv8-r option. GCC can autovectorize operations performing addition, subtraction, multiplication, and the accumulate and subtract variants on complex numbers. AMD and Intel 64-bit architectures The following Intel CPUs are supported: Sapphire Rapids, Alder Lake, and Rocket Lake. New ISA extension support for Intel AVX-VNNI is added. The -mavxvnni compiler switch controls the AVX-VNNI intrinsics. AMD CPUs based on the znver3 core are supported with the new -march=znver3 option. Three microarchitecture levels defined in the x86-64 psABI supplement are supported with the new -march=x86-64-v2 , -march=x86-64-v3 , and -march=x86-64-v4 options. (BZ#1946782) GCC Toolset 11: dwz now supports DWARF 5 In GCC Toolset 11, the dwz tool now supports the DWARF Version 5 debugging format. (BZ#1948709) GCC Toolset 11: GCC now supports the AIA user interrupts In GCC Toolset 11, GCC now supports the Accelerator Interfacing Architecture (AIA) user interrupts. (BZ#1927516) GCC Toolset 11: Generic SVE tuning defaults improved In GCC Toolset 11, generic SVE tuning defaults have been improved on the 64-bit ARM architecture. (BZ#1979715) SystemTap rebased to version 4.5 The SystemTap package has been updated to version 4.5. 
Notable bug fixes and enhancements include: 32-bit floating-point variables are automatically widened to double variables and, as a result, can be accessed directly as USDcontext variables. enum values can be accessed as USDcontext variables. The BPF uconversions tapset has been extended and includes more tapset functions to access values in user space, for example user_long_error() . Concurrency control has been significantly improved to provide stable operation on large servers. For further information, see the upstream SystemTap 4.5 release notes . ( BZ#1933889 ) elfutils rebased to version 0.185 The elfutils package has been updated to version 0.185. Notable bug fixes and enhancements include: The eu-elflint and eu-readelf tools now recognize and show the SHF_GNU_RETAIN and SHT_X86_64_UNWIND flags on ELF sections. The DEBUGINFOD_SONAME macro has been added to debuginfod.h . This macro can be used with the dlopen function to load the libdebuginfod.so library dynamically from an application. A new function debuginfod_set_verbose_fd has been added to the debuginfod-client library. This function enhances the debuginfod_find_* queries functionality by redirecting the verbose output to a separate file. Setting the DEBUGINFOD_VERBOSE environment variable now shows more information about which servers the debuginfod client connects to and the HTTP responses of those servers. The debuginfod server provides a new thread-busy metric and more detailed error metrics to make it easier to inspect processes that run on the debuginfod server. The libdw library now transparently handles the DW_FORM_indirect location value so that the dwarf_whatform function returns the actual FORM of an attribute. To reduce network traffic, the debuginfod-client library stores negative results in a cache, and client objects can reuse an existing connection. ( BZ#1933890 ) Valgrind rebased to version 3.17.0 The Valgrind package has been updated to version 3.17.0. Notable bug fixes and enhancements include: Valgrind can read the DWARF Version 5 debugging format. Valgrind supports debugging queries to the debuginfod server. The ARMv8.2 processor instructions are partially supported. The Power ISA v.3.1 instructions on POWER10 processors are partially supported. The IBM z14 processor instructions are supported. Most IBM z15 instructions are supported. The Valgrind tool suite supports the miscellaneous-instruction-extensions facility 3 and the vector-enhancements facility 2 for the IBM z15 processor. As a result, Valgrind runs programs compiled with GCC -march=z15 correctly and provides improved performance and debugging experience. The --track-fds=yes option respects -q ( --quiet ) and ignores the standard file descriptors stdin , stdout , and stderr by default. To track the standard file descriptors, use the --track-fds=all option. The DHAT tool has two new modes of operation: --mode=copy and --mode=ad-hoc . ( BZ#1933891 ) Dyninst rebased to version 11.0.0 The Dyninst package has been updated to version 11.0.0. Notable bug fixes and enhancements include: Support for the debuginfod server and for fetching separate debuginfo files. Improved detection of indirect calls to procedure linkage table (PLT) stubs. Improved C++ name demangling. Fixed memory leaks during code emitting. ( BZ#1933893 ) DAWR functionality improved in GDB on IBM POWER10 With this enhancement, new hardware watchpoint capabilities are now enabled for GDB on the IBM POWER10 processors. For example, a new set of DAWR/DAWRX registers has been added. 
(BZ#1854784) GCC Toolset 11: GDB rebased to version 10.2 In GCC Toolset 11, the GDB package has been updated to version 10.2. Notable bug fixes and enhancements include: New features Multithreaded symbol loading is enabled by default on architectures that support this feature. This change provides better performance for programs with many symbols. Text User Interface (TUI) windows can be arranged horizontally. GDB supports debugging multiple target connections simultaneously but this support is experimental and limited. For example, you can connect each inferior to a different remote server that runs on a different machine, or you can use one inferior to debug a local native process or a core dump or some other process. New and improved commands A new tui new-layout name window weight [ window weight... ] command creates a new text user interface (TUI) layout, you can also specify a layout name and displayed windows. The improved alias [-a] [--] alias = command [ default-args ] command can specify default arguments when creating a new alias. The set exec-file-mismatch and show exec-file-mismatch commands set and show a new exec-file-mismatch option. When GDB attaches to a running process, this option controls how GDB reacts when it detects a mismatch between the current executable file loaded by GDB and the executable file used to start the process. Python API The gdb.register_window_type function implements new TUI windows in Python. You can now query dynamic types. Instances of the gdb.Type class can have a new boolean attribute dynamic and the gdb.Type.sizeof attribute can have value None for dynamic types. If Type.fields() returns a field of a dynamic type, the value of its bitpos attribute can be None . A new gdb.COMMAND_TUI constant registers Python commands as members of the TUI help class of commands. A new gdb.PendingFrame.architecture() method retrieves the architecture of the pending frame. A new gdb.Architecture.registers method returns a gdb.RegisterDescriptorIterator object, an iterator that returns gdb.RegisterDescriptor objects. Such objects do not provide the value of a register but help understand which registers are available for an architecture. A new gdb.Architecture.register_groups method returns a gdb.RegisterGroupIterator object, an iterator that returns gdb.RegisterGroup objects. Such objects help understand which register groups are available for an architecture. (BZ#1954332) GCC Toolset 11: SystemTap rebased to version 4.5 In GCC Toolset 11, the SystemTap package has been updated to version 4.5. Notable bug fixes and enhancements include: 32-bit floating-point variables are now automatically widened to double variables and, as a result, can be accessed directly as USDcontext variables. enum values can now be accessed as USDcontext variables. The BPF uconversions tapset has been extended and now includes more tapset functions to access values in user space, for example user_long_error() . Concurrency control has been significantly improved to provide stable operation on large servers. For further information, see the upstream SystemTap 4.5 release notes . ( BZ#1957944 ) GCC Toolset 11: elfutils rebased to version 0.185 In GCC Toolset 11, the elfutils package has been updated to version 0.185. Notable bug fixes and enhancements include: The eu-elflint and eu-readelf tools now recognize and show the SHF_GNU_RETAIN and SHT_X86_64_UNWIND flags on ELF sections. The DEBUGINFOD_SONAME macro has been added to debuginfod.h . 
This macro can be used with the dlopen function to load the libdebuginfod.so library dynamically from an application. A new function debuginfod_set_verbose_fd has been added to the debuginfod-client library. This function enhances the debuginfod_find_* queries functionality by redirecting the verbose output to a separate file. Setting the DEBUGINFOD_VERBOSE environment variable now shows more information about which servers the debuginfod client connects to and the HTTP responses of those servers. The debuginfod server provides a new thread-busy metric and more detailed error metrics to make it easier to inspect processes that run on the debuginfod server. The libdw library now transparently handles the DW_FORM_indirect location value so that the dwarf_whatform function returns the actual FORM of an attribute. The debuginfod-client library now stores negative results in a cache and client objects can reuse an existing connection. This way unnecessary network traffic when using the library is prevented. ( BZ#1957225 ) GCC Toolset 11: Valgrind rebased to version 3.17.0 In GCC Toolset 11, the Valgrind package has been updated to version 3.17.0. Notable bug fixes and enhancements include: Valgrind can now read the DWARF Version 5 debugging format. Valgrind now supports debugging queries to the debuginfod server. Valgrind now partially supports the ARMv8.2 processor instructions. Valgrind now supports the IBM z14 processor instructions. Valgrind now partially supports the Power ISA v.3.1 instructions on POWER10 processors. The --track-fds=yes option now respects -q ( --quiet ) and ignores the standard file descriptors stdin , stdout , and stderr by default. To track the standard file descriptors, use the --track-fds=all option. The DHAT tool now has two new modes of operation: --mode=copy and --mode=ad-hoc . ( BZ#1957226 ) GCC Toolset 11: Dyninst rebased to version 11.0.0 In GCC Toolset 11, the Dyninst package has been updated to version 11.0.0. Notable bug fixes and enhancements include: Support for the debuginfod server and for fetching separate debuginfo files. Improved detection of indirect calls to procedure linkage table (PLT) stubs. Improved C++ name demangling. Fixed memory leaks during code emitting. ( BZ#1957942 ) PAPI library support for Fujitsu A64FX added PAPI library support for Fujitsu A64FX has been added. With this feature, developers can collect hardware statistics. (BZ#1908126) The PCP package was rebased to 5.3.1 The Performance Co-Pilot (PCP) package has been rebased to version 5.3.1. This release includes bug fixes, enhancements, and new features. Notable changes include: Scalability improvements, which now support centrally logged performance metrics for hundreds of hosts ( pmlogger farms) and automatic monitoring with performance rules ( pmie farms). Resolved memory leaks in the pmproxy service and the libpcp_web API library, and added instrumentation and new metrics to pmproxy . A new pcp-ss tool for historical socket statistics. Improvements to the pcp-htop tool. Extensions to the over-the-wire PCP protocol which now support higher resolution timestamps. ( BZ#1922040 ) The grafana package was rebased to version 7.5.9 The grafana package has been rebased to version 7.5.9. Notable changes include: New time series panel (beta) New pie chart panel (beta) Alerting support for Loki Multiple new query transformations For more information, see What's New in Grafana v7.4 , What's New in Grafana v7.5 . 
( BZ#1921191 ) The grafana-pcp package was rebased to 3.1.0 The grafana-pcp package has been rebased to version 3.1.0. Notable changes include: Performance Co-Pilot (PCP) Vector Checklist dashboards use a new time series panel, show units in graphs, and contain updated help texts. Adding pmproxy URL and hostspec variables to PCP Vector Host Overview and PCP Checklist dashboards. All dashboards display datasource selection. Marking all included dashboards as readonly. Adding compatibility with Grafana 8. ( BZ#1921190 ) grafana-container rebased to version 7.5.9 The rhel8/grafana container image provides Grafana. Notable changes include: The grafana package is now updated to version 7.5.9. The grafana-pcp package is now updated to version 3.1.0. The container now supports the GF_INSTALL_PLUGINS environment variable to install custom Grafana plugins at container startup The rebase updates the rhel8/grafana image in the Red Hat Container Registry. To pull this container image, execute the following command: ( BZ#1971557 ) pcp-container rebased to version 5.3.1 The rhel8/pcp container image provides Performance Co-Pilot. The pcp-container package has been upgraded to version 5.3.1. Notable changes include: The pcp package is now updated to version 5.3.1. The rebase updates the rhel8/pcp image in the Red Hat Container Registry. To pull this container image, execute the following command: ( BZ#1974912 ) The new pcp-ss PCP utility is now available The pcp-ss PCP utility reports socket statistics collected by the pmdasockets(1) PMDA. The command is compatible with many of the ss command line options and reporting formats. It also offers the advantages of local or remote monitoring in live mode and historical replay from a previously recorded PCP archive. ( BZ#1879350 ) Power consumption metrics now available in PCP The new pmda-denki Performance Metrics Domain Agent (PMDA) reports metrics related to power consumption. Specifically, it reports: Consumption metrics based on Running Average Power Limit (RAPL) readings, available on recent Intel CPUs Consumption metrics based on battery discharge, available on systems which have a battery (BZ#1629455) 4.13. Identity Management IdM now supports new password policy options With this update, Identity Management (IdM) supports additional libpwquality library options: --maxrepeat Specifies the maximum number of the same character in sequence. --maxsequence Specifies the maximum length of monotonic character sequences ( abcd ). --dictcheck Checks if the password is a dictionary word. --usercheck Checks if the password contains the username. Use the ipa pwpolicy-mod command to apply these options. For example, to apply the user name check to all new passwords suggested by the users in the managers group: If any of the new password policy options are set, then the minimum length of passwords is 6 characters regardless of the value of the --minlength option. The new password policy settings are applied only to new passwords. In a mixed environment with RHEL 7 and RHEL 8 servers, the new password policy settings are enforced only on servers running on RHEL 8.4 and later. If a user is logged in to an IdM client and the IdM client is communicating with an IdM server running on RHEL 8.3 or earlier, then the new password policy requirements set by the system administrator will not be applied. To ensure consistent behavior, upgrade or update all servers to RHEL 8.4 and later. 
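For example, the new checks can be combined in a single policy update for the managers group mentioned above, and the resulting policy can then be reviewed; adjust the group name and limits to your environment:

ipa pwpolicy-mod managers --maxrepeat=2 --maxsequence=3 --dictcheck=True --usercheck=True
ipa pwpolicy-show managers

Remember that setting any of these options raises the effective minimum password length to 6 characters, as described above.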
(JIRA:RHELPLAN-89566) Improved the SSSD debug logging by adding a unique identifier tag for each request As SSSD processes requests asynchronously, it is not easy to follow log entries for individual requests in the backend logs, as messages from different requests are added to the same log file. To improve the readability of debug logs, a unique request identifier is now added to log messages in the form of RID#<integer> . This allows you to isolate logs pertaining to an individual request, and you can track requests from start to finish across log files from multiple SSSD components. For example, the following sample output from an SSSD log file shows the unique identifiers RID#3 and RID#4 for two different requests: (JIRA:RHELPLAN-92473) IdM now supports the automember and server Ansible modules With this update, the ansible-freeipa package contains the ipaautomember and ipaserver modules: Using the ipaautomember module, you can add, remove, and modify automember rules and conditions. As a result, future IdM users and hosts that meet the conditions will be assigned to IdM groups automatically. Using the ipaserver module, you can ensure various parameters of the presence or absence of a server in the IdM topology. You can also ensure that a replica is hidden or visible. (JIRA:RHELPLAN-96640) IdM performance baseline With this update, a RHEL 8.5 IdM server with 4 CPUs and 8GB of RAM has been tested to successfully enroll 130 IdM clients simultaneously. (JIRA:RHELPLAN-97145) SSSD Kerberos cache performance has been improved The System Security Services Daemon (SSSD) Kerberos Cache Manager (KCM) service now includes the new operation KCM_GET_CRED_LIST . This enhancement improves KCM performance by reducing the number of input and output operations while iterating through a credentials cache. ( BZ#1956388 ) SSSD now logs backtraces by default With this enhancement, SSSD now stores detailed debug logs in an in-memory buffer and appends them to log files when a failure occurs. By default, the following error levels trigger a backtrace: Level 0: fatal failures Level 1: critical failures Level 2: serious failures You can modify this behavior for each SSSD process by setting the debug_level option in the corresponding section of the sssd.conf configuration file: If you set the debugging level to 0, only level 0 events trigger a backtrace. If you set the debugging level to 1, levels 0 and 1 trigger a backtrace. If you set the debugging level to 2 or higher, events at level 0 through 2 trigger a backtrace. You can disable this feature per SSSD process by setting the debug_backtrace_enabled option to false in the corresponding section of sssd.conf : ( BZ#1949149 ) SSSD KCM now supports the auto-renewal of ticket granting tickets With this enhancement, you can now configure the System Security Services Daemon (SSSD) Kerberos Cache Manager (KCM) service to auto-renew ticket granting tickets (TGTs) stored in the KCM credential cache on an Identity Management (IdM) server. Renewals are only attempted when half of the ticket lifetime has been reached. To use auto-renewal, the key distribution center (KDC) on the IdM server must be configured to support renewable Kerberos tickets. You can enable TGT auto-renewal by modifying the [kcm] section of the /etc/sssd/sssd.conf file. 
For example, you can configure SSSD to check for renewable KCM-stored TGTs every 60 minutes and attempt auto-renewal if half of the ticket lifetime has been reached by adding the following options to the file: Alternatively, you can configure SSSD to inherit krb5 options for renewals from an existing domain: For more information, see the Renewals section of the sssd-kcm man page. ( BZ#1627112 ) samba rebased to version 4.14.4 Publishing printers in Active Directory (AD) has increased reliability, and additional printer features have been added to the published information in AD. Also, Samba now supports Windows drivers for the ARM64 architecture. The ctdb isnotrecmaster command has been removed. As an alternative, use ctdb pnn or the ctdb recmaster commands. The clustered trivial database (CTDB) ctdb natgw master and slave-only parameters have been renamed to ctdb natgw leader and follower-only . Back up the database files before starting Samba. When the smbd , nmbd , or winbind services start Samba automatically updates its tdb database files. Note that Red Hat does not support downgrading tdb database files. After updating Samba, verify the /etc/samba/smb.conf file using the testparm utility. For further information about notable changes, read the upstream release notes before updating. ( BZ#1944657 ) The dnaInterval configuration attribute is now supported With this update, Red Hat Directory Server supports setting the dnaInterval attribute of the Distributed Numeric Assignment (DNA) plug-in in the cn= <DNA_config_entry> ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config entry. The DNA plug-in generates unique values for specified attributes. In a replication environment, servers can share the same range. To avoid overlaps on different servers, you can set the dnaInterval attribute to skip some values. For example, if the interval is 3 and the first number in the range is 1 , the number used in the range is 4 , then 7 , then 10 . For further details, see the dnaInterval parameter description. ( BZ#1938239 ) Directory Server rebased to version 1.4.3.27 The 389-ds-base packages have been upgraded to upstream version 1.4.3.27, which provides a number of bug fixes and enhancements over the version. For a complete list of notable changes, read the upstream release notes before updating: https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-24.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-23.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-22.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-21.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-20.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-19.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-18.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-17.html ( BZ#1947044 ) Directory Server now supports temporary passwords This enhancement enables administrators to configure temporary password rules in global and local password policies. With these rules, you can configure that, when an administrator resets the password of a user, the password is temporary and only valid for a specific time and for a defined number of attempts. Additionally, you can configure that the expiration time does not start directly when the administrator changes the password. 
As a result, Directory Server allows the user only to authenticate using the temporary password for a finite period of time or attempts. Once the user authenticates successfully, Directory Server allows this user only to change its password. (BZ#1626633) IdM KDC now issues Kerberos tickets with PAC information to increase security With this update, to increase security, RHEL Identity Management (IdM) now issues Kerberos tickets with Privilege Attribute Certificate (PAC) information by default in new deployments. A PAC has rich information about a Kerberos principal, including its Security Identifier (SID), group memberships, and home directory information. As a result, Kerberos tickets are less susceptible to manipulation by malicious servers. SIDs, which Microsoft Active Directory (AD) uses by default, are globally unique identifiers that are never reused. SIDs express multiple namespaces: each domain has a SID, which is a prefix in the SID of each object. Starting with RHEL 8.5, when you install an IdM server or replica, the installation script generates SIDs for users and groups by default. This allows IdM to work with PAC data. If you installed IdM before RHEL 8.5, and you have not configured a trust with an AD domain, you may not have generated SIDs for your IdM objects. For more information about generating SIDs for your IdM objects, see Enabling Security Identifiers (SIDs) in IdM . By evaluating PAC information in Kerberos tickets, you can control resource access with much greater detail. For example, the Administrator account in one domain has a uniquely different SID than the Administrator account in any other domain. In an IdM environment with a trust to an AD domain, you can set access controls based on globally unique SIDs rather than simple user names or UIDs that might repeat in different locations, such as every Linux root account having a UID of 0. (Jira:RHELPLAN-159143) Directory Server provides monitoring settings that can prevent database corruption caused by lock exhaustion This update adds the nsslapd-db-locks-monitoring-enable parameter to the cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config entry. If it is enabled, which is the default, Directory Server aborts all of the searches if the number of active database locks is higher than the percentage threshold configured in nsslapd-db-locks-monitoring-threshold . If an issue is encountered, the administrator can increase the number of database locks in the nsslapd-db-locks parameter in the cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config entry. This can prevent data corruption. Additionally, the administrator now can set a time interval in milliseconds that the thread sleeps between the checks. For further details, see the parameter descriptions in the Red Hat Directory Server Configuration, Command, and File Reference . ( BZ#1812286 ) Directory Server can exclude attributes and suffixes from the retro changelog database This enhancement adds the nsslapd-exclude-attrs and nsslapd-exclude-suffix parameters to Directory Server. You can set these parameters in the cn=Retro Changelog Plugin,cn=plugins,cn=config entry to exclude certain attributes or suffixes from the retro changelog database. ( BZ#1850664 ) Directory Server supports the entryUUID attribute With this enhancement, Directory Server supports the entryUUID attribute to be compliant with RFC 4530 . For example, with support for entryUUID , migrations from OpenLDAP are easier. 
By default, Directory Server adds the entryUUID attribute only to new entries. To manually add it to existing entries, use the dsconf <instance_name> plugin entryuuid fixup command. (BZ#1944494) Added a new message to help set up nsSSLPersonalitySSL Previously, an RHDS instance often failed to start if the TLS certificate nickname did not match the value of the configuration parameter nsSSLPersonalitySSL . This mismatch happened when a customer copied the NSS DB from another instance or exported the certificate's data but forgot to set the nsSSLPersonalitySSL value accordingly. With this update, Directory Server logs an additional message, which should help a user set up nsSSLPersonalitySSL correctly. ( BZ#1895460 ) 4.14. Desktop You can now connect to a network at the login screen With this update, you can now connect to your network and configure certain network options at the GNOME Display Manager (GDM) login screen. As a result, you can log in as an enterprise user whose home directory is stored on a remote server. The login screen supports the following network options: Wired network Wireless network, including networks protected by a password Virtual Private Network (VPN) The login screen cannot open windows for additional network configuration. As a consequence, you cannot use the following network options at the login screen: Networks that open a captive portal Modem connections Wireless networks with enterprise WPA or WPA2 encryption that have not been preconfigured The network options at the login screen are disabled by default. To enable the network settings, use the following procedure: Create the /etc/polkit-1/rules.d/org.gnome.gdm.rules file with the following content: Restart GDM: Warning Restarting GDM terminates all your graphical user sessions. At the login screen, access the network settings in the menu on the right side of the top panel. ( BZ#1935261 ) Displaying the system security classification at login You can now configure the GNOME Display Manager (GDM) login screen to display an overlay banner that contains a predefined message. This is useful for deployments where the user is required to read the security classification of the system before logging in. To enable the overlay banner and configure a security classification message, use the following procedure: Install the gnome-shell-extension-heads-up-display package: Create the /etc/dconf/db/gdm.d/99-hud-message file with the following content: Replace the following values with text that describes the security classification of your system: Security classification title A short heading that identifies the security classification. Security classification description A longer message that provides additional details, such as references to various guidelines. Update the dconf database: Reboot the system. ( BZ#1651378 ) Flicker free boot is available You can now enable flicker free boot on your system. When flicker free boot is enabled, it eliminates abrupt graphical transitions during the system boot process, and the display does not briefly turn off during boot. To enable flicker free boot, use the following procedure: Configure the boot loader menu to hide by default: Update the boot loader configuration: On UEFI systems: On legacy BIOS systems: Reboot the system. As a result, the boot loader menu does not display during system boot, and the boot process is graphically smooth. To access the boot loader menu, repeatedly press Esc after turning on the system.
(JIRA:RHELPLAN-99148) Updated support for emoji This release updates support for Unicode emoji characters from version 11 to version 13 of the emoji standard. As a result, you can now use more emoji characters on RHEL. The following packages that provide emoji functionality have been rebased: Package version Rebased to version cldr-emoji-annotation 33.1.0 38 google-noto-emoji-fonts 20180508 20200723 unicode-emoji 10.90.20180207 13.0 (JIRA:RHELPLAN-61867) You can set a default desktop session for all users With this update, you can now configure a default desktop session that is preselected for all users that have not logged in yet. If a user logs in using a different session than the default, their selection persists to their login. To configure the default session, use the following procedure: Copy the configuration file template: Edit the new /etc/accountsservice/user-templates/standard file. On the Session= gnome line, replace gnome with the session that you want to set as the default. Optional: To configure an exception to the default session for a certain user, follow these steps: Copy the template file to /var/lib/AccountsService/users/ user-name : In the new file, replace variables such as USD{USER} and USD{ID} with the user values. Edit the Session value. (BZ#1812788) 4.15. Graphics infrastructures Support for new GPUs The following new GPUs are now supported. Intel graphics: Alder Lake-S (ADL-S) Support for Alder Lake-S graphics is disabled by default. To enable it, add the following option to the kernel command line: Replace PCI_ID with either the PCI device ID of your Intel GPU, or with the * character to enable support for all alpha-quality hardware that uses the i915 driver. Elkhart Lake (EHL) Comet Lake Refresh (CML-R) with the TGP Platform Controller Hub (PCH) AMD graphics: Cezzane and Barcelo Sienna Cichlid Dimgrey Cavefish (JIRA:RHELPLAN-99040, BZ#1784132, BZ#1784136, BZ#1838558) The Wayland session is available with the proprietary NVIDIA driver The proprietary NVIDIA driver now supports hardware accelerated OpenGL and Vulkan rendering in Xwayland. As a result, you can now enable the GNOME Wayland session with the proprietary NVIDIA driver. Previously, only the legacy X11 session was available with the driver. X11 remains as the default session to avoid a possible disruption when updating from a version of RHEL. To enable Wayland with the NVIDIA proprietary driver, use the following procedure: Enable Direct Rendering Manager (DRM) kernel modesetting by adding the following option to the kernel command line: For details on enabling kernel options, see Configuring kernel command-line parameters . Reboot the system. The Wayland session is now available at the login screen. Optional: To avoid the loss of video allocations when suspending or hibernating the system, enable the power management option with the driver. For details, see Configuring Power Management Support . For the limitations related to the use of DRM kernel modesetting in the proprietary NVIDIA driver, see Direct Rendering Manager Kernel Modesetting (DRM KMS) . (JIRA:RHELPLAN-99049) Improvements to GPU support The following new GPU features are now enabled: Panel Self Refresh (PSR) is now enabled for Intel Tiger Lake and later graphics, which improves power consumption. Intel Tiger Lake, Ice Lake, and later graphics can now use High Bit Rate 3 (HBR3) mode with the DisplayPort Multi-Stream Transport (DP-MST) transmission method. This enables support for certain display capabilities with docks. 
Modesetting is now enabled on NVIDIA Ampere GPUs. This includes the following models: GA102, GA104, and GA107, including hybrid graphics systems. Most laptops with Intel integrated graphics and an NVIDIA Ampere GPU can now output to external displays using either GPU. (JIRA:RHELPLAN-99043) Updated graphics drivers The following graphics drivers have been updated: amdgpu ast i915 mgag2000 nouveau vmwgfx vmwgfx The Mesa library Vulkan packages (JIRA:RHELPLAN-99044) Intel Tiger Lake graphics are fully supported Intel Tiger Lake UP3 and UP4 Xe graphics, which were previously available as a Technology Preview, are now fully supported. Hardware acceleration is enabled by default on these GPUs. (BZ#1783396) 4.16. Red Hat Enterprise Linux system roles Users can configure the maximum root distance using the timesync_max_distance parameter With this update, the timesync RHEL system role is able to configure the tos maxdist of ntpd and the maxdistance parameter of the chronyd service using the new timesync_max_distance parameter. The timesync_max_distance parameter configures the maximum root distance to accept measurements from Network Time Protocol (NTP) servers. The default value is 0, which keeps the provider-specific defaults. ( BZ#1938016 ) Elasticsearch can now accept lists of servers Previously, the server_host parameter in Elasticsearch output for the Logging RHEL system role accepted only a string value for a single host. With this enhancement, it also accepts a list of strings to support multiple hosts. As a result, you can now configure multiple Elasticsearch hosts in one Elasticsearch output dictionary. ( BZ#1986463 ) Network Time Security (NTS) option added to the timesync RHEL system role The nts option was added to the timesync RHEL system role to enable NTS on client servers. NTS is a new security mechanism specified for Network Time Protocol (NTP), which can secure synchronization of NTP clients without client-specific configuration and can scale to large numbers of clients. The NTS option is supported only with the chrony NTP provider in version 4.0 and later. ( BZ#1970664 ) The SSHD RHEL system role now supports non-exclusive configuration snippets With this feature, you can configure SSHD through different roles and playbooks without rewriting the configurations by using namespaces. Namespaces are similar to a drop-in directory, and define non-exclusive configuration snippets for SSHD. As a result, you can use the SSHD RHEL system role from a different role, if you need to configure only a small part of the configuration and not the entire configuration file. ( BZ#1970642 ) The SELinux role can now manage SELinux modules The SElinux RHEL system role has the ability to manage SELinux modules. With this update, users can provide their own custom modules from .pp or .cil files, which allows for a more flexible SELinux policy management. ( BZ#1848683 ) Users can manage the chrony interleaved mode, NTP filtering, and hardware timestamping With this update, the timesync RHEL system role enables you to configure the Network Time Protocol (NTP) interleaved mode, additional filtering of NTP measurements, and hardware timestamping. The chrony package of version 4.0 adds support for these functionalities to achieve a highly accurate and stable synchronization of clocks in local networks. To enable the NTP interleaved mode, make sure the server supports this feature, and set the xleave option to yes for the server in the timesync_ntp_servers list. The default value is no . 
To set the number of NTP measurements per clock update, set the filter option for the NTP server you are configuring. The default value is 1 . To set the list of interfaces which should have hardware timestamping enabled for NTP, use the timesync_ntp_hwts_interfaces parameter. The special value ["*"] enables timestamping on all interfaces that support it. The default is [] . ( BZ#1938020 ) timesync role enables customization settings for chrony Previously, there was no way to provide customized chrony configuration using the timesync role. This update adds the timesync_chrony_custom_settings parameter, which enables users to to provide customized settings for chrony, such as: ( BZ#1938023 ) timesync role supports hybrid end-to-end delay mechanisms With this enhancement, you can use the new hybrid_e2e option in timesync_ptp_domains to enable hybrid end-to-end delay mechanisms in the timesync role. The hybrid end-to-end delay mechanism uses unicast delay requests, which are useful to reduce multicast traffic in large networks. ( BZ#1957849 ) ethtool now supports reducing the packet loss rate and latency Tx or Rx buffers are memory spaces allocated by a network adapter to handle traffic bursts. Properly managing the size of these buffers is critical to reduce the packet loss rate and achieve acceptable network latency. The ethtool utility now reduces the packet loss rate or latency by configuring the ring option of the specified network device. The list of supported ring parameters is: rx - Changes the number of ring entries for the Rx ring. rx-jumbo - Changes the number of ring entries for the Rx Jumbo ring. rx-mini - Changes the number of ring entries for the Rx Mini ring. tx - Changes the number of ring entries for the Tx ring. ( BZ#1959649 ) New ipv6_disabled parameter is now available With this update, you can now use the ipv6_disabled parameter to disable ipv6 when configuring addresses. ( BZ#1939711 ) RHEL system roles now support VPN management Previously, it was difficult to set up secure and properly configured IPsec tunneling and virtual private networking (VPN) solutions on Linux. With this enhancement, you can use the VPN RHEL system role to set up and configure VPN tunnels for host-to-host and mesh connections more easily across large numbers of hosts. As a result, you have a consistent and stable configuration interface for VPN and IPsec tunneling configuration within the RHEL system roles project. ( BZ#1943679 ) The storage RHEL system role now supports filesystem relabel Previously, the storage role did not support relabelling. This update fixes the issue, providing support to relabel the filesystem label. To do this, set a new label string to the fs_label parameter in storage_volumes . ( BZ#1876315 ) Support for volume sizes expressed as a percentage is available in the storage system role This enhancement adds support to the storage RHEL system role to express LVM volume sizes as a percentage of the pool's total size. You can specify the size of LVM volumes as a percentage of the pool/VG size, for example: 50% in addition to the human-readable size of the file system, for example, 10g , 50 GiB . ( BZ#1894642 ) New Ansible Role for Microsoft SQL Server Management The new microsoft.sql.server role is designed to help IT and database administrators automate processes involved with setup, configuration, and performance tuning of SQL Server on Red Hat Enterprise Linux. 
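Returning to the storage role enhancements described above, the following is a minimal, hedged playbook sketch that creates a volume sized as a percentage of its pool and sets a file system label; the disk, volume group, and label names are placeholders, and the exact variable names should be verified against the rhel-system-roles.storage documentation. Create a storage.yml file with the following content:

- hosts: all
  roles:
    - rhel-system-roles.storage
  vars:
    storage_pools:
      - name: app_vg              # placeholder volume group name
        disks:
          - sdb                   # placeholder disk
        volumes:
          - name: data
            size: "50%"           # size expressed as a percentage of the pool
            fs_type: xfs
            fs_label: appdata     # relabel support; documented for storage_volumes
            mount_point: /srv/appdata

Then run it against your inventory:

ansible-playbook -i inventory storage.yml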
( BZ#2013853 ) RHEL system roles do not support Ansible 2.8 With this update, support for Ansible 2.8 is no longer supported because the version is past the end of the product life cycle. The RHEL system roles support Ansible 2.9. ( BZ#1989199 ) The postfix role of RHEL system roles is fully supported Red Hat Enterprise Linux system roles provides a configuration interface for Red Hat Enterprise Linux subsystems, which makes system configuration easier through the inclusion of Ansible Roles. This interface enables managing system configurations across multiple versions of Red Hat Enterprise Linux, as well as adopting new major releases. The rhel-system-roles packages are distributed through the AppStream repository. As of RHEL 8.5, the postfix role is fully supported. For more information, see the Knowledgebase article about RHEL system roles . ( BZ#1812552 ) 4.17. Virtualization Enhancements to managing virtual machines in the web console The Virtual Machines (VM) section of the RHEL 8 web console has been redesigned for a better user experience. In addition, the following changes and features have also been introduced: A single page now includes all the relevant VM information, such as VM status, disks, networks, or console information. You can now live migrate a VM using the web console The web console now allows editing the MAC address of a VM's network interface You can use the web console to view a list of host devices attached to a VM (JIRA:RHELPLAN-79074) zPCI device assignment It is now possible to attach zPCI devices as mediated devices to virtual machines (VMs) hosted on RHEL 8 running on IBM Z hardware. For example, this enables the use of NVMe flash drives in VMs. (JIRA:RHELPLAN-59528) 4.18. Supportability sos rebased to version 4.1 The sos package has been upgraded to version 4.1, which provides multiple bug fixes and enhancements. Notable enhancements include: Red Hat Update Infrastructure ( RHUI ) plugin is now natively implemented in the sos package. With the rhui-debug.py python binary, sos can collect reports from RHUI including, for example, the main configuration file, the rhui-manager log file, or the installation configuration. sos introduces the --cmd-timeout global option that sets manually a timeout for a command execution. The default value (-1) defers to the general command timeout, which is 300 seconds. ( BZ#1928679 ) 4.19. Containers Default container image signature verification is now available Previously, the policy YAML files for the Red Hat Container Registries had to be manually created in the /etc/containers/registries.d/ directory. Now, the registry.access.redhat.com.yaml and registry.redhat.io.yaml files are included in the containers-common package. You can now use the podman image trust command to verify the container image signatures on RHEL. (JIRA:RHELPLAN-75166) The container-tools:rhel8 module has been updated The container-tools:rhel8 module, which contains the Podman, Buildah, Skopeo, and runc tools is now available. This update provides a list of bug fixes and enhancements over the version. (JIRA:RHELPLAN-76515) The containers-common package is now available The containers-common package has been added to the container-tools:rhel8 module. The containers-common package contains common configuration files and documentation for container tools ecosystem, such as Podman, Buildah and Skopeo. (JIRA:RHELPLAN-77542) Native overlay file system support in the kernel is now available The overlay file system support is now available from kernel 5.11. 
The non-root users will have native overlay performance even when running rootless (as a user). Thus, this enhancement provides better performance to non-root users who wish to use overlayfs without the need for bind mounting. (JIRA:RHELPLAN-77241) A podman container image is now available The registry.redhat.io/rhel8/podman container image, previously available as a Technology Preview, is now fully supported. The registry.redhat.io/rhel8/podman container image is a containerized implementation of the podman package. The podman tool manages containers and images, volumes mounted into those containers, and pods made of groups of containers. (JIRA:RHELPLAN-57941) Universal Base Images are now available on Docker Hub Previously, Universal Base Images were only available from the Red Hat container catalog. Now, Universal Base Images are also available from Docker Hub. For more information, see Red Hat Brings Red Hat Universal Base Image to Docker Hub . (JIRA:RHELPLAN-85064) CNI plugins in Podman are now available CNI plugins are now available to use in Podman rootless mode. The rootless networking commands now work without any other requirement on the system. ( BZ#1934480 ) Podman has been updated to version 3.3.1 The Podman utility has been updated to version 3.3.1. Notable enhancements include: Podman now supports restarting containers created with the --restart option after the system is rebooted. The podman container checkpoint and podman container restore commands now support checkpointing and restoring containers that are in pods and restoring those containers into pods. Further, the podman container restore command now supports the --publish option to change ports forwarded to a container restored from an exported checkpoint. (JIRA:RHELPLAN-87877) The crun OCI runtime is now available The crun OCI runtime is now available for the container-tools:rhel8 module. The crun container runtime supports an annotation that enables the container to access the rootless user's additional groups. This is useful for container operations when volume mounting in a directory where setgid is set, or where the user only has group access. (JIRA:RHELPLAN-75164) The podman UBI image is now available The registry.access.redhat.com/ubi8/podman is now available as a part of UBI. (JIRA:RHELPLAN-77489) The container-tools:rhel8 module has been updated The container-tools:rhel8 module, which contains the Podman, Buildah, Skopeo, and runc tools is now available. This update provides a list of bug fixes and enhancements over the version. For more details, see the RHEA-2022:0352 . ( BZ#2009153 ) The ubi8/nodejs-16 and ubi8/nodejs-16-minimal container images are now fully supported The ubi8/nodejs-16 and ubi8/nodejs-16-minimal container images, previously available as a Technology Preview, are fully supported with the release of the RHBA-2021:5260 advisory. These container images include Node.js 16.13 , which is a Long Term Support (LTS) version. ( BZ#2001020 )
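As a quick, hedged way to try the fully supported image mentioned above, you can pull it and print the bundled Node.js version; the registry path follows the usual UBI naming and the image may also be available from registry.redhat.io:

podman pull registry.access.redhat.com/ubi8/nodejs-16-minimal
podman run --rm registry.access.redhat.com/ubi8/nodejs-16-minimal node --version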
[ "[[customizations.filesystem]] mountpoint = \"MOUNTPOINT\" size = MINIMUM-PARTITION-SIZE", "yum install modulemd-tools", "cipher@SSH = AES-256-CBC+", "cipher@libssh = -*-CBC", "-a always, exit -F arch=b32 -S chown, fchown, fchownat, lchown -F auid>=1000 -F auid!=unset -F key=perm_mod", "-a always, exit -F arch=b32 -S unlink, unlinkat, rename, renameat, rmdir -F auid>=1000 -F auid!=unset -F key=delete", "-a always, exit -F arch=b32 -S chown, fchown, fchownat, lchown -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccesful-perm-change", "-a always, exit -F arch=b32 -S unlink, unlinkat, rename, renameat -F auid>=1000 -F auid!=unset -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-delete", "nmcli connection modify enp1s0 ethtool.pause-autoneg no ethtool.pause-rx true ethtool.pause-tx true", "sudo nmcli c add type ethernet ifname eth1 connection.id eth1 802-3-ethernet.accept-all-mac-addresses true", "[main] firewall-backend=nftables", "systemctl reload NetworkManager", "yum module install nodejs:16", "yum module install ruby:3.0", ">>> def reformat_ip(address): return '.'.join(part.lstrip('0') if part != '0' else part for part in address.split('.')) >>> reformat_ip('0127.0.0.1') '127.0.0.1'", "def reformat_ip(address): parts = [] for part in address.split('.'): if part != \"0\": part = part.lstrip('0') parts.append(part) return '.'.join(parts)", "yum module install nginx:1.20", "yum install gcc-toolset-11", "scl enable gcc-toolset-11 tool", "scl enable gcc-toolset-11 bash", "podman pull registry.redhat.io/<image_name>", "podman pull registry.redhat.io/rhel8/grafana", "podman pull registry.redhat.io/rhel8/pcp", "*USD ipa pwpolicy-mod --usercheck=True managers*", "(2021-07-26 18:26:37): [be[testidm.com]] [dp_req_destructor] (0x0400): RID#3 Number of active DP request: 0 (2021-07-26 18:26:37): [be[testidm.com]] [dp_req_reply_std] (0x1000): RID#3 DP Request AccountDomain #3: Returning [Internal Error]: 3,1432158301,GetAccountDomain() not supported (2021-07-26 18:26:37): [be[testidm.com]] [dp_attach_req] (0x0400): RID#4 DP Request Account #4: REQ_TRACE: New request. sssd.nss CID #1 Flags [0x0001]. 
(2021-07-26 18:26:37): [be[testidm.com]] [dp_attach_req] (0x0400): RID#4 Number of active DP request: 1", "[sssd] debug_backtrace_enabled = true debug_level=0 [nss] debug_backtrace_enabled = false [domain/idm.example.com] debug_backtrace_enabled = true debug_level=2", "[kcm] tgt_renewal = true krb5_renew_interval = 60m", "[kcm] tgt_renewal = true tgt_renewal_inherit = domain-name", "The _samba_ packages have been upgraded to upstream version 4.14.4, which provides bug fixes and enhancements over the previous version:", "polkit.addRule(function(action, subject) { if (action.id == \"org.freedesktop.NetworkManager.network-control\" && subject.user == \"gdm\") { return polkit.Result.YES; } return polkit.Result.NOT_HANDLED; });", "systemctl restart gdm", "yum install gnome-shell-extension-heads-up-display", "[org/gnome/shell] enabled-extensions=['[email protected]'] [org/gnome/shell/extensions/heads-up-display] message-heading=\" Security classification title \" message-body=\" Security classification description \"", "dconf update", "grub2-editenv - set menu_auto_hide=1", "grub2-mkconfig -o /etc/grub2-efi.cfg", "grub2-mkconfig -o /etc/grub2.cfg", "cp /usr/share/accountsservice/user-templates/standard /etc/accountsservice/user-templates/standard", "cp /usr/share/accountsservice/user-templates/standard /var/lib/AccountsService/users/ user-name", "i915.force_probe= PCI_ID", "nvidia-drm.modeset=1", "timesync_chrony_custom_settings: - \"logdir /var/log/chrony\" - \"log measurements statistics tracking\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.5_release_notes/New-features
Preface
Preface You can configure a new installation of Event-Driven Ansible controller 1.1 with automation controller 4.4 or 4.5, which is considered a general availability release fully supported by Red Hat. Automation controller 4.4 and 4.5 were released as part of Ansible Automation Platform 2.4. Upgrading to Event-Driven Ansible controller 1.1 from an earlier release is unsupported at this time. This means that you can install a new Event-Driven Ansible controller 2.5 server and configure rulebook activations to execute job templates on an Ansible Automation Platform 2.4 automation controller. With this interoperability support you can retain your existing Ansible Automation Platform 2.4 clusters and add Event-Driven Ansible controller 2.5 to them. This gives you the option to upgrade your Ansible Automation Platform 2.4 cluster to Ansible Automation Platform 2.5 at a date that suits you, whilst giving you all the benefits of Event-Driven Ansible 2.5. The installs are still two separate installs, in that you manage the Ansible Automation Platform 2.4 cluster with the Ansible Automation Platform 2.4 installer. The Event-Driven Ansible 2.5 server is managed with the Ansible Automation Platform 2.5 installer. This guide shows you how to install and configure Event-Driven Ansible 1.1 to work with automation controller 4.4 or 4.5 for a Red Hat Package Manager (RPM)-based installation.
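As an illustrative sketch only, a rulebook that launches a job template on the automation controller might look like the following; the event source, condition, job template name, and organization are placeholders, and the exact module names should be checked against the Event-Driven Ansible documentation. Create a webhook-rulebook.yml file with the following content:

- name: Run a controller job template from a webhook event
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Launch remediation job
      condition: event.payload.status == "degraded"
      action:
        run_job_template:
          name: Remediate service      # job template defined on the automation controller
          organization: Default

You then reference this rulebook from a project in the Event-Driven Ansible controller and create a rulebook activation for it, supplying the automation controller URL and credentials in the Event-Driven Ansible controller configuration.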
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/using_event-driven_ansible_2.5_with_ansible_automation_platform_2.4/pr01
3.2. Boot Loader
3.2. Boot Loader Installation media for IBM Power Systems now use the GRUB2 boot loader instead of the previously offered yaboot . For the big endian variant of Red Hat Enterprise Linux for POWER, GRUB2 is preferred but yaboot can also be used. The newly introduced little endian variant requires GRUB2 to boot. The Installation Guide has been updated with instructions for setting up a network boot server for IBM Power Systems using GRUB2 .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/ch03s02
9.4. Configure Network Bridging Using a GUI
9.4. Configure Network Bridging Using a GUI When starting a bridge interface, NetworkManager waits for at least one port to enter the " forwarding " state before beginning any network-dependent IP configuration such as DHCP or IPv6 autoconfiguration. Static IP addressing is allowed to proceed before any ports are connected or begin forwarding packets. 9.4.1. Establishing a Bridge Connection with a GUI Procedure 9.1. Adding a New Bridge Connection Using nm-connection-editor Follow the instructions below to create a new bridge connection: Enter nm-connection-editor in a terminal: Click the Add button. The Choose a Connection Type window appears. Select Bridge and click Create . The Editing Bridge connection 1 window appears. Figure 9.5. Editing Bridge Connection 1 Add port devices by referring to Procedure 9.3, "Adding a Port Interface to a Bridge" below. Procedure 9.2. Editing an Existing Bridge Connection Enter nm-connection-editor in a terminal: Select the Bridge connection you want to edit. Click the Edit button. Configuring the Connection Name, Auto-Connect Behavior, and Availability Settings Five settings in the Editing dialog are common to all connection types; see the General tab: Connection name - Enter a descriptive name for your network connection. This name will be used to list this connection in the menu of the Network window. Automatically connect to this network when it is available - Select this box if you want NetworkManager to auto-connect to this connection when it is available. See the section called "Editing an Existing Connection with control-center" for more information. All users may connect to this network - Select this box to create a connection available to all users on the system. Changing this setting may require root privileges. See Section 3.4.5, "Managing System-wide and Private Connection Profiles with a GUI" for details. Automatically connect to VPN when using this connection - Select this box if you want NetworkManager to auto-connect to a VPN connection when it is available. Select the VPN from the dropdown menu. Firewall Zone - Select the Firewall Zone from the dropdown menu. See the Red Hat Enterprise Linux 7 Security Guide for more information on Firewall Zones. 9.4.1.1. Configuring the Bridge Tab Interface name The name of the interface to the bridge. Bridged connections One or more port interfaces. Aging time The time, in seconds, a MAC address is kept in the MAC address forwarding database. Enable IGMP snooping If required, select the check box to enable IGMP snooping on the device. Enable STP (Spanning Tree Protocol) If required, select the check box to enable STP . Priority The bridge priority; the bridge with the lowest priority will be elected as the root bridge. Forward delay The time, in seconds, spent in both the Listening and Learning states before entering the Forwarding state. The default is 15 seconds. Hello time The time interval, in seconds, between sending configuration information in bridge protocol data units (BPDU). Max age The maximum time, in seconds, to store the configuration information from BPDUs. This value should be twice the Hello Time plus 1 but less than twice the Forwarding delay minus 1. Group forward mask This property is a mask of group addresses that allows group addresses to be forwarded. In most cases, group addresses in the range from 01:80:C2:00:00:00 to 01:80:C2:00:00:0F are not forwarded by the bridge device.
This property is a mask of 16 bits, each corresponding to a group address in the above range, that must be forwarded. Note that the Group forward mask property cannot have any of the 0 , 1 , 2 bits set to 1 because those addresses are used for Spanning tree protocol (STP), Link Aggregation Control Protocol (LACP) and Ethernet MAC pause frames. Procedure 9.3. Adding a Port Interface to a Bridge To add a port to a bridge, select the Bridge tab in the Editing Bridge connection 1 window. If necessary, open this window by following the procedure in Procedure 9.2, "Editing an Existing Bridge Connection" . Click Add . The Choose a Connection Type menu appears. Select the type of connection to be created from the list. Click Create . A window appropriate to the connection type selected appears. Figure 9.6. The NetworkManager Graphical User Interface Add a Bridge Connection Select the Bridge Port tab. Configure Priority and Path cost as required. Note the STP priority for a bridge port is limited by the Linux kernel. Although the standard allows a range of 0 to 255 , Linux only allows 0 to 63 . The default is 32 in this case. Figure 9.7. The NetworkManager Graphical User Interface Bridge Port tab If required, select the Hairpin mode check box to enable forwarding of frames for external processing. Also known as virtual Ethernet port aggregator ( VEPA ) mode. Then, to configure: An Ethernet port, click the Ethernet tab and proceed to the section called "Basic Configuration Options" , or; A Bond port, click the Bond tab and proceed to Section 7.8.1.1, "Configuring the Bond Tab" , or; A Team port, click the Team tab and proceed to Section 8.14.1.1, "Configuring the Team Tab" , or; A VLAN port, click the VLAN tab and proceed to Section 10.5.1.1, "Configuring the VLAN Tab" . Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your new bridge connection, click the Save button to save your customized configuration. If the profile was in use while being edited, power cycle the connection to make NetworkManager apply the changes. If the profile is OFF, set it to ON or select it in the network connection icon's menu. See Section 3.4.1, "Connecting to a Network Using the control-center GUI" for information on using your new or altered connection. You can further configure an existing connection by selecting it in the Network window and clicking Options to return to the Editing dialog. Then, to configure: IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 5.4, "Configuring IPv4 Settings" , or; IPv6 settings for the connection, click the IPv6 Settings tab and proceed to Section 5.5, "Configuring IPv6 Settings" . Once saved, the bridge will appear in the Network settings tool with each port showing in the display. Figure 9.8. The NetworkManager Graphical User Interface with Bridge
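The same bridge topology can also be created from the command line with nmcli, which can be useful for scripting or for comparing with the GUI settings described above. The following is only a minimal sketch; the connection names and the interface names br0 and enp1s0 are placeholders for your environment:
nmcli connection add type bridge ifname br0 con-name bridge-br0 stp yes
nmcli connection add type bridge-slave ifname enp1s0 con-name bridge-port-enp1s0 master br0
nmcli connection up bridge-br0
As with the GUI procedure, NetworkManager begins network-dependent IP configuration once a port of the bridge reaches the forwarding state.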
[ "~]USD nm-connection-editor", "~]USD nm-connection-editor" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Configure_Network_Bridging_Using_a_GUI
14.9.6. Resetting a Virtual Machine
14.9.6. Resetting a Virtual Machine virsh reset domain resets the domain immediately without any guest shutdown. A reset emulates the power reset button on a physical machine: all guest hardware sees the RST line and re-initializes its internal state. Because the guest virtual machine operating system is not shut down cleanly, there is a risk of data loss.
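For example, to reset a guest virtual machine named guest1 (a placeholder domain name), run:
# virsh reset guest1
Because this is equivalent to pressing a hardware reset button, use it only as a last resort, for example when the guest is unresponsive.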
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-resetting_a_virtual_machine
8.6. Function Metadata
8.6. Function Metadata SYS.Functions This table supplies information about the functions in the virtual database. Column Name Type Description VDBName string VDB name SchemaName string Schema Name Name string Function name NameInSource string Function name in source system UID string Function UID Description string Description IsVarArgs boolean Does the function accept variable arguments? SYS.FunctionParams This table supplies information about function parameters. Column Name Type Description VDBName string VDB name SchemaName string Schema Name FunctionName string Function name FunctionUID string Function UID Name string Parameter name DataType string Data Virtualization runtime data type name Position integer Position in function args Type string Parameter direction: "In", "Out", "InOut", "ResultSet", "ReturnValue" Precision integer Precision of parameter TypeLength integer Length of parameter value Scale integer Scale of parameter Radix integer Radix of parameter NullType string Nullability: "Nullable", "No Nulls", "Unknown" UID string Function Parameter UID Description string Description OID integer Unique ID Warning The OID column is no longer used on system tables. Use UID instead.
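These system tables can be queried with standard SQL like any other view. For example, the following illustrative query (the schema name MySchema is a placeholder) lists each function in a schema together with its parameters, joining SYS.FunctionParams to SYS.Functions on the function UID:
SELECT f.Name AS FunctionName, p.Name AS ParameterName, p.DataType, p.Type, p.Position
FROM SYS.Functions f
JOIN SYS.FunctionParams p ON p.FunctionUID = f.UID
WHERE f.SchemaName = 'MySchema'
ORDER BY f.Name, p.Position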
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/ch08s06
Chapter 12. Etcd [operator.openshift.io/v1]
Chapter 12. Etcd [operator.openshift.io/v1] Description Etcd provides information to configure an operator to manage etcd. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 12.1.1. .spec Description Type object Property Type Description failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 12.1.2. .status Description Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. 
controlPlaneHardwareSpeed string ControlPlaneHardwareSpeed declares valid hardware speed tolerance levels generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment latestAvailableRevisionReason string latestAvailableRevisionReason describe the detailed reason for the most recent deployment nodeStatuses array nodeStatuses track the deployment values and errors across individual nodes nodeStatuses[] object NodeStatus provides information about the current state of a particular node managed by this operator. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 12.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 12.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 12.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 12.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 12.1.7. .status.nodeStatuses Description nodeStatuses track the deployment values and errors across individual nodes Type array 12.1.8. .status.nodeStatuses[] Description NodeStatus provides information about the current state of a particular node managed by this operator. Type object Property Type Description currentRevision integer currentRevision is the generation of the most recently successful deployment lastFailedCount integer lastFailedCount is how often the installer pod of the last failed revision failed. lastFailedReason string lastFailedReason is a machine readable failure reason string. lastFailedRevision integer lastFailedRevision is the generation of the deployment we tried and failed to deploy. lastFailedRevisionErrors array (string) lastFailedRevisionErrors is a list of human readable errors during the failed deployment referenced in lastFailedRevision. lastFailedTime string lastFailedTime is the time the last failed revision failed the last time. lastFallbackCount integer lastFallbackCount is how often a fallback to a revision happened. nodeName string nodeName is the name of the node targetRevision integer targetRevision is the generation of the deployment we're trying to apply 12.2. 
API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/etcds DELETE : delete collection of Etcd GET : list objects of kind Etcd POST : create an Etcd /apis/operator.openshift.io/v1/etcds/{name} DELETE : delete an Etcd GET : read the specified Etcd PATCH : partially update the specified Etcd PUT : replace the specified Etcd /apis/operator.openshift.io/v1/etcds/{name}/status GET : read status of the specified Etcd PATCH : partially update status of the specified Etcd PUT : replace status of the specified Etcd 12.2.1. /apis/operator.openshift.io/v1/etcds Table 12.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Etcd Table 12.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 12.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Etcd Table 12.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. 
The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 12.5. HTTP responses HTTP code Reponse body 200 - OK EtcdList schema 401 - Unauthorized Empty HTTP method POST Description create an Etcd Table 12.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.7. Body parameters Parameter Type Description body Etcd schema Table 12.8. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 201 - Created Etcd schema 202 - Accepted Etcd schema 401 - Unauthorized Empty 12.2.2. /apis/operator.openshift.io/v1/etcds/{name} Table 12.9. Global path parameters Parameter Type Description name string name of the Etcd Table 12.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
HTTP method DELETE Description delete an Etcd Table 12.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 12.12. Body parameters Parameter Type Description body DeleteOptions schema Table 12.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Etcd Table 12.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 12.15. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Etcd Table 12.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 12.17. Body parameters Parameter Type Description body Patch schema Table 12.18. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Etcd Table 12.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.20. Body parameters Parameter Type Description body Etcd schema Table 12.21. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 201 - Created Etcd schema 401 - Unauthorized Empty 12.2.3. /apis/operator.openshift.io/v1/etcds/{name}/status Table 12.22. Global path parameters Parameter Type Description name string name of the Etcd Table 12.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Etcd Table 12.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 12.25. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Etcd Table 12.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 12.27. Body parameters Parameter Type Description body Patch schema Table 12.28. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Etcd Table 12.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.30. 
Body parameters Parameter Type Description body Etcd schema Table 12.31. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 201 - Created Etcd schema 401 - Unauthorized Empty
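For illustration, a minimal Etcd manifest might look as follows. This is a sketch only; on OpenShift clusters the cluster etcd operator already manages a singleton resource conventionally named cluster, so you would normally edit or patch that object rather than create a new one:
apiVersion: operator.openshift.io/v1
kind: Etcd
metadata:
  name: cluster
spec:
  managementState: Managed
  logLevel: Normal
  operatorLogLevel: Normal
To change a field on the existing object, a merge patch such as oc patch etcd cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}' can be used.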
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operator_apis/etcd-operator-openshift-io-v1
Chapter 1. Overview
Chapter 1. Overview AMQ Spring Boot Starter is an adapter for creating Spring-based applications that use AMQ messaging. It provides a Spring Boot starter module that enables you to build standalone Spring applications. The starter uses the AMQ JMS client to communicate using the AMQP 1.0 protocol. AMQ Spring Boot Starter is part of AMQ Clients, a suite of messaging libraries supporting multiple languages and platforms. For an overview of the clients, see AMQ Clients Overview . For information about this release, see AMQ Clients 2.8 Release Notes . AMQ Spring Boot Starter is based on the AMQP 1.0 JMS Spring Boot project. 1.1. Key features Quickly build standalone Spring applications with built-in messaging Automatic configuration of JMS resources Configurable pooling of JMS connections and sessions 1.2. Supported standards and protocols Version 2.2 of the Spring Boot API Version 2.0 of the Java Message Service API Version 1.0 of the Advanced Message Queueing Protocol (AMQP) 1.3. Supported configurations AMQ Spring Boot Starter supports the OS and language versions listed below. For more information, see Red Hat AMQ 7 Supported Configurations . Red Hat Enterprise Linux 7 and 8 with the following JDKs: OpenJDK 8 and 11 Oracle JDK 8 IBM JDK 8 Red Hat Enterprise Linux 6 with the following JDKs: OpenJDK 8 Oracle JDK 8 IBM AIX 7.1 with IBM JDK 8 Microsoft Windows 10 Pro with Oracle JDK 8 Microsoft Windows Server 2012 R2 and 2016 with Oracle JDK 8 Oracle Solaris 10 and 11 with Oracle JDK 8 AMQ Spring Boot Starter is supported in combination with the following AMQ components and versions: All versions of AMQ Broker All versions of AMQ Interconnect A-MQ 6 versions 6.2.1 and newer 1.4. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: USD cd <project-dir>
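To illustrate the automatic configuration, a typical application only declares its broker connection in application.properties. The property names below follow the upstream AMQP 1.0 JMS Spring Boot project on which the starter is based, and the URL and credentials are placeholders for your environment:
amqphub.amqp10jms.remote-url=amqp://localhost:5672
amqphub.amqp10jms.username=app-user
amqphub.amqp10jms.password=app-password
With these properties set, the starter exposes a ready-to-use JMS ConnectionFactory that can be injected into a JmsTemplate or used by @JmsListener methods without additional code.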
[ "cd <project-dir>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_spring_boot_starter/overview
Chapter 3. Overview of Bare Metal certification
Chapter 3. Overview of Bare Metal certification The bare-metal certification overview provides details about product publication in the catalog, product release, certification duration, and recertification. 3.1. Publication in the catalog When you certify your server for bare-metal hardware on Red Hat OpenStack Platform, it is published in the Red Hat Ecosystem Catalog as Bare Metal . The Bare Metal Management feature also appears as a certified component of your server. 3.2. Red Hat product releases You have access to and are encouraged to test with pre-released Red Hat software. You can begin your engagement with the Red Hat Certification team before Red Hat software is generally available (GA) to customers to expedite the certification process for your product. However, conduct official certification testing only on the GA releases of Red Hat OpenStack Platform. 3.3. Certification duration Certifications are valid starting with the specific major and minor releases of Red Hat OpenStack Platform software as tested and listed on the Red Hat Ecosystem Catalog. They continue to be valid through the last minor release of the major release. This allows customers to count on certifications from the moment they are listed until the end of the product's lifecycle. 3.4. Recertification workflow You do not need to recertify after a new major or minor release of RHOSP if you have not made changes to your product. However, it is your responsibility to certify your product again any time you make significant changes to it. Red Hat recommends that you run the certification tests on your product periodically to ensure its quality, functionality, and performance with the supported versions of RHOSP. To recertify your product, open a supplemental certification.
null
https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_openstack_platform_hardware_bare_metal_certification_policy_guide/assembly-overview-of-the-bare-metal-certification-life-cycle_rhosp-bm-pol-prerequisites
7.3. Protobuf Encoding
7.3. Protobuf Encoding The Infinispan Query DSL can be used remotely via the Hot Rod client. In order to do this, protocol buffers are used to adopt a common format for storing cache entries and marshalling them. For more information, see https://developers.google.com/protocol-buffers/docs/overview 7.3.1. Storing Protobuf Encoded Entities Protobuf requires data to be structured. This is achieved by declaring Protocol Buffer message types in .proto files. For example: Example 7.3. .library.proto The provided example: An entity named Book is placed in a package named book_sample . The entity declares several fields of primitive types and a repeatable field named authors . The Author message instances are embedded in the Book message instance. 7.3.2. About Protobuf Messages There are a few important things to note about Protobuf messages: Nesting of messages is possible, however the resulting structure is strictly a tree, and never a graph. There is no type inheritance. Collections are not supported, however arrays can be easily emulated using repeated fields. 7.3.3. Using Protobuf with Hot Rod Protobuf can be used with JBoss Data Grid's Hot Rod using the following two steps: Configure the client to use a dedicated marshaller, in this case, the ProtoStreamMarshaller . This marshaller uses the ProtoStream library to assist in encoding objects. Important If the infinispan-remote jar is not in use, then the infinispan-remote-query-client Maven dependency must be added to use the ProtoStreamMarshaller . Instruct the ProtoStream library on how to marshall message types by registering per entity marshallers. Example 7.4. Use the ProtoStreamMarshaller to Encode and Marshall Messages In the provided example, The SerializationContext is provided by the ProtoStream library. The SerializationContext.registerProtofile method receives the name of a .proto classpath resource file that contains the message type definitions. The SerializationContext associated with the RemoteCacheManager is obtained, then ProtoStream is instructed to marshall the protobuf types. Note A RemoteCacheManager has no SerializationContext associated with it unless it was configured to use ProtoStreamMarshaller . 7.3.4. Registering Per Entity Marshallers When using the ProtoStreamMarshaller for remote querying purposes, registration of per entity marshallers for domain model types must be provided by the user for each type or marshalling will fail. When writing marshallers, it is essential that they are stateless and threadsafe, as a single instance of them is being used. The following example shows how to write a marshaller. Example 7.5. BookMarshaller.java Once the client has been set up, reading and writing Java objects to the remote cache will use the entity marshallers. The actual data stored in the cache will be protobuf encoded, provided that marshallers were registered with the remote client for all involved types. In the provided example, this would be Book and Author . Objects stored in protobuf format are able to be utilized with compatible clients written in different languages. 7.3.5. Indexing Protobuf Encoded Entities Once the client is configured to use Protobuf, indexing can be configured for caches on the server side. To index the entries, the server must have the knowledge of the message types defined by the Protobuf schema. A Protobuf schema file is defined in a file with a .proto extension.
The schema is supplied to the server either by placing it in the ___protobuf_metadata cache by a put , putAll , putIfAbsent , or replace operation, or alternatively by invoking ProtobufMetadataManager MBean via JMX. Both keys and values of ___protobuf_metadata cache are Strings, the key being the file name, while the value is the schema file contents. Example 7.6. Registering a Protocol Buffers schema file The ProtobufMetadataManager is a cluster-wide replicated repository of Protobuf schema definitions or .proto files. For each running cache manager, a separate ProtobufMetadataManager MBean instance exists, and is backed by the ___protobuf_metadata cache. The ProtobufMetadataManager ObjectName uses the following pattern: The following signature is used by the method that registers the Protobuf schema file: If indexing is enabled for a cache, all fields of Protobuf-encoded entries are indexed. All Protobuf-encoded entries are searchable, regardless of whether indexing is enabled. Note Indexing is recommended for improved performance but is not mandatory when using remote queries. Using indexing improves the searching speed but can also reduce the insert/update speeds due to the overhead required to maintain the index. 7.3.6. Custom Fields Indexing with Protobuf All Protobuf type fields are indexed and stored by default. This behavior is acceptable in most cases but it can result in performance issues if used with too many or very large fields. To specify the fields to index and store, apply the @Indexed and @IndexedField annotations directly to the Protobuf schema in the documentation comments of message type definitions and field definitions. Example 7.7. Specifying Which Fields are Indexed Add documentation annotations to the last line of the documentation comment that precedes the element to be annotated (message type or field definition). The @Indexed annotation only applies to message types and has a boolean value (the default is true ). As a result, using @Indexed is equivalent to @Indexed(true) . This annotation is used to selectively specify the fields of the message type which must be indexed. Using @Indexed(false) , however, indicates that no fields are to be indexed and so the eventual @IndexedField annotation at the field level is ignored. The @IndexedField annotation only applies to fields and has two attributes ( index and store ), both of which default to true (using @IndexedField is equivalent to @IndexedField(index=true, store=true) ). The index attribute indicates whether the field is indexed, and is therefore used for indexed queries. The store attribute indicates whether the field value must be stored in the index, so that the value is available for projections. Note The @IndexedField annotation is only effective if the message type that contains it is annotated with @Indexed . 7.3.7. Defining Protocol Buffers Schemas With Java Annotations You can declare Protobuf metadata using Java annotations. Instead of providing a MessageMarshaller implementation and a .proto schema file, you can add minimal annotations to a Java class and its fields. The objective of this method is to marshal Java objects to protobuf using the ProtoStream library. The ProtoStream library internally generates the marshaller and does not require a manually implemented one. The Java annotations require minimal information such as the Protobuf tag number.
The rest is inferred based on common sense defaults ( Protobuf type, Java collection type, and collection element type) and can be overridden. The auto-generated schema is registered with the SerializationContext and is also available to the users to be used as a reference to implement domain model classes and marshallers for other languages. The following are examples of Java annotations: Example 7.8. User.Java Example 7.9. Note.Java Example 7.10. ProtoSchemaBuilderDemo.Java The following is the .proto file that is generated by the ProtoSchemaBuilderDemo.java example. Example 7.11. Sample_Schema.Proto The following table lists the supported Java annotations with their application and parameters. Table 7.2. Java Annotations Annotation Applies To Purpose Requirement Parameters @ProtoDoc Class/Field/Enum/Enum member Specifies the documentation comment that will be attached to the generated Protobuf schema element (message type, field definition, enum type, enum value definition) Optional A single String parameter, the documentation text @ProtoMessage Class Specifies the name of the generated message type. If missing, the class name is used instead Optional name (String), the name of the generated message type; if missing the Java class name is used by default @ProtoField Field, Getter or Setter Specifies the Protobuf field number and its Protobuf type. Also indicates if the field is repeated, optional or required and its (optional) default value. If the Java field type is an interface or an abstract class, its actual type must be indicated. If the field is repeatable and the declared collection type is abstract then the actual collection implementation type must be specified. If this annotation is missing, the field is ignored for marshalling (it is transient). A class must have at least one @ProtoField annotated field to be considered Protobuf marshallable. Required number (int, mandatory), the Protobuf number; type (org.infinispan.protostream.descriptors.Type, optional), the Protobuf type, it can usually be inferred; required (boolean, optional); name (String, optional), the Protobuf name; javaType (Class, optional), the actual type, only needed if declared type is abstract; collectionImplementation (Class, optional), the actual collection type, only needed if declared type is abstract; defaultValue (String, optional), the string must have the proper format according to the Java field type @ProtoEnum Enum Specifies the name of the generated enum type. If missing, the Java enum name is used instead Optional name (String), the name of the generated enum type; if missing the Java enum name is used by default @ProtoEnumValue Enum member Specifies the numeric value of the corresponding Protobuf enum value Required number (int, mandatory), the Protobuf number; name (String), the Protobuf name; if missing the name of the Java member is used Note The @ProtoDoc annotation can be used to provide documentation comments in the generated schema and also allows to inject the @Indexed and @IndexedField annotations where needed (see Section 7.3.6, "Custom Fields Indexing with Protobuf" ).
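After the schema and marshallers are registered, the protobuf-encoded entries can be queried from the Hot Rod client with the remote query DSL. The following sketch assumes the Book entity and the marshaller registration shown in the examples above; the exact builder API may vary slightly between client versions:
import org.infinispan.client.hotrod.Search;
import org.infinispan.query.dsl.Query;
import org.infinispan.query.dsl.QueryFactory;

// remoteCache is the RemoteCache used to store the Book entities
QueryFactory qf = Search.getQueryFactory(remoteCache);
Query query = qf.from(Book.class)
                .having("title").like("%Infinispan%")
                .toBuilder().build();
List<Book> results = query.list();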
[ "package book_sample; message Book { required string title = 1; required string description = 2; required int32 publicationYear = 3; // no native Date type available in Protobuf repeated Author authors = 4; } message Author { required string name = 1; required string surname = 2; }", "package book_sample; message Book {", "required string title = 1; required string description = 2; required int32 publicationYear = 3; // no native Date type available in Protobuf repeated Author authors = 4; }", "message Author { required string name = 1; required string surname = 2; }", "import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; import org.infinispan.client.hotrod.marshall.ProtoStreamMarshaller; import org.infinispan.protostream.SerializationContext; ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\").port(11234) .marshaller(new ProtoStreamMarshaller()); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(clientBuilder.build()); SerializationContext serCtx = ProtoStreamMarshaller.getSerializationContext(remoteCacheManager); serCtx.registerProtoFile(\"/library.proto\"); serCtx.registerMarshaller(new BookMarshaller()); serCtx.registerMarshaller(new AuthorMarshaller()); // Book and Author classes omitted for brevity", "import org.infinispan.protostream.MessageMarshaller; public class BookMarshaller implements MessageMarshaller<Book> { @Override public String getTypeName() { return \"book_sample.Book\"; } @Override public Class<? extends Book> getJavaClass() { return Book.class; } @Override public void writeTo(ProtoStreamWriter writer, Book book) throws IOException { writer.writeString(\"title\", book.getTitle()); writer.writeString(\"description\", book.getDescription()); writer.writeCollection(\"authors\", book.getAuthors(), Author.class); } @Override public Book readFrom(ProtoStreamReader reader) throws IOException { String title = reader.readString(\"title\"); String description = reader.readString(\"description\"); int publicationYear = reader.readInt(\"publicationYear\"); Set<Author> authors = reader.readCollection(\"authors\", new HashSet<Author>(), Author.class); return new Book(title, description, publicationYear, authors); } }", "import org.infinispan.client.hotrod.RemoteCache; import org.infinispan.client.hotrod.RemoteCacheManager; import org.infinispan.query.remote.client.ProtobufMetadataManagerConstants; RemoteCacheManager remoteCacheManager = ... // obtain a RemoteCacheManager // obtain the '__protobuf_metadata' cache RemoteCache<String, String> metadataCache = remoteCacheManager.getCache( ProtobufMetadataManagerConstants.PROTOBUF_METADATA_CACHE_NAME); String schemaFileContents = ... // this is the contents of the schema file metadataCache.put(\"my_protobuf_schema.proto\", schemaFileContents);", "<jmx domain>:type=RemoteQuery, name=<cache manager<methodname>putAllname>, component=ProtobufMetadataManager", "void registerProtofile(String name, String contents)", "/* This type is indexed, but not all its fields are. @Indexed */ message Note { /* This field is indexed but not stored. It can be used for querying but not for projections. @IndexedField(index=true, store=false) */ optional string text = 1; /* A field that is both indexed and stored. @IndexedField */ optional string author = 2; /* @IndexedField(index=false, store=true) */ optional bool isRead = 3; /* This field is not annotated, so it is neither indexed nor stored. 
*/ optional int32 priority; }", "package sample; import org.infinispan.protostream.annotations.ProtoEnum; import org.infinispan.protostream.annotations.ProtoEnumValue; import org.infinispan.protostream.annotations.ProtoField; import org.infinispan.protostream.annotations.ProtoMessage; @ProtoMessage(name = \"ApplicationUser\") public class User { @ProtoEnum(name = \"Gender\") public enum Gender { @ProtoEnumValue(number = 1, name = \"M\") MALE, @ProtoEnumValue(number = 2, name = \"F\") FEMALE } @ProtoField(number = 1, required = true) public String name; @ProtoField(number = 2) public Gender gender; }", "package sample; import org.infinispan.protostream.annotations.ProtoDoc; import org.infinispan.protostream.annotations.ProtoField; @ProtoDoc(\"@Indexed\") public class Note { private String text; private User author; @ProtoDoc(\"@IndexedField(index = true, store = false)\") @ProtoField(number = 1) public String getText() { return text; } public void setText(String text) { this.text = text; } @ProtoDoc(\"@IndexedField\") @ProtoField(number = 2) public User getAuthor() { return author; } public void setAuthor(User author) { this.author = author; } }", "import org.infinispan.protostream.SerializationContext; import org.infinispan.protostream.annotations.ProtoSchemaBuilder; import org.infinispan.client.hotrod.RemoteCacheManager; import org.infinispan.client.hotrod.marshall.ProtoStreamMarshaller; RemoteCacheManager remoteCacheManager = ... // we have a RemoteCacheManager SerializationContext serCtx = ProtoStreamMarshaller.getSerializationContext(remoteCacheManager); // generate and register a Protobuf schema and marshallers based // on Note class and the referenced classes (User class) ProtoSchemaBuilder protoSchemaBuilder = new ProtoSchemaBuilder(); String generatedSchema = protoSchemaBuilder .fileName(\"sample_schema.proto\") .packageName(\"sample_package\") .addClass(Note.class) .build(serCtx); // the types can be marshalled now assertTrue(serCtx.canMarshall(User.class)); assertTrue(serCtx.canMarshall(Note.class)); assertTrue(serCtx.canMarshall(User.Gender.class)); // display the schema file System.out.println(generatedSchema);", "package sample_package; /* @Indexed */ message Note { /* @IndexedField(index = true, store = false) */ optional string text = 1; /* @IndexedField */ optional ApplicationUser author = 2; } message ApplicationUser { enum Gender { M = 1; F = 2; } required string name = 1; optional Gender gender = 2; }" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/sect-protobuf_encoding
Chapter 8. Managing user groups using Ansible playbooks
Chapter 8. Managing user groups using Ansible playbooks This section introduces user group management using Ansible playbooks. A user group is a set of users with common privileges, password policies, and other characteristics. A user group in Identity Management (IdM) can include: IdM users other IdM user groups external users, which are users that exist outside of IdM The section includes the following topics: The different group types in IdM Direct and indirect group members Ensuring the presence of IdM groups and group members using Ansible playbooks Using Ansible to enable AD users to administer IdM Ensuring the presence of member managers in IDM user groups using Ansible playbooks Ensuring the absence of member managers in IDM user groups using Ansible playbooks 8.1. The different group types in IdM IdM supports the following types of groups: POSIX groups (the default) POSIX groups support Linux POSIX attributes for their members. Note that groups that interact with Active Directory cannot use POSIX attributes. POSIX attributes identify users as separate entities. Examples of POSIX attributes relevant to users include uidNumber , a user number (UID), and gidNumber , a group number (GID). Non-POSIX groups Non-POSIX groups do not support POSIX attributes. For example, these groups do not have a GID defined. All members of this type of group must belong to the IdM domain. External groups Use external groups to add group members that exist in an identity store outside of the IdM domain, such as: A local system An Active Directory domain A directory service External groups do not support POSIX attributes. For example, these groups do not have a GID defined. Table 8.1. User groups created by default Group name Default group members ipausers All IdM users admins Users with administrative privileges, including the default admin user editors This is a legacy group that no longer has any special privileges trust admins Users with privileges to manage the Active Directory trusts When you add a user to a user group, the user gains the privileges and policies associated with the group. For example, to grant administrative privileges to a user, add the user to the admins group. Warning Do not delete the admins group. As admins is a pre-defined group required by IdM, this operation causes problems with certain commands. In addition, IdM creates user private groups by default whenever a new user is created in IdM. For more information about private groups, see Adding users without a private group . 8.2. Direct and indirect group members User group attributes in IdM apply to both direct and indirect members: when group B is a member of group A, all users in group B are considered indirect members of group A. For example, in the following diagram: User 1 and User 2 are direct members of group A. User 3, User 4, and User 5 are indirect members of group A. Figure 8.1. Direct and Indirect Group Membership If you set a password policy for user group A, the policy also applies to all users in user group B. 8.3. Ensuring the presence of IdM groups and group members using Ansible playbooks The following procedure describes ensuring the presence of IdM groups and group members - both users and user groups - using an Ansible playbook. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The users you want to reference in your Ansible playbook exist in IdM. For details on ensuring the presence of users using Ansible, see Managing user accounts using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary user and group information: --- - name: Playbook to handle groups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create group ops with gid 1234 ipagroup: ipaadmin_password: "{{ ipaadmin_password }}" name: ops gidnumber: 1234 - name: Create group sysops ipagroup: ipaadmin_password: "{{ ipaadmin_password }}" name: sysops user: - idm_user - name: Create group appops ipagroup: ipaadmin_password: "{{ ipaadmin_password }}" name: appops - name: Add group members sysops and appops to group ops ipagroup: ipaadmin_password: "{{ ipaadmin_password }}" name: ops group: - sysops - appops Run the playbook: Verification You can verify if the ops group contains sysops and appops as direct members and idm_user as an indirect member by using the ipa group-show command: Log into ipaserver as administrator: Display information about ops : The appops and sysops groups - the latter including the idm_user user - exist in IdM. Additional resources See the /usr/share/doc/ansible-freeipa/README-group.md Markdown file. 8.4. Using Ansible to add multiple IdM groups in a single task You can use the ansible-freeipa ipagroup module to add, modify, and delete multiple Identity Management (IdM) user groups with a single Ansible task. For that, use the groups option of the ipagroup module. Using the groups option, you can also specify multiple group variables that only apply to a particular group. Define this group by the name variable, which is the only mandatory variable for the groups option. Complete this procedure to ensure the presence of the sysops and the appops groups in IdM in a single task. Define the sysops group as a nonposix group and the appops group as an external group. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You are using RHEL 8.9 and later. You have stored your ipaadmin_password in the secret.yml Ansible vault. Procedure Create your Ansible playbook file add-nonposix-and-external-groups.yml with the following content: Run the playbook: Additional resources The group module in ansible-freeipa upstream docs 8.5. Using Ansible to enable AD users to administer IdM Follow this procedure to use an Ansible playbook to ensure that a user ID override is present in an Identity Management (IdM) group. The user ID override is the override of an Active Directory (AD) user that you created in the Default Trust View after you established a trust with AD. As a result of running the playbook, an AD user, for example an AD administrator, is able to fully administer IdM without having two different accounts and passwords. 
Prerequisites You know the IdM admin password. You have installed a trust with AD . The user ID override of the AD user already exists in IdM. If it does not, create it with the ipa idoverrideuser-add 'default trust view' [email protected] command. The group to which you are adding the user ID override already exists in IdM . You are using the 4.8.7 version of IdM or later. To view the version of IdM you have installed on your server, enter ipa --version . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Create an add-useridoverride-to-group.yml playbook with the following content: In the example: Secret123 is the IdM admin password. admins is the name of the IdM POSIX group to which you are adding the [email protected] ID override. Members of this group have full administrator privileges. [email protected] is the user ID override of an AD administrator. The user is stored in the AD domain with which a trust has been established. Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources ID overrides for AD users /usr/share/doc/ansible-freeipa/README-group.md /usr/share/doc/ansible-freeipa/playbooks/user Using ID views in Active Directory environments Enabling AD users to administer IdM 8.6. Ensuring the presence of member managers in IdM user groups using Ansible playbooks The following procedure describes ensuring the presence of IdM member managers - both users and user groups - using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the user or group you are adding as member managers and the name of the group you want them to manage. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary user and group member management information: Run the playbook: Verification You can verify if the group_a group contains test as a member manager and group_admins is a member manager of group_a by using the ipa group-show command: Log into ipaserver as administrator: Display information about managergroup1 : Additional resources See ipa host-add-member-manager --help . See the ipa man page on your system. 8.7. 
Ensuring the absence of member managers in IdM user groups using Ansible playbooks The following procedure describes ensuring the absence of IdM member managers - both users and user groups - using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the existing member manager user or group you are removing and the name of the group they are managing. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary user and group member management information: --- - name: Playbook to handle membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user and group members are absent for group_a ipagroup: ipaadmin_password: "{{ ipaadmin_password }}" name: group_a membermanager_user: test membermanager_group: group_admins action: member state: absent Run the playbook: Verification You can verify if the group_a group does not contain test as a member manager and group_admins as a member manager of group_a by using the ipa group-show command: Log into ipaserver as administrator: Display information about group_a: Additional resources See ipa host-remove-member-manager --help . See the ipa man page on your system.
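As listed in the example commands that follow, the add-useridoverride-to-group.yml playbook for Section 8.5 appears without the vars_files and tasks lines used by the other playbooks in this chapter. The following is a minimal sketch of the complete file, assuming the same layout as the other examples; the user ID override value is a placeholder that you replace with the override that exists in your Default Trust View.

---
- name: Playbook to ensure presence of users in a group
  hosts: ipaserver
  vars_files:
  - /home/user_name/MyPlaybooks/secret.yml
  tasks:
  - name: Ensure an AD user ID override is a member of the admins group
    ipagroup:
      ipaadmin_password: "{{ ipaadmin_password }}"
      name: admins
      idoverrideuser:
      # placeholder: replace with the AD user ID override in your Default Trust View
      - <ad_user>@<ad_domain>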
[ "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle groups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create group ops with gid 1234 ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: ops gidnumber: 1234 - name: Create group sysops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: sysops user: - idm_user - name: Create group appops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: appops - name: Add group members sysops and appops to group ops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: ops group: - sysops - appops", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-group-members.yml", "ssh [email protected] Password: [admin@server /]USD", "ipaserver]USD ipa group-show ops Group name: ops GID: 1234 Member groups: sysops, appops Indirect Member users: idm_user", "--- - name: Playbook to add nonposix and external groups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Add nonposix group sysops and external group appops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" groups: - name: sysops nonposix: true - name: appops external: true", "ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/add-nonposix-and-external-groups.yml", "cd ~/ MyPlaybooks /", "--- - name: Playbook to ensure presence of users in a group hosts: ipaserver - name: Ensure the [email protected] user ID override is a member of the admins group: ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: admins idoverrideuser: - [email protected]", "ansible-playbook --vault-password-file=password_file -v -i inventory add-useridoverride-to-group.yml", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure user test is present for group_a ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_a membermanager_user: test - name: Ensure group_admins is present for group_a ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_a membermanager_group: group_admins", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-member-managers-user-groups.yml", "ssh [email protected] Password: [admin@server /]USD", "ipaserver]USD ipa group-show group_a Group name: group_a GID: 1133400009 Membership managed by groups: group_admins Membership managed by users: test", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user and group members are absent for group_a ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_a membermanager_user: test membermanager_group: group_admins action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-member-managers-are-absent.yml", "ssh [email protected] Password: [admin@server /]USD", "ipaserver]USD ipa group-show group_a Group name: group_a GID: 1133400009" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/managing-user-groups-using-ansible-playbooks_using-ansible-to-install-and-manage-idm
11.6. Consistent Network Device Naming Using biosdevname
11.6. Consistent Network Device Naming Using biosdevname This feature, implemented through the biosdevname udev helper utility, will change the name of all embedded network interfaces, PCI card network interfaces, and virtual function network interfaces from the existing eth[0123...] to the new naming convention as shown in Table 11.2, "The biosdevname Naming Convention" . Note that unless the system is a Dell system, or biosdevname is explicitly enabled as described in Section 11.6.2, "Enabling and Disabling the Feature" , the systemd naming scheme will take precedence. Table 11.2. The biosdevname Naming Convention Device Old Name New Name Embedded network interface (LOM) eth[0123...] em[1234...] [a] PCI card network interface eth[0123...] p< slot >p< ethernet port > [b] Virtual function eth[0123...] p< slot >p< ethernet port >_< virtual interface > [c] [a] New enumeration starts at 1 . [b] For example: p3p4 [c] For example: p3p4_1 11.6.1. System Requirements The biosdevname program uses information from the system's BIOS, specifically the type 9 (System Slot) and type 41 (Onboard Devices Extended Information) fields contained within the SMBIOS. If the system's BIOS does not have SMBIOS version 2.6 or higher and this data, the new naming convention will not be used. Most older hardware does not support this feature because of a lack of BIOSes with the correct SMBIOS version and field information. For BIOS or SMBIOS version information, contact your hardware vendor. For this feature to take effect, the biosdevname package must also be installed. To install it, issue the following command as root : 11.6.2. Enabling and Disabling the Feature To disable this feature, pass the following option on the boot command line, both during and after installation: To enable this feature, pass the following option on the boot command line, both during and after installation: Unless the system meets the minimum requirements, this option will be ignored and the system will use the systemd naming scheme as described in the beginning of the chapter. If the biosdevname install option is specified, it must remain as a boot option for the lifetime of the system.
[ "~]# yum install biosdevname", "biosdevname=0", "biosdevname=1" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-consistent_network_device_naming_using_biosdevname
Chapter 5. Connecting clients to the router network
Chapter 5. Connecting clients to the router network After creating a router network, you can connect clients (messaging applications) to it so that they can begin sending and receiving messages. By default, the Red Hat Integration - AMQ Interconnect creates a Service for the router deployment and configures the following ports for client access: 5672 for plain AMQP traffic without authentication 5671 for AMQP traffic secured with TLS authentication To connect clients to the router network, you can do the following: If any clients are outside of the OpenShift cluster, expose the ports so that they can connect to the router network. Configure your clients to connect to the router network. 5.1. Exposing ports for clients outside of OpenShift Container Platform You expose ports to enable clients outside of the OpenShift Container Platform cluster to connect to the router network. Procedure Start editing the Interconnect Custom Resource YAML file that describes the router deployment for which you want to expose ports. In the spec.listeners section, expose each port that you want clients outside of the cluster to be able to access. In this example, port 5671 is exposed. This enables clients outside of the cluster to authenticate with and connect to the router network. Sample router-mesh.yaml file apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: router-mesh spec: ... listeners: - port: 5672 - authenticatePeer: true expose: true http: true port: 8080 - port: 5671 sslProfile: default expose: true ... The Red Hat Integration - AMQ Interconnect creates a Route, which clients from outside the cluster can use to connect to the router network. 5.2. Authentication for client connections When you create a router deployment, the Red Hat Integration - AMQ Interconnect uses the Red Hat Integration - AMQ Certificate Manager Operator to create default SSL/TLS certificates for client authentication, and configures port 5671 for SSL encryption. 5.3. Configuring clients to connect to the router network You can connect messaging clients running in the same OpenShift cluster as the router network, a different cluster, or outside of OpenShift altogether so that they can exchange messages. Prerequisites If the client is outside of the OpenShift Container Platform cluster, a connecting port must be exposed. For more information, see Section 5.1, "Exposing ports for clients outside of OpenShift Container Platform" . Procedure To connect a client to the router network, use the following connection URL format: <scheme> Use one of the following: amqp - unencrypted TCP from within the same OpenShift cluster amqps - for secure connections using SSL/TLS authentication amqpws - AMQP over WebSockets for unencrypted connections from outside the OpenShift cluster <username> If you deployed the router mesh with user name/password authentication, provide the client's user name. <host> If the client is in the same OpenShift cluster as the router network, use the OpenShift Service host name. Otherwise, use the host name of the Route. <port> If you are connecting to a Route, you must specify the port. To connect on an unsecured connection, use port 80 . Otherwise, to connect on a secured connection, use port 443 . Note To connect on an unsecured connection (port 80 ), the client must use AMQP over WebSockets ( amqpws ). The following table shows some example connection URLs. 
URL Description amqp://admin@router-mesh:5672 The client and router network are both in the same OpenShift cluster, so the Service host name is used for the connection URL. In this case, user name/password authentication is implemented, which requires the user name ( admin ) to be provided. amqps://router-mesh-myproject.mycluster.com:443 The client is outside of OpenShift, so the Route host name is used for the connection URL. In this case, SSL/TLS authentication is implemented, which requires the amqps scheme and port 443 . amqpws://router-mesh-myproject.mycluster.com:80 The client is outside of OpenShift, so the Route host name is used for the connection URL. In this case, no authentication is implemented, which means the client must use the amqpws scheme and port 80 .
[ "oc edit -f router-mesh.yaml", "apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: router-mesh spec: listeners: - port: 5672 - authenticatePeer: true expose: true http: true port: 8080 - port: 5671 sslProfile: default expose: true", "< scheme >://[< username >@]< host >[:< port >]" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/deploying_amq_interconnect_on_openshift/connecting-clients-router-network-router-ocp
Chapter 5. Verifying external OpenShift Data Foundation storage cluster deployment
Chapter 5. Verifying external OpenShift Data Foundation storage cluster deployment Use this section to verify that the OpenShift Data Foundation deployed as external storage is deployed correctly. 5.1. Verifying the state of the pods Click Workloads Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 5.1, "Pods corresponding to OpenShift Data Foundation components" Verify that the following pods are in running state: Table 5.1. Pods corresponding to OpenShift Data Foundation components Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) Note This pod must be present in openshift-storage-extended namespace as well such that there is 1 pod in each openshift-storage and openshift-storage extended namespace. odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) csi-addons-controller-manager-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) 5.2. Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and verify that you can now see two storage system links in the pop up that appears. Click each of the storage system links and verify the following: In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 5.3. Verifying that the Multicloud Object Gateway is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the Multicloud Object Gateway (MCG) information is displayed. Note The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details are included while deploying OpenShift Data Foundation in external mode. For more information on the health of OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation . 5.4. Verifying that the storage classes are created and listed Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-external-storagecluster-ceph-rbd ocs-external-storagecluster-ceph-rgw ocs-external-storagecluster-cephfs Note If an MDS is not deployed in the external cluster, ocs-external-storagecluster-cephfs storage class will not be created. 
If RGW is not deployed in the external cluster, the ocs-external-storagecluster-ceph-rgw storage class will not be created. For more information regarding MDS and RGW, see Red Hat Ceph Storage documentation 5.5. Verifying that Ceph cluster is connected Run the following command to verify if the OpenShift Data Foundation cluster is connected to the external Red Hat Ceph Storage cluster. 5.6. Verifying that storage cluster is ready Run the following command to verify if the storage cluster is ready and the External option is set to true .
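If you also want to confirm that these storage classes can provision volumes, you can create a small test PersistentVolumeClaim against one of them. The following is an illustrative sketch only: the claim name, namespace, and requested size are arbitrary example values, and only the storageClassName is taken from the list above.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test-pvc          # example name
  namespace: default          # any project where you can create claims
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # example size
  storageClassName: ocs-external-storagecluster-ceph-rbd

Once the claim reports a Bound status, the external cluster is serving block volumes as expected. Delete the claim afterwards to clean up.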
[ "oc get cephcluster -n openshift-storage-extended NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH EXTERNAL FSID ocs-external-storagecluster-cephcluster 51m Connected Cluster connected successfully HEALTH_OK true 8f01d842-d4b2-11ee-b43c-0050568fb522", "oc get storagecluster -n openshift-storage-extended NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 51m Ready true 2024-02-28T10:05:54Z 4.17.0" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_multiple_openshift_data_foundation_storage_clusters/verifying-odf-external-storage-cluster-deployment_rhodf
Chapter 5. Configuring Red Hat OpenStack Platform director Operator for Service Telemetry Framework
Chapter 5. Configuring Red Hat OpenStack Platform director Operator for Service Telemetry Framework To collect metrics, events, or both, and to send them to the Service Telemetry Framework (STF) storage domain, you must configure the Red Hat OpenStack Platform (RHOSP) overcloud to enable data collection and transport. STF can support both single and multiple clouds. The default configuration in RHOSP and STF set up for a single cloud installation. For a single RHOSP overcloud deployment using director Operator with default configuration, see Section 5.1, "Deploying Red Hat OpenStack Platform overcloud for Service Telemetry Framework using director Operator" . 5.1. Deploying Red Hat OpenStack Platform overcloud for Service Telemetry Framework using director Operator When you deploy the Red Hat OpenStack Platform (RHOSP) overcloud deployment using director Operator, you must configure the data collectors and the data transport for Service Telemetry Framework (STF). Prerequisites You are familiar with deploying and managing RHOSP with the RHOSP director Operator. Procedure Section 4.1.1, "Getting CA certificate from Service Telemetry Framework for overcloud configuration" Retrieving the AMQ Interconnect route address Creating the base configuration for director Operator for STF Configuring the STF connection for the overcloud Deploying the overcloud for director operator Additional resources For more information about deploying an OpenStack cloud using director Operator, see https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.1/html/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index To collect data through AMQ Interconnect, see the amqp1 plug-in . 5.1.1. Getting CA certificate from Service Telemetry Framework for overcloud configuration To connect your Red Hat OpenStack Platform (RHOSP) overcloud to Service Telemetry Framework (STF), retrieve the CA certificate of AMQ Interconnect that runs within STF and use the certificate in RHOSP configuration. Procedure View a list of available certificates in STF: USD oc get secrets Retrieve and note the content of the default-interconnect-selfsigned Secret: USD oc get secret/default-interconnect-selfsigned -o jsonpath='{.data.ca\.crt}' | base64 -d 5.1.2. Retrieving the AMQ Interconnect route address When you configure the Red Hat OpenStack Platform (RHOSP) overcloud for Service Telemetry Framework (STF), you must provide the AMQ Interconnect route address in the STF connection file. Procedure Log in to your Red Hat OpenShift Container Platform environment where STF is hosted. Change to the service-telemetry project: USD oc project service-telemetry Retrieve the AMQ Interconnect route address: USD oc get routes -ogo-template='{{ range .items }}{{printf "%s\n" .spec.host }}{{ end }}' | grep "\-5671" default-interconnect-5671-service-telemetry.apps.infra.watch 5.1.3. Creating the base configuration for director Operator for STF Edit the heat-env-config-deploy ConfigMap to add the base Service Telemetry Framework (STF) configuration to the overcloud nodes. 
Procedure Log in to the Red Hat OpenShift Container Platform environment where RHOSP director Operator is deployed and change to the project that hosts your RHOSP deployment: USD oc project openstack Open the heat-env-config-deploy ConfigMap CR for editing: USD oc edit heat-env-config-deploy Add the enable-stf.yaml configuration to the heat-env-config-deploy ConfigMap, save your edits and close the file: enable-stf.yaml apiVersion: v1 data: [...] enable-stf.yaml: | parameter_defaults: # only send to STF, not other publishers PipelinePublishers: [] # manage the polling and pipeline configuration files for Ceilometer agents ManagePolling: true ManagePipeline: true ManageEventPipeline: false # enable Ceilometer metrics and events CeilometerQdrPublishMetrics: true # enable collection of API status CollectdEnableSensubility: true CollectdSensubilityTransport: amqp1 # enable collection of containerized service metrics CollectdEnableLibpodstats: true # set collectd overrides for higher telemetry resolution and extra plugins # to load CollectdConnectionType: amqp1 CollectdAmqpInterval: 30 CollectdDefaultPollingInterval: 30 # to collect information about the virtual memory subsystem of the kernel # CollectdExtraPlugins: # - vmem # set standard prefixes for where metrics are published to QDR MetricsQdrAddresses: - prefix: 'collectd' distribution: multicast - prefix: 'anycast/ceilometer' distribution: multicast ExtraConfig: ceilometer::agent::polling::polling_interval: 30 ceilometer::agent::polling::polling_meters: - cpu - memory.usage # to avoid filling the memory buffers if disconnected from the message bus # note: this may need an adjustment if there are many metrics to be sent. collectd::plugin::amqp1::send_queue_limit: 5000 # to receive extra information about virtual memory, you must enable vmem plugin in CollectdExtraPlugins # collectd::plugin::vmem::verbose: true # provide name and uuid in addition to hostname for better correlation # to ceilometer data collectd::plugin::virt::hostname_format: "name uuid hostname" # to capture all extra_stats metrics, comment out below config collectd::plugin::virt::extra_stats: cpu_util vcpu disk # provide the human-friendly name of the virtual instance collectd::plugin::virt::plugin_instance_format: metadata # set memcached collectd plugin to report its metrics by hostname # rather than host IP, ensuring metrics in the dashboard remain uniform collectd::plugin::memcached::instances: local: host: "%{hiera('fqdn_canonical')}" port: 11211 # report root filesystem storage metrics collectd::plugin::df::ignoreselected: false Additional resources For more information about configuring the enable-stf.yaml environment file, see Section 4.1.4, "Creating the base configuration for STF" For more information about adding heat templates to a Red Hat OpenStack Platform director Operator deployment, see Adding heat templates and environment files with the director Operator 5.1.4. Configuring the STF connection for director Operator for the overcloud Edit the heat-env-config-deploy ConfigMap to create a connection from Red Hat OpenStack Platform (RHOSP) to Service Telemetry Framework.
Procedure Log in to the Red Hat OpenShift Container Platform environment where RHOSP director Operator is deployed and change to the project that hosts your RHOSP deployment: USD oc project openstack Open the heat-env-config-deploy ConfigMap for editing: USD oc edit configmap heat-env-config-deploy Add your stf-connectors.yaml configuration to the heat-env-config-deploy ConfigMap, appropriate to your environment, save your edits and close the file: stf-connectors.yaml apiVersion: v1 data: [...] stf-connectors.yaml: | resource_registry: OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-puppet.yaml parameter_defaults: MetricsQdrConnectors: - host: default-interconnect-5671-service-telemetry.apps.ostest.test.metalkube.org port: 443 role: edge verifyHostname: false sslProfile: sslProfile saslUsername: guest@default-interconnect saslPassword: <password_from_stf> MetricsQdrSSLProfiles: - name: sslProfile CeilometerQdrMetricsConfig: driver: amqp topic: cloud1-metering CollectdAmqpInstances: cloud1-telemetry: format: JSON presettle: false CollectdSensubilityResultsChannel: sensubility/cloud1-telemetry The resource_registry configuration directly loads the collectd service because you do not include the collectd-write-qdr.yaml environment file for multiple cloud deployments. Replace the host sub-parameter of MetricsQdrConnectors with the value that you retrieved in Section 4.1.3, "Retrieving the AMQ Interconnect route address" . Replace the <password_from_stf> portion of the saslPassword sub-parameter of MetricsQdrConnectors with the value you retrieved in Section 4.1.2, "Retrieving the AMQ Interconnect password" . Replace the caCertFileContent parameter with the contents retrieved in Section 4.1.1, "Getting CA certificate from Service Telemetry Framework for overcloud configuration" . Set topic value of CeilometerQdrMetricsConfig.topic to define the topic for Ceilometer metrics. The value is a unique topic identifier for the cloud such as cloud1-metering . Set CollectdAmqpInstances sub-parameter to define the topic for collectd metrics. The section name is a unique topic identifier for the cloud such as cloud1-telemetry . Set CollectdSensubilityResultsChannel to define the topic for collectd-sensubility events. The value is a unique topic identifier for the cloud such as sensubility/cloud1-telemetry . Additional resources For more information about the stf-connectors.yaml environment file, see Section 4.1.5, "Configuring the STF connection for the overcloud" . For more information about adding heat templates to a RHOSP director Operator deployment, see Adding heat templates and environment files with the director Operator 5.1.5. Deploying the overcloud for director Operator Deploy or update the overcloud with the required environment files so that data is collected and transmitted to Service Telemetry Framework (STF). Procedure Log in to the Red Hat OpenShift Container Platform environment where RHOSP director Operator is deployed and change to the project that hosts your RHOSP deployment: USD oc project openstack Open the OpenStackConfigGenerator custom resource for editing: USD oc edit OpenStackConfigGenerator Add the metrics/ceilometer-write-qdr.yaml and metrics/qdr-edge-only.yaml environment files as values for the heatEnvs parameter. 
Save your edits, and close the OpenStackConfigGenerator custom resource: Note If you already deployed a Red Hat OpenStack Platform environment using director Operator, you must delete the existing OpenStackConfigGenerator and create a new object with the full configuration in order to re-generate the OpenStackConfigVersion . OpenStackConfigGenerator apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackConfigGenerator metadata: name: default namespace: openstack spec: heatEnvConfigMap: heat-env-config-deploy heatEnvs: - <existing_environment_file_references> - metrics/ceilometer-write-qdr.yaml - metrics/qdr-edge-only.yaml If you already deployed a Red Hat OpenStack Platform environment using director Operator and generated a new OpenStackConfigVersion , edit the OpenStackDeploy object of your deployment, and set the value of spec.configVersion to the new OpenStackConfigVersion in order to update the overcloud deployment. Additional resources For more information about getting the latest OpenStackConfigVersion , see Obtain the latest OpenStackConfigVersion For more information about applying the overcloud configuration with director Operator, see Applying overcloud configuration with the director Operator
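If you are updating an existing deployment rather than creating a new one, the OpenStackDeploy edit described above amounts to pointing spec.configVersion at the newly generated version. The following is a minimal sketch, assuming the deployment object is named default and uses the same osp-director.openstack.org/v1beta1 API group as the OpenStackConfigGenerator shown earlier; the config version value is a placeholder for the hash you obtain from the latest OpenStackConfigVersion.

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: default
  namespace: openstack
spec:
  # placeholder: replace with the hash of the latest generated OpenStackConfigVersion
  configVersion: <new_config_version_hash>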
[ "oc get secrets", "oc get secret/default-interconnect-selfsigned -o jsonpath='{.data.ca\\.crt}' | base64 -d", "oc project service-telemetry", "oc get routes -ogo-template='{{ range .items }}{{printf \"%s\\n\" .spec.host }}{{ end }}' | grep \"\\-5671\" default-interconnect-5671-service-telemetry.apps.infra.watch", "oc project openstack", "oc edit heat-env-config-deploy", "apiVersion: v1 data: [...] enable-stf.yaml: | parameter_defaults: # only send to STF, not other publishers PipelinePublishers: [] # manage the polling and pipeline configuration files for Ceilometer agents ManagePolling: true ManagePipeline: true ManageEventPipeline: false # enable Ceilometer metrics and events CeilometerQdrPublishMetrics: true # enable collection of API status CollectdEnableSensubility: true CollectdSensubilityTransport: amqp1 # enable collection of containerized service metrics CollectdEnableLibpodstats: true # set collectd overrides for higher telemetry resolution and extra plugins # to load CollectdConnectionType: amqp1 CollectdAmqpInterval: 30 CollectdDefaultPollingInterval: 30 # to collect information about the virtual memory subsystem of the kernel # CollectdExtraPlugins: # - vmem # set standard prefixes for where metrics are published to QDR MetricsQdrAddresses: - prefix: 'collectd' distribution: multicast - prefix: 'anycast/ceilometer' distribution: multicast ExtraConfig: ceilometer::agent::polling::polling_interval: 30 ceilometer::agent::polling::polling_meters: - cpu - memory.usage # to avoid filling the memory buffers if disconnected from the message bus # note: this may need an adjustment if there are many metrics to be sent. collectd::plugin::amqp1::send_queue_limit: 5000 # to receive extra information about virtual memory, you must enable vmem plugin in CollectdExtraPlugins # collectd::plugin::vmem::verbose: true # provide name and uuid in addition to hostname for better correlation # to ceilometer data collectd::plugin::virt::hostname_format: \"name uuid hostname\" # to capture all extra_stats metrics, comment out below config collectd::plugin::virt::extra_stats: cpu_util vcpu disk # provide the human-friendly name of the virtual instance collectd::plugin:ConfigMap :virt::plugin_instance_format: metadata # set memcached collectd plugin to report its metrics by hostname # rather than host IP, ensuring metrics in the dashboard remain uniform collectd::plugin::memcached::instances: local: host: \"%{hiera('fqdn_canonical')}\" port: 11211 # report root filesystem storage metrics collectd::plugin::df::ignoreselected: false", "oc project openstack", "oc edit configmap heat-env-config-deploy", "apiVersion: v1 data: [...] 
stf-connectors.yaml: | resource_registry: OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-puppet.yaml parameter_defaults: MetricsQdrConnectors: - host: default-interconnect-5671-service-telemetry.apps.ostest.test.metalkube.org port: 443 role: edge verifyHostname: false sslProfile: sslProfile saslUsername: guest@default-interconnect saslPassword: <password_from_stf> MetricsQdrSSLProfiles: - name: sslProfile CeilometerQdrMetricsConfig: driver: amqp topic: cloud1-metering CollectdAmqpInstances: cloud1-telemetry: format: JSON presettle: false CollectdSensubilityResultsChannel: sensubility/cloud1-telemetry", "oc project openstack", "oc edit OpenStackConfigGenerator", "apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackConfigGenerator metadata: name: default namespace: openstack spec: heatEnvConfigMap: heat-env-config-deploy heatEnvs: - <existing_environment_file_references> - metrics/ceilometer-write-qdr.yaml - metrics/qdr-edge-only.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/service_telemetry_framework_1.5/assembly-completing-the-stf-configuration-using-director-operator_assembly
Chapter 154. KafkaMirrorMaker2Status schema reference
Chapter 154. KafkaMirrorMaker2Status schema reference Used in: KafkaMirrorMaker2 Property Property type Description conditions Condition array List of status conditions. observedGeneration integer The generation of the CRD that was last reconciled by the operator. url string The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors. connectors map array List of MirrorMaker 2 connector statuses, as reported by the Kafka Connect REST API. autoRestartStatuses AutoRestartStatus array List of MirrorMaker 2 connector auto restart statuses. connectorPlugins ConnectorPlugin array The list of connector plugins available in this Kafka Connect deployment. labelSelector string Label selector for pods providing this resource. replicas integer The current number of pods being used to provide this resource.
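To illustrate how these properties appear together, the following is a hypothetical status block for a KafkaMirrorMaker2 resource. Every value shown (the condition, URL, connector entry, label selector, and replica count) is an invented example; the real content is populated by the Cluster Operator and reported by the Kafka Connect REST API.

status:
  conditions:
  - type: Ready
    status: "True"
    lastTransitionTime: "2024-01-01T00:00:00Z"    # example timestamp
  observedGeneration: 1
  url: http://example-mirrormaker2-connect-api.example-namespace.svc:8083    # example REST API endpoint
  connectors:
  - name: example-source->example-target.MirrorSourceConnector    # example connector status entry
    connector:
      state: RUNNING
  labelSelector: strimzi.io/cluster=example-mirrormaker2    # example label selector
  replicas: 1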
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkamirrormaker2status-reference
Chapter 5. Systems lifecycle in the inventory application
Chapter 5. Systems lifecycle in the inventory application A system is a Red Hat Enterprise Linux (RHEL) host that is managed by the Red Hat Insights inventory in the Red Hat Hybrid Cloud Console. System activity is automatically monitored by Red Hat. All systems registered with inventory follow a lifecycle that includes the following states: fresh , stale , and stale warning . The state that a system resides in depends on the last time it was reported by a data collector to the inventory application. Systems are automatically deleted from inventory if they do not report within a given time frame. The goal of the deletion mechanism is to maintain an up-to-date, accurate view of your inventory. Here is a description of each state: Fresh The default configuration requires systems to communicate with Red Hat daily. A system with the status of fresh, is active and is regularly reported to the inventory application. It will be reported by one of the data collectors described in section 1.2. Most systems are in this state during typical operations. Stale A system with the status of stale, has NOT been reported to the inventory application in the last day, which is equivalent to the last 26 hours. Stale warning A system with the status of stale warning, has NOT been reported to the inventory application in the last 14 days. When reaching this state, a system is flagged for automatic deletion. Once a system is removed from inventory it will no longer appear in the inventory application and Insights data analysis results will no longer be available. 5.1. Determining system state in inventory There are two ways to determine which state a system is currently in. 5.1.1. Determining system state in inventory as a user with viewer access If you have Inventory Hosts viewer access, you can view the system state on the Systems page by using the following steps: Prerequisites You have Inventory Hosts viewer access. Procedure Navigate to the Red Hat Insights > RHEL > Inventory page. Click the Filter drop-down list, and select Status . Click the Filter by status drop-down, and choose the states you want to include in your query. Click Reset filters to clear your query. 5.1.2. Determining system state in inventory as a user with administrator access If you have Inventory Hosts administrator access, you can get the system state of any system from the Dashboard by using the following steps: Prerequisites You have Inventory Hosts administrator access. Procedure Navigate to the Red Hat Insights for Red Hat Enterprise Linux dashboard page. Go to the top left of the screen where you can examine the total number of systems that are registered with Insights for Red Hat Enterprise Linux. After the total number, towards the right side of this value, you will see the number of stale systems and the number of systems to be removed . Click either: The stale systems link. The systems to be removed link, if applicable. This opens the inventory page where you view more granular details about the system. 5.2. Modifying system staleness and deletion time limits in inventory By default, system states have the following time limits: Systems are labeled stale if they are not reported in one day. A warning icon displays at the top of the Systems page in the Last seen: field. Systems are labeled stale warning if they are not reported within 7 days. In this case, the Last seen: field turns red. Systems that are not reported in 14 days are deleted. 
There are situations where a system is offline for an extended time period but is still in use. For example, test environments are often kept offline except when testing. Edge devices, submarines, or Internet of Things (IoT) devices, can be out of range of communication for extended time periods. You can modify the system staleness and deletion time limits to ensure that systems that are offline but still active do not get deleted. Staleness and deletion settings get applied to all of your conventional and immutable systems. Prerequisites You are logged into the Red Hat Hybrid Cloud Console as a user with the Organization staleness and deletion administrator role. Procedure On the Red Hat Hybrid Cloud Console main page, click RHEL in the Red Hat Insights tile. In the left navigation bar, click Inventory > System Configuration > Staleness and Deletion . The Staleness and Deletion page displays the current settings for system staleness, system stale warning, and system deletion for conventional systems. Optional: To manage the staleness and configuration settings for edge (immutable) systems, select the Immutable (OSTree) tab. To change these values, click Edit . The drop-down arrows to each value are now enabled. Click the arrow to the value that you want to change and then select a new value. Note The system stale warning value must be less than the system deletion value. Optional: To revert to the default values for the organization, click Reset . Click Save . Note Setting the system deletion maximum time to less than the current maximum time, deletes systems that have been stale for longer than the new maximum time.
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/viewing_and_managing_system_inventory_with_fedramp/systems-lifecycle_user-access
Chapter 22. Service [v1]
Chapter 22. Service [v1] Description Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy. Type object 22.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ServiceSpec describes the attributes that a user creates on a service. status object ServiceStatus represents the current status of a service. 22.1.1. .spec Description ServiceSpec describes the attributes that a user creates on a service. Type object Property Type Description allocateLoadBalancerNodePorts boolean allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. clusterIP string clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies clusterIPs array (string) ClusterIPs is a list of IP addresses assigned to this service, and are usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. 
This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be empty) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. If this field is not specified, it will be initialized from the clusterIP field. If this field is specified, clients must ensure that clusterIPs[0] and clusterIP have the same value. This field may hold a maximum of two entries (dual-stack IPs, in either order). These IPs must correspond to the values of the ipFamilies field. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies externalIPs array (string) externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system. externalName string externalName is the external reference that discovery mechanisms will return as an alias for this service (e.g. a DNS CNAME record). No proxying will be involved. Must be a lowercase RFC-1123 hostname ( https://tools.ietf.org/html/rfc1123 ) and requires type to be "ExternalName". externalTrafficPolicy string externalTrafficPolicy describes how nodes distribute service traffic they receive on one of the Service's "externally-facing" addresses (NodePorts, ExternalIPs, and LoadBalancer IPs). If set to "Local", the proxy will configure the service in a way that assumes that external load balancers will take care of balancing the service traffic between nodes, and so each node will deliver traffic only to the node-local endpoints of the service, without masquerading the client source IP. (Traffic mistakenly sent to a node with no endpoints will be dropped.) The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Note that traffic sent to an External IP or LoadBalancer IP from within the cluster will always get "Cluster" semantics, but clients sending to a NodePort from within the cluster may need to take traffic policy into account when picking a node. Possible enum values: - "Cluster" routes traffic to all endpoints. - "Local" preserves the source IP of the traffic by routing only to endpoints on the same node as the traffic was received on (dropping the traffic if there are no local endpoints). healthCheckNodePort integer healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. External systems (e.g. 
load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type). This field cannot be updated once set. internalTrafficPolicy string InternalTrafficPolicy describes how nodes distribute service traffic they receive on the ClusterIP. If set to "Local", the proxy will assume that pods only want to talk to endpoints of the service on the same node as the pod, dropping the traffic if there are no local endpoints. The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Possible enum values: - "Cluster" routes traffic to all endpoints. - "Local" routes traffic only to endpoints on the same node as the client pod (dropping the traffic if there are no local endpoints). ipFamilies array (string) IPFamilies is a list of IP families (e.g. IPv4, IPv6) assigned to this service. This field is usually assigned automatically based on cluster configuration and the ipFamilyPolicy field. If this field is specified manually, the requested family is available in the cluster, and ipFamilyPolicy allows it, it will be used; otherwise creation of the service will fail. This field is conditionally mutable: it allows for adding or removing a secondary IP family, but it does not allow changing the primary IP family of the Service. Valid values are "IPv4" and "IPv6". This field only applies to Services of types ClusterIP, NodePort, and LoadBalancer, and does apply to "headless" services. This field will be wiped when updating a Service to type ExternalName. This field may hold a maximum of two entries (dual-stack families, in either order). These families must correspond to the values of the clusterIPs field, if specified. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field. ipFamilyPolicy string IPFamilyPolicy represents the dual-stack-ness requested or required by this Service. If there is no value provided, then this field will be set to SingleStack. Services can be "SingleStack" (a single IP family), "PreferDualStack" (two IP families on dual-stack configured clusters or a single IP family on single-stack clusters), or "RequireDualStack" (two IP families on dual-stack configured clusters, otherwise fail). The ipFamilies and clusterIPs fields depend on the value of this field. This field will be wiped when updating a service to type ExternalName. Possible enum values: - "PreferDualStack" indicates that this service prefers dual-stack when the cluster is configured for dual-stack. If the cluster is not configured for dual-stack the service will be assigned a single IPFamily. If the IPFamily is not set in service.spec.ipFamilies then the service will be assigned the default IPFamily configured on the cluster - "RequireDualStack" indicates that this service requires dual-stack. Using IPFamilyPolicyRequireDualStack on a single stack cluster will result in validation errors. The IPFamilies (and their order) assigned to this service is based on service.spec.ipFamilies. If service.spec.ipFamilies was not provided then it will be assigned according to how they are configured on the cluster. If service.spec.ipFamilies has only one entry then the alternative IPFamily will be added by apiserver - "SingleStack" indicates that this service is required to have a single IPFamily. 
The IPFamily assigned is based on the default IPFamily used by the cluster or as identified by service.spec.ipFamilies field loadBalancerClass string loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type. loadBalancerIP string Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations. Using it is non-portable and it may not support dual-stack. Users are encouraged to use implementation-specific annotations when available. loadBalancerSourceRanges array (string) If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/ ports array The list of ports that are exposed by this service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies ports[] object ServicePort contains information on service's port. publishNotReadyAddresses boolean publishNotReadyAddresses indicates that any agent which deals with endpoints for this Service should disregard any indications of ready/not-ready. The primary use case for setting this field is for a StatefulSet's Headless Service to propagate SRV DNS records for its Pods for the purpose of peer discovery. The Kubernetes controllers that generate Endpoints and EndpointSlice resources for Services interpret this to mean that all endpoints are considered "ready" even if the Pods themselves are not. Agents which consume only Kubernetes generated endpoints through the Endpoints or EndpointSlice resources can safely assume this behavior. selector object (string) Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/ sessionAffinity string Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. 
More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies Possible enum values: - "ClientIP" is the Client IP based. - "None" - no session affinity. sessionAffinityConfig object SessionAffinityConfig represents the configurations of session affinity. trafficDistribution string TrafficDistribution offers a way to express preferences for how traffic is distributed to Service endpoints. Implementations can use this field as a hint, but are not required to guarantee strict adherence. If the field is not set, the implementation will apply its default routing strategy. If set to "PreferClose", implementations should prioritize endpoints that are topologically close (e.g., same zone). This is an alpha field and requires enabling ServiceTrafficDistribution feature. type string type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object or EndpointSlice objects. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a virtual IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the same endpoints as the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the same endpoints as the clusterIP. "ExternalName" aliases this service to the specified externalName. Several other fields do not apply to ExternalName services. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types Possible enum values: - "ClusterIP" means a service will only be accessible inside the cluster, via the cluster IP. - "ExternalName" means a service consists of only a reference to an external name that kubedns or equivalent will return as a CNAME record, with no exposing or proxying of any pods involved. - "LoadBalancer" means a service will be exposed via an external load balancer (if the cloud provider supports it), in addition to 'NodePort' type. - "NodePort" means a service will be exposed on one port of every node, in addition to 'ClusterIP' type. 22.1.2. .spec.ports Description The list of ports that are exposed by this service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies Type array 22.1.3. .spec.ports[] Description ServicePort contains information on service's port. Type object Required port Property Type Description appProtocol string The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. Valid values are either: * Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names ). 
* Kubernetes-defined prefixed names: * 'kubernetes.io/h2c' - HTTP/2 prior knowledge over cleartext as described in https://www.rfc-editor.org/rfc/rfc9113.html#name-starting-http-2-with-prior- * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455 * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455 * Other protocols should use implementation-defined prefixed names such as mycompany.com/my-custom-protocol. name string The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service. nodePort integer The port on each node on which this service is exposed when type is NodePort or LoadBalancer. Usually assigned by the system. If a value is specified, in-range, and not in use it will be used, otherwise the operation will fail. If not specified, a port will be allocated if this Service requires one. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type from NodePort to ClusterIP). More info: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport port integer The port that will be exposed by this service. protocol string The IP protocol for this port. Supports "TCP", "UDP", and "SCTP". Default is TCP. Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. targetPort IntOrString Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service 22.1.4. .spec.sessionAffinityConfig Description SessionAffinityConfig represents the configurations of session affinity. Type object Property Type Description clientIP object ClientIPConfig represents the configurations of Client IP based session affinity. 22.1.5. .spec.sessionAffinityConfig.clientIP Description ClientIPConfig represents the configurations of Client IP based session affinity. Type object Property Type Description timeoutSeconds integer timeoutSeconds specifies the seconds of ClientIP type session sticky time. The value must be >0 && ⇐86400(for 1 day) if ServiceAffinity == "ClientIP". Default value is 10800(for 3 hours). 22.1.6. .status Description ServiceStatus represents the current status of a service. Type object Property Type Description conditions array (Condition) Current service state loadBalancer object LoadBalancerStatus represents the status of a load-balancer. 22.1.7. .status.loadBalancer Description LoadBalancerStatus represents the status of a load-balancer. Type object Property Type Description ingress array Ingress is a list containing ingress points for the load-balancer. Traffic intended for the service should be sent to these ingress points. 
ingress[] object LoadBalancerIngress represents the status of a load-balancer ingress point: traffic intended for the service should be sent to an ingress point. 22.1.8. .status.loadBalancer.ingress Description Ingress is a list containing ingress points for the load-balancer. Traffic intended for the service should be sent to these ingress points. Type array 22.1.9. .status.loadBalancer.ingress[] Description LoadBalancerIngress represents the status of a load-balancer ingress point: traffic intended for the service should be sent to an ingress point. Type object Property Type Description hostname string Hostname is set for load-balancer ingress points that are DNS based (typically AWS load-balancers) ip string IP is set for load-balancer ingress points that are IP based (typically GCE or OpenStack load-balancers) ipMode string IPMode specifies how the load-balancer IP behaves, and may only be specified when the ip field is specified. Setting this to "VIP" indicates that traffic is delivered to the node with the destination set to the load-balancer's IP and port. Setting this to "Proxy" indicates that traffic is delivered to the node or pod with the destination set to the node's IP and node port or the pod's IP and port. Service implementations may use this information to adjust traffic routing. ports array Ports is a list of records of service ports If used, every port defined in the service should have an entry in it ports[] object 22.1.10. .status.loadBalancer.ingress[].ports Description Ports is a list of records of service ports If used, every port defined in the service should have an entry in it Type array 22.1.11. .status.loadBalancer.ingress[].ports[] Description Type object Required port protocol Property Type Description error string Error is to record the problem with the service port The format of the error shall comply with the following rules: - built-in error values shall be specified in this file and those shall use CamelCase names - cloud provider specific error values must have names that comply with the format foo.example.com/CamelCase. port integer Port is the port number of the service port of which status is recorded here protocol string Protocol is the protocol of the service port of which status is recorded here The supported values are: "TCP", "UDP", "SCTP" Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 22.2. API endpoints The following API endpoints are available: /api/v1/services GET : list or watch objects of kind Service /api/v1/watch/services GET : watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/services DELETE : delete collection of Service GET : list or watch objects of kind Service POST : create a Service /api/v1/watch/namespaces/{namespace}/services GET : watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/services/{name} DELETE : delete a Service GET : read the specified Service PATCH : partially update the specified Service PUT : replace the specified Service /api/v1/watch/namespaces/{namespace}/services/{name} GET : watch changes to an object of kind Service. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 
/api/v1/namespaces/{namespace}/services/{name}/status GET : read status of the specified Service PATCH : partially update status of the specified Service PUT : replace status of the specified Service 22.2.1. /api/v1/services HTTP method GET Description list or watch objects of kind Service Table 22.1. HTTP responses HTTP code Reponse body 200 - OK ServiceList schema 401 - Unauthorized Empty 22.2.2. /api/v1/watch/services HTTP method GET Description watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. Table 22.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 22.2.3. /api/v1/namespaces/{namespace}/services HTTP method DELETE Description delete collection of Service Table 22.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 22.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Service Table 22.5. HTTP responses HTTP code Reponse body 200 - OK ServiceList schema 401 - Unauthorized Empty HTTP method POST Description create a Service Table 22.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.7. Body parameters Parameter Type Description body Service schema Table 22.8. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 202 - Accepted Service schema 401 - Unauthorized Empty 22.2.4. /api/v1/watch/namespaces/{namespace}/services HTTP method GET Description watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. Table 22.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 22.2.5. /api/v1/namespaces/{namespace}/services/{name} Table 22.10. Global path parameters Parameter Type Description name string name of the Service HTTP method DELETE Description delete a Service Table 22.11. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 22.12. HTTP responses HTTP code Reponse body 200 - OK Service schema 202 - Accepted Service schema 401 - Unauthorized Empty HTTP method GET Description read the specified Service Table 22.13. HTTP responses HTTP code Reponse body 200 - OK Service schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Service Table 22.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.15. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Service Table 22.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.17. Body parameters Parameter Type Description body Service schema Table 22.18. 
HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty 22.2.6. /api/v1/watch/namespaces/{namespace}/services/{name} Table 22.19. Global path parameters Parameter Type Description name string name of the Service HTTP method GET Description watch changes to an object of kind Service. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 22.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 22.2.7. /api/v1/namespaces/{namespace}/services/{name}/status Table 22.21. Global path parameters Parameter Type Description name string name of the Service HTTP method GET Description read status of the specified Service Table 22.22. HTTP responses HTTP code Reponse body 200 - OK Service schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Service Table 22.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.24. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Service Table 22.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.26. Body parameters Parameter Type Description body Service schema Table 22.27. HTTP responses HTTP code Response body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty
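As an illustrative sketch of the create endpoint above, the following commands post a minimal Service manifest to the API server with curl. The my-app namespace, the my-service name, the selector labels, the port numbers, the $TOKEN bearer token, and the <api_server> host on port 6443 are placeholder assumptions and are not part of the API reference.
cat <<'EOF' > /tmp/my-service.json
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": { "name": "my-service" },
  "spec": {
    "selector": { "app": "my-app" },
    "ports": [ { "name": "http", "protocol": "TCP", "port": 80, "targetPort": 8080 } ]
  }
}
EOF
curl -k -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" --data @/tmp/my-service.json "https://<api_server>:6443/api/v1/namespaces/my-app/services"
A 201 - Created response returns the persisted Service schema; the same object can then be read back with a GET request to /api/v1/namespaces/my-app/services/my-service.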
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_apis/service-v1
8.57. file
8.57. file 8.57.1. RHSA-2014:1606 - Moderate: file security and bug fix update Updated file packages that fix multiple security issues and several bugs are now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having Moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The "file" command is used to identify a particular file according to the type of data contained in the file. The command can identify various file types, including ELF binaries, system libraries, RPM packages, and different graphics formats. Security Fixes CVE-2014-0237 , CVE-2014-0238 , CVE-2014-3479 , CVE-2014-3480 , CVE-2012-1571 Multiple denial of service flaws were found in the way file parsed certain Composite Document Format (CDF) files. A remote attacker could use either of these flaws to crash file, or an application using file, via a specially crafted CDF file. CVE-2014-1943 , CVE-2014-2270 Two denial of service flaws were found in the way file handled indirect and search rules. A remote attacker could use either of these flaws to cause file, or an application using file, to crash or consume an excessive amount of CPU. Bug Fixes BZ# 664513 Previously, the output of the "file" command contained redundant white spaces. With this update, the new STRING_TRIM flag has been introduced to remove the unnecessary white spaces. BZ# 849621 Due to a bug, the "file" command could incorrectly identify an XML document as a LaTex document. The underlying source code has been modified to fix this bug and the command now works as expected. BZ# 873997 Previously, the "file" command could not recognize .JPG files and incorrectly labeled them as "Minix filesystem". This bug has been fixed and the command now properly detects .JPG files. BZ# 884396 Under certain circumstances, the "file" command incorrectly detected NETpbm files as "x86 boot sector". This update applies a patch to fix this bug and the command now detects NETpbm files as expected. BZ# 980941 Previously, the "file" command incorrectly identified ASCII text files as a .PIC image file. With this update, a patch has been provided to address this bug and the command now correctly recognizes ASCII text files. BZ# 1037279 On 32-bit PowerPC systems, the "from" field was missing from the output of the "file" command. The underlying source code has been modified to fix this bug and "file" output now contains the "from" field as expected. BZ# 1064463 The "file" command incorrectly detected text files as "RRDTool DB version ool - Round Robin Database Tool". This update applies a patch to fix this bug and the command now correctly detects text files. BZ# 1067771 Previously, the "file" command supported only version 1 and 2 of the QCOW format. As a consequence, file was unable to detect a "qcow2 compat=1.1" file created on Red Hat Enterprise Linux 7. With this update, support for QCOW version 3 has been added so that the command now detects such files as expected. All file users are advised to upgrade to these updated packages, which contain backported patches to correct these issues.
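As a hedged illustration only (the yum and rpm commands are standard package-management usage; the disk.qcow2 path is a placeholder and does not come from the advisory), the update can be applied and verified as follows:
yum update file
rpm -q file
file /var/lib/libvirt/images/disk.qcow2
With the updated package, the last command should identify a "qcow2 compat=1.1" image as a QEMU QCOW image rather than leaving it unrecognized, in line with the fix described for BZ#1067771.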
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/file
18.3.4.2. UDP Protocol
18.3.4.2. UDP Protocol These match options are available for the UDP protocol ( -p udp ): --dport - Specifies the destination port of the UDP packet, using the service name, port number, or range of port numbers. The --destination-port match option is synonymous with --dport . --sport - Specifies the source port of the UDP packet, using the service name, port number, or range of port numbers. The --source-port match option is synonymous with --sport .
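For illustration, assuming a filter table INPUT chain and arbitrary example ports (not taken from the text above), these match options can be used as follows:
iptables -A INPUT -p udp --dport 53 -j ACCEPT
iptables -A INPUT -p udp --sport 1024:65535 --dport 514 -j ACCEPT
The first rule matches UDP packets destined for port 53; the second combines --sport and --dport to match traffic from an unprivileged source port range to destination port 514.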
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-iptables-options-match-udp
Chapter 6. Configuring an AMQ Streams on RHEL deployment
Chapter 6. Configuring an AMQ Streams on RHEL deployment Use the Kafka and ZooKeeper properties files to configure AMQ Streams. ZooKeeper /kafka/config/zookeeper.properties Kafka /kafka/config/server.properties The properties files are in the Java format, with each property on separate line in the following format: Lines starting with # or ! will be treated as comments and will be ignored by AMQ Streams components. Values can be split into multiple lines by using \ directly before the newline / carriage return. After you save the changes in the properties files, you need to restart the Kafka broker or ZooKeeper. In a multi-node environment, you will need to repeat the process on each node in the cluster. 6.1. Using standard Kafka configuration properties Use standard Kafka configuration properties to configure Kafka components. The properties provide options to control and tune the configuration of the following Kafka components: Brokers Topics Clients (producers and consumers) Admin client Kafka Connect Kafka Streams Broker and client parameters include options to configure authorization, authentication and encryption. Note For AMQ Streams on OpenShift, some configuration properties are managed entirely by AMQ Streams and cannot be changed. For further information on Kafka configuration properties and how to use the properties to tune your deployment, see the following guides: Kafka configuration properties Kafka configuration tuning 6.2. Loading configuration values from environment variables Use the Environment Variables Configuration Provider plugin to load configuration data from environment variables. You can use the Environment Variables Configuration Provider, for example, to load certificates or JAAS configuration from environment variables. You can use the provider to load configuration data for all Kafka components, including producers and consumers. Use the provider, for example, to provide the credentials for Kafka Connect connector configuration. Prerequisites AMQ Streams is downloaded and installed on the host Environment Variables Configuration Provider JAR file The JAR file is available from the AMQ Streams archive Procedure Add the Environment Variables Configuration Provider JAR file to the Kafka libs directory. Initialize the Environment Variables Configuration Provider in the configuration properties file of the Kafka component. For example, to initialize the provider for Kafka, add the configuration to the server.properties file. Configuration to enable the Environment Variables Configuration Provider config.providers=env config.providers.env.class=io.strimzi.kafka.EnvVarConfigProvider Add configuration to the properties file to load data from environment variables. Configuration to load data from an environment variable option=USD{env: <MY_ENV_VAR_NAME> } Use capitalized or upper-case environment variable naming conventions, such as MY_ENV_VAR_NAME . Save the changes. Restart the Kafka component. For information on restarting brokers in a multi-node cluster, see Section 4.3, "Performing a graceful rolling restart of Kafka brokers" . 6.3. Configuring ZooKeeper Kafka uses ZooKeeper to store configuration data and for cluster coordination. It is strongly recommended to run a cluster of replicated ZooKeeper instances. 6.3.1. Basic configuration The most important ZooKeeper configuration options are: tickTime ZooKeeper's basic time unit in milliseconds. It is used for heartbeats and session timeouts. For example, minimum session timeout will be two ticks. 
dataDir The directory where ZooKeeper stores its transaction logs and snapshots of its in-memory database. This should be set to the /var/lib/zookeeper/ directory that was created during installation. clientPort Port number where clients can connect. Defaults to 2181 . An example ZooKeeper configuration file named config/zookeeper.properties is located in the AMQ Streams installation directory. It is recommended to place the dataDir directory on a separate disk device to minimize the latency in ZooKeeper. ZooKeeper configuration file should be located in /opt/kafka/config/zookeeper.properties . A basic example of the configuration file can be found below. The configuration file has to be readable by the kafka user. tickTime=2000 dataDir=/var/lib/zookeeper/ clientPort=2181 6.3.2. ZooKeeper cluster configuration In most production environments, we recommend you deploy a cluster of replicated ZooKeeper instances. A stable and highly available ZooKeeper cluster is important for running for a reliable ZooKeeper service. ZooKeeper clusters are also referred to as ensembles . ZooKeeper clusters usually consist of an odd number of nodes. ZooKeeper requires that a majority of the nodes in the cluster are up and running. For example: In a cluster with three nodes, at least two of the nodes must be up and running. This means it can tolerate one node being down. In a cluster consisting of five nodes, at least three nodes must be available. This means it can tolerate two nodes being down. In a cluster consisting of seven nodes, at least four nodes must be available. This means it can tolerate three nodes being down. Having more nodes in the ZooKeeper cluster delivers better resiliency and reliability of the whole cluster. ZooKeeper can run in clusters with an even number of nodes. The additional node, however, does not increase the resiliency of the cluster. A cluster with four nodes requires at least three nodes to be available and can tolerate only one node being down. Therefore it has exactly the same resiliency as a cluster with only three nodes. Ideally, the different ZooKeeper nodes should be located in different data centers or network segments. Increasing the number of ZooKeeper nodes increases the workload spent on cluster synchronization. For most Kafka use cases, a ZooKeeper cluster with 3, 5 or 7 nodes should be sufficient. Warning A ZooKeeper cluster with 3 nodes can tolerate only 1 unavailable node. This means that if a cluster node crashes while you are doing maintenance on another node your ZooKeeper cluster will be unavailable. Replicated ZooKeeper configuration supports all configuration options supported by the standalone configuration. Additional options are added for the clustering configuration: initLimit Amount of time to allow followers to connect and sync to the cluster leader. The time is specified as a number of ticks (see the tickTime option for more details). syncLimit Amount of time for which followers can be behind the leader. The time is specified as a number of ticks (see the tickTime option for more details). reconfigEnabled Enables or disables dynamic reconfiguration. Must be enabled in order to add or remove servers to a ZooKeeper cluster. standaloneEnabled Enables or disables standalone mode, where ZooKeeper runs with only one server. In addition to the options above, every configuration file should contain a list of servers which should be members of the ZooKeeper cluster. 
The server records should be specified in the format server.id=hostname:port1:port2 , where: id The ID of the ZooKeeper cluster node. hostname The hostname or IP address where the node listens for connections. port1 The port number used for intra-cluster communication. port2 The port number used for leader election. The following is an example configuration file of a ZooKeeper cluster with three nodes: tickTime=2000 dataDir=/var/lib/zookeeper/ initLimit=5 syncLimit=2 reconfigEnabled=true standaloneEnabled=false server.1=172.17.0.1:2888:3888:participant;172.17.0.1:2181 server.2=172.17.0.2:2888:3888:participant;172.17.0.2:2181 server.3=172.17.0.3:2888:3888:participant;172.17.0.3:2181 Tip To use four letter word commands, specify 4lw.commands.whitelist=* in zookeeper.properties . myid files Each node in the ZooKeeper cluster must be assigned a unique ID . Each node's ID must be configured in a myid file and stored in the dataDir folder, like /var/lib/zookeeper/ . The myid files should contain only a single line with the written ID as text. The ID can be any integer from 1 to 255. You must manually create this file on each cluster node. Using this file, each ZooKeeper instance will use the configuration from the corresponding server. line in the configuration file to configure its listeners. It will also use all other server. lines to identify other cluster members. In the above example, there are three nodes, so each one will have a different myid with values 1 , 2 , and 3 respectively. 6.3.3. Authentication By default, ZooKeeper does not use any form of authentication and allows anonymous connections. However, it supports Java Authentication and Authorization Service (JAAS) which can be used to set up authentication using Simple Authentication and Security Layer (SASL). ZooKeeper supports authentication using the DIGEST-MD5 SASL mechanism with locally stored credentials. 6.3.3.1. Authentication with SASL JAAS is configured using a separate configuration file. It is recommended to place the JAAS configuration file in the same directory as the ZooKeeper configuration ( /opt/kafka/config/ ). The recommended file name is zookeeper-jaas.conf . When using a ZooKeeper cluster with multiple nodes, the JAAS configuration file has to be created on all cluster nodes. JAAS is configured using contexts. Separate parts such as the server and client are always configured with a separate context . The context is a configuration option and has the following format: SASL Authentication is configured separately for server-to-server communication (communication between ZooKeeper instances) and client-to-server communication (communication between Kafka and ZooKeeper). Server-to-server authentication is relevant only for ZooKeeper clusters with multiple nodes. Server-to-Server authentication For server-to-server authentication, the JAAS configuration file contains two parts: The server configuration The client configuration When using DIGEST-MD5 SASL mechanism, the QuorumServer context is used to configure the authentication server. It must contain all the usernames to be allowed to connect together with their passwords in an unencrypted form. The second context, QuorumLearner , has to be configured for the client which is built into ZooKeeper. It also contains the password in an unencrypted form. 
An example of the JAAS configuration file for DIGEST-MD5 mechanism can be found below: In addition to the JAAS configuration file, you must enable the server-to-server authentication in the regular ZooKeeper configuration file by specifying the following options: Use the KAFKA_OPTS environment variable to pass the JAAS configuration file to the ZooKeeper server as a Java property: For more information about server-to-server authentication, see ZooKeeper wiki . Client-to-Server authentication Client-to-server authentication is configured in the same JAAS file as the server-to-server authentication. However, unlike the server-to-server authentication, it contains only the server configuration. The client part of the configuration has to be done in the client. For information on how to configure a Kafka broker to connect to ZooKeeper using authentication, see the Kafka installation section. Add the Server context to the JAAS configuration file to configure client-to-server authentication. For DIGEST-MD5 mechanism it configures all usernames and passwords: After configuring the JAAS context, enable the client-to-server authentication in the ZooKeeper configuration file by adding the following line: You must add the authProvider. <ID> property for every server that is part of the ZooKeeper cluster. Use the KAFKA_OPTS environment variable to pass the JAAS configuration file to the ZooKeeper server as a Java property: For more information about configuring ZooKeeper authentication in Kafka brokers, see Section 6.4.5, "ZooKeeper authentication" . 6.3.3.2. Enabling server-to-server authentication using DIGEST-MD5 This procedure describes how to enable authentication using the SASL DIGEST-MD5 mechanism between the nodes of the ZooKeeper cluster. Prerequisites AMQ Streams is installed on the host ZooKeeper cluster is configured with multiple nodes. Enabling SASL DIGEST-MD5 authentication On all ZooKeeper nodes, create or edit the /opt/kafka/config/zookeeper-jaas.conf JAAS configuration file and add the following contexts: The username and password must be the same in both JAAS contexts. For example: On all ZooKeeper nodes, edit the /opt/kafka/config/zookeeper.properties ZooKeeper configuration file and set the following options: quorum.auth.enableSasl=true quorum.auth.learnerRequireSasl=true quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner quorum.auth.server.loginContext=QuorumServer quorum.cnxn.threads.size=20 Restart all ZooKeeper nodes one by one. To pass the JAAS configuration to ZooKeeper, use the KAFKA_OPTS environment variable. 6.3.3.3. Enabling Client-to-server authentication using DIGEST-MD5 This procedure describes how to enable authentication using the SASL DIGEST-MD5 mechanism between ZooKeeper clients and ZooKeeper. Prerequisites AMQ Streams is installed on the host ZooKeeper cluster is configured and running . Enabling SASL DIGEST-MD5 authentication On all ZooKeeper nodes, create or edit the /opt/kafka/config/zookeeper-jaas.conf JAAS configuration file and add the following context: The super automatically has administrator priviledges. The file can contain multiple users, but only one additional user is required by the Kafka brokers. The recommended name for the Kafka user is kafka . The following example shows the Server context for client-to-server authentication: On all ZooKeeper nodes, edit the /opt/kafka/config/zookeeper.properties ZooKeeper configuration file and set the following options: requireClientAuthScheme=sasl authProvider. 
<IdOfBroker1> =org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider. <IdOfBroker2> =org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider. <IdOfBroker3> =org.apache.zookeeper.server.auth.SASLAuthenticationProvider The authProvider. <ID> property has to be added for every node which is part of the ZooKeeper cluster. An example three-node ZooKeeper cluster configuration must look like the following: requireClientAuthScheme=sasl authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider.2=org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider.3=org.apache.zookeeper.server.auth.SASLAuthenticationProvider Restart all ZooKeeper nodes one by one. To pass the JAAS configuration to ZooKeeper, use the KAFKA_OPTS environment variable. 6.3.4. Authorization ZooKeeper supports access control lists (ACLs) to protect data stored inside it. Kafka brokers can automatically configure the ACL rights for all ZooKeeper records they create so no other ZooKeeper user can modify them. For more information about enabling ZooKeeper ACLs in Kafka brokers, see Section 6.4.7, "ZooKeeper authorization" . 6.3.5. TLS ZooKeeper supports TLS for encryption or authentication. 6.3.6. Additional configuration options You can set the following additional ZooKeeper configuration options based on your use case: maxClientCnxns The maximum number of concurrent client connections to a single member of the ZooKeeper cluster. autopurge.snapRetainCount Number of snapshots of ZooKeeper's in-memory database which will be retained. Default value is 3 . autopurge.purgeInterval The time interval in hours for purging snapshots. The default value is 0 and this option is disabled. All available configuration options can be found in the ZooKeeper documentation . 6.4. Configuring Kafka Kafka uses a properties file to store static configuration. The recommended location for the configuration file is /opt/kafka/config/server.properties . The configuration file has to be readable by the kafka user. AMQ Streams ships an example configuration file that highlights various basic and advanced features of the product. It can be found under config/server.properties in the AMQ Streams installation directory. This chapter explains the most important configuration options. 6.4.1. ZooKeeper Kafka brokers need ZooKeeper to store some parts of their configuration as well as to coordinate the cluster (for example to decide which node is a leader for which partition). Connection details for the ZooKeeper cluster are stored in the configuration file. The field zookeeper.connect contains a comma-separated list of hostnames and ports of members of the zookeeper cluster. For example: zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 Kafka will use these addresses to connect to the ZooKeeper cluster. With this configuration, all Kafka znodes will be created directly in the root of ZooKeeper database. Therefore, such a ZooKeeper cluster could be used only for a single Kafka cluster. To configure multiple Kafka clusters to use single ZooKeeper cluster, specify a base (prefix) path at the end of the ZooKeeper connection string in the Kafka configuration file: zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181/my-cluster-1 6.4.2. Listeners Listeners are used to connect to Kafka brokers. Each Kafka broker can be configured to use multiple listeners. 
Each listener requires a different configuration so it can listen on a different port or network interface. To configure listeners, edit the listeners property in the configuration file ( /opt/kafka/config/server.properties ). Add listeners to the listeners property as a comma-separated list. Configure each property as follows: If <hostname> is empty, Kafka uses the java.net.InetAddress.getCanonicalHostName() class as the hostname. Example configuration for multiple listeners When a Kafka client wants to connect to a Kafka cluster, it first connects to the bootstrap server , which is one of the cluster nodes. The bootstrap server provides the client with a list of all the brokers in the cluster, and the client connects to each one individually. The list of brokers is based on the configured listeners . Advertised listeners Optionally, you can use the advertised.listeners property to provide the client with a different set of listener addresses than those given in the listeners property. This is useful if additional network infrastructure, such as a proxy, is between the client and the broker, or an external DNS name is being used instead of an IP address. The advertised.listeners property is formatted in the same way as the listeners property. Example configuration for advertised listeners Note The names of the advertised listeners must match those listed in the listeners property. Inter-broker listeners Inter-broker listeners are used for communication between Kafka brokers. Inter-broker communication is required for: Coordinating workloads between different brokers Replicating messages between partitions stored on different brokers Handling administrative tasks from the controller, such as partition leadership changes The inter-broker listener can be assigned to a port of your choice. When multiple listeners are configured, you can define the name of the inter-broker listener in the inter.broker.listener.name property. Here, the inter-broker listener is named as REPLICATION : Control plane listeners By default, communication between the controller and other brokers uses the inter-broker listener . The controller is responsible for coordinating administrative tasks, such as partition leadership changes. You can enable a dedicated control plane listener for controller connections. The control plane listener can be assigned to a port of your choice. To enable the control plane listener, configure the control.plane.listener.name property with a listener name: Enabling the control plane listener might improve cluster performance because controller communications are not delayed by data replication across brokers. Data replication continues through the inter-broker listener. If control.plane.listener is not configured, controller connections use the inter-broker listener . 6.4.3. Commit logs Apache Kafka stores all records it receives from producers in commit logs. The commit logs contain the actual data, in the form of records, that Kafka needs to deliver. These are not the application log files which record what the broker is doing. Log directories You can configure log directories using the log.dirs property file to store commit logs in one or multiple log directories. It should be set to /var/lib/kafka directory created during installation: For performance reasons, you can configure log.dirs to multiple directories and place each of them on a different physical device to improve disk I/O performance. For example: 6.4.4. 
Broker ID Broker ID is a unique identifier for each broker in the cluster. You can assign an integer greater than or equal to 0 as broker ID. The broker ID is used to identify the brokers after restarts or crashes and it is therefore important that the ID is stable and does not change over time. The broker ID is configured in the broker properties file: 6.4.5. ZooKeeper authentication By default, connections between ZooKeeper and Kafka are not authenticated. However, Kafka and ZooKeeper support Java Authentication and Authorization Service (JAAS) which can be used to set up authentication using Simple Authentication and Security Layer (SASL). ZooKeeper supports authentication using the DIGEST-MD5 SASL mechanism with locally stored credentials. 6.4.5.1. JAAS Configuration SASL authentication for ZooKeeper connections has to be configured in the JAAS configuration file. By default, Kafka will use the JAAS context named Client for connecting to ZooKeeper. The Client context should be configured in the /opt/kafka/config/jaas.conf file. The context has to enable the PLAIN SASL authentication, as in the following example: 6.4.5.2. Enabling ZooKeeper authentication This procedure describes how to enable authentication using the SASL DIGEST-MD5 mechanism when connecting to ZooKeeper. Prerequisites Client-to-server authentication is enabled in ZooKeeper Enabling SASL DIGEST-MD5 authentication On all Kafka broker nodes, create or edit the /opt/kafka/config/jaas.conf JAAS configuration file and add the following context: The username and password should be the same as configured in ZooKeeper. The following example shows the Client context: Restart all Kafka broker nodes one by one. To pass the JAAS configuration to Kafka brokers, use the KAFKA_OPTS environment variable. For information on restarting brokers in a multi-node cluster, see Section 4.3, "Performing a graceful rolling restart of Kafka brokers" . Additional resources Authentication 6.4.6. Authorization Authorization in Kafka brokers is implemented using authorizer plugins. In this section we describe how to use the AclAuthorizer plugin provided with Kafka. Alternatively, you can use your own authorization plugins. For example, if you are using OAuth 2.0 token-based authentication , you can use OAuth 2.0 authorization . 6.4.6.1. Simple ACL authorizer Authorizer plugins, including AclAuthorizer , are enabled through the authorizer.class.name property: authorizer.class.name=kafka.security.auth.AclAuthorizer A fully-qualified name is required for the chosen authorizer. For AclAuthorizer , the fully-qualified name is kafka.security.auth.AclAuthorizer . 6.4.6.1.1. ACL rules AclAuthorizer uses ACL rules to manage access to Kafka brokers. ACL rules are defined in the format: Principal P is allowed / denied operation O on Kafka resource R from host H For example, a rule might be set so that user: John can view the topic comments from host 127.0.0.1 Host is the IP address of the machine that John is connecting from. In most cases, the user is a producer or consumer application: Consumer01 can write to the consumer group accounts from host 127.0.0.1 If ACL rules are not present If ACL rules are not present for a given resource, all actions are denied. This behavior can be changed by setting the property allow.everyone.if.no.acl.found to true in the Kafka configuration file /opt/kafka/config/server.properties . 6.4.6.1.2. Principals A principal represents the identity of a user.
The format of the ID depends on the authentication mechanism used by clients to connect to Kafka: User:ANONYMOUS when connected without authentication. User:<username> when connected using simple authentication mechanisms, such as PLAIN or SCRAM. For example User:admin or User:user1 . User:<DistinguishedName> when connected using TLS client authentication. For example User:CN=user1,O=MyCompany,L=Prague,C=CZ . User:<Kerberos username> when connected using Kerberos. The DistinguishedName is the distinguished name from the client certificate. The Kerberos username is the primary part of the Kerberos principal, which is used by default when connecting using Kerberos. You can use the sasl.kerberos.principal.to.local.rules property to configure how the Kafka principal is built from the Kerberos principal. 6.4.6.1.3. Authentication of users To use authorization, you need to have authentication enabled and used by your clients. Otherwise, all connections will have the principal User:ANONYMOUS . For more information on methods of authentication, see Encryption and authentication . 6.4.6.1.4. Super users Super users are allowed to take all actions regardless of the ACL rules. Super users are defined in the Kafka configuration file using the property super.users . For example: 6.4.6.1.5. Replica broker authentication When authorization is enabled, it is applied to all listeners and all connections. This includes the inter-broker connections used for replication of data between brokers. If enabling authorization, therefore, ensure that you use authentication for inter-broker connections and give the users used by the brokers sufficient rights. For example, if authentication between brokers uses the kafka-broker user, then super user configuration must include the username super.users=User:kafka-broker . 6.4.6.1.6. Supported resources You can apply Kafka ACLs to these types of resource: Topics Consumer groups The cluster TransactionId DelegationToken 6.4.6.1.7. Supported operations AclAuthorizer authorizes operations on resources. Fields with X in the following table mark the supported operations for each resource. Table 6.1. Supported operations for a resource Topics Consumer Groups Cluster Read X X Write X Create X Delete X Alter X Describe X X X ClusterAction X All X X X 6.4.6.1.8. ACL management options ACL rules are managed using the bin/kafka-acls.sh utility, which is provided as part of the Kafka distribution package. Use kafka-acls.sh parameter options to add, list and remove ACL rules, and perform other functions. The parameters require a double-hyphen convention, such as --add . Option Type Description Default add Action Add ACL rule. remove Action Remove ACL rule. list Action List ACL rules. authorizer Action Fully-qualified class name of the authorizer. kafka.security.auth.AclAuthorizer authorizer-properties Configuration Key/value pairs passed to the authorizer for initialization. For AclAuthorizer , the example values are: zookeeper.connect=zoo1.my-domain.com:2181 . bootstrap-server Resource Host/port pairs to connect to the Kafka cluster. Use this option or the authorizer option, not both. command-config Resource Configuration property file to pass to the Admin Client, which is used in conjunction with the bootstrap-server parameter. cluster Resource Specifies a cluster as an ACL resource. topic Resource Specifies a topic name as an ACL resource. An asterisk ( * ) used as a wildcard translates to all topics . A single command can specify multiple --topic options. 
group (Resource): Specifies a consumer group name as an ACL resource. A single command can specify multiple --group options.
transactional-id (Resource): Specifies a transactional ID as an ACL resource. Transactional delivery means that all messages sent by a producer to multiple partitions must be successfully delivered or none of them. An asterisk ( * ) used as a wildcard translates to all IDs .
delegation-token (Resource): Specifies a delegation token as an ACL resource. An asterisk ( * ) used as a wildcard translates to all tokens .
resource-pattern-type (Configuration): Specifies a type of resource pattern for the add parameter or a resource pattern filter value for the list or remove parameters. Use literal or prefixed as the resource pattern type for a resource name. Use any or match as resource pattern filter values, or a specific pattern type filter. Default: literal
allow-principal (Principal): Principal added to an allow ACL rule. A single command can specify multiple --allow-principal options.
deny-principal (Principal): Principal added to a deny ACL rule. A single command can specify multiple --deny-principal options.
principal (Principal): Principal name used with the list parameter to return a list of ACLs for the principal. A single command can specify multiple --principal options.
allow-host (Host): IP address that allows access to the principals listed in --allow-principal . Hostnames or CIDR ranges are not supported. If --allow-principal is specified, defaults to * meaning "all hosts".
deny-host (Host): IP address that denies access to the principals listed in --deny-principal . Hostnames or CIDR ranges are not supported. If --deny-principal is specified, defaults to * meaning "all hosts".
operation (Operation): Allows or denies an operation. A single command can specify multiple --operation options. Default: All
producer (Shortcut): A shortcut to allow or deny all operations needed by a message producer (WRITE and DESCRIBE on topic, CREATE on cluster).
consumer (Shortcut): A shortcut to allow or deny all operations needed by a message consumer (READ and DESCRIBE on topic, READ on consumer group).
idempotent (Shortcut): A shortcut to enable idempotence when used with the --producer parameter, so that messages are delivered exactly once to a partition. Idempotence is enabled automatically if the producer is authorized to send messages based on a specific transactional ID.
force (Shortcut): A shortcut to accept all queries without prompting.
6.4.6.2. Enabling authorization This procedure describes how to enable the AclAuthorizer plugin for authorization in Kafka brokers. Prerequisites AMQ Streams is installed on all hosts used as Kafka brokers. Procedure Edit the /opt/kafka/config/server.properties Kafka configuration file to use the AclAuthorizer . (Re)start the Kafka brokers. 6.4.6.3. Adding ACL rules When using the AclAuthorizer plugin to control access to Kafka brokers based on Access Control Lists (ACLs), you can add new ACL rules using the kafka-acls.sh utility. Prerequisites Users have been created and granted appropriate permissions to access Kafka resources. AMQ Streams is installed on all hosts used as Kafka brokers. Authorization is enabled in Kafka brokers. Procedure Run kafka-acls.sh with the --add option. Examples: Allow user1 and user2 access to read from myTopic using the MyConsumerGroup consumer group.
/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Read --topic myTopic --allow-principal User:user1 --allow-principal User:user2
/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2
/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2
Deny user1 access to read myTopic from IP address 127.0.0.1 .
/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1
Add user1 as the consumer of myTopic with MyConsumerGroup .
/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1
Additional resources kafka-acls.sh options 6.4.6.4. Listing ACL rules When using the AclAuthorizer plugin to control access to Kafka brokers based on Access Control Lists (ACLs), you can list existing ACL rules using the kafka-acls.sh utility. Prerequisites ACLs have been added . Procedure Run kafka-acls.sh with the --list option. For example:
/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic myTopic
Current ACLs for resource `Topic:myTopic`:
User:user1 has Allow permission for operations: Read from hosts: *
User:user2 has Allow permission for operations: Read from hosts: *
User:user2 has Deny permission for operations: Read from hosts: 127.0.0.1
User:user1 has Allow permission for operations: Describe from hosts: *
User:user2 has Allow permission for operations: Describe from hosts: *
User:user2 has Deny permission for operations: Describe from hosts: 127.0.0.1
Additional resources kafka-acls.sh options 6.4.6.5. Removing ACL rules When using the AclAuthorizer plugin to control access to Kafka brokers based on Access Control Lists (ACLs), you can remove existing ACL rules using the kafka-acls.sh utility. Prerequisites ACLs have been added . Procedure Run kafka-acls.sh with the --remove option. Examples: Remove the ACL allowing user1 and user2 access to read from myTopic using the MyConsumerGroup consumer group.
/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Read --topic myTopic --allow-principal User:user1 --allow-principal User:user2
/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2
/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2
Remove the ACL adding user1 as the consumer of myTopic with MyConsumerGroup .
/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1
Remove the ACL denying user1 access to read myTopic from IP address 127.0.0.1 .
/opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1
Additional resources kafka-acls.sh options 6.4.7.
ZooKeeper authorization When authentication is enabled between Kafka and ZooKeeper, you can use ZooKeeper Access Control List (ACL) rules to automatically control access to Kafka's metadata stored in ZooKeeper. 6.4.7.1. ACL Configuration Enforcement of ZooKeeper ACL rules is controlled by the zookeeper.set.acl property in the config/server.properties Kafka configuration file. The property is disabled by default and enabled by setting to true : If ACL rules are enabled, when a znode is created in ZooKeeper only the Kafka user who created it can modify or delete it. All other users have read-only access. Kafka sets ACL rules only for newly created ZooKeeper znodes . If the ACLs are only enabled after the first start of the cluster, the zookeeper-security-migration.sh tool can set ACLs on all existing znodes . Confidentiality of data in ZooKeeper Data stored in ZooKeeper includes: Topic names and their configuration Salted and hashed user credentials when SASL SCRAM authentication is used. But ZooKeeper does not store any records sent and received using Kafka. The data stored in ZooKeeper is assumed to be non-confidential. If the data is to be regarded as confidential (for example because topic names contain customer IDs), the only option available for protection is isolating ZooKeeper on the network level and allowing access only to Kafka brokers. 6.4.7.2. Enabling ZooKeeper ACLs for a new Kafka cluster This procedure describes how to enable ZooKeeper ACLs in Kafka configuration for a new Kafka cluster. Use this procedure only before the first start of the Kafka cluster. For enabling ZooKeeper ACLs in a cluster that is already running, see Section 6.4.7.3, "Enabling ZooKeeper ACLs in an existing Kafka cluster" . Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. ZooKeeper cluster is configured and running . Client-to-server authentication is enabled in ZooKeeper. ZooKeeper authentication is enabled in the Kafka brokers. Kafka brokers have not yet been started. Procedure Edit the /opt/kafka/config/server.properties Kafka configuration file to set the zookeeper.set.acl field to true on all cluster nodes. Start the Kafka brokers. 6.4.7.3. Enabling ZooKeeper ACLs in an existing Kafka cluster This procedure describes how to enable ZooKeeper ACLs in Kafka configuration for a Kafka cluster that is running. Use the zookeeper-security-migration.sh tool to set ZooKeeper ACLs on all existing znodes . The zookeeper-security-migration.sh is available as part of AMQ Streams, and can be found in the bin directory. Prerequisites Kafka cluster is configured and running . Enabling the ZooKeeper ACLs Edit the /opt/kafka/config/server.properties Kafka configuration file to set the zookeeper.set.acl field to true on all cluster nodes. Restart all Kafka brokers one by one. For information on restarting brokers in a multi-node cluster, see Section 4.3, "Performing a graceful rolling restart of Kafka brokers" . Set the ACLs on all existing ZooKeeper znodes using the zookeeper-security-migration.sh tool. For example: 6.4.8. Encryption and authentication AMQ Streams supports encryption and authentication, which is configured as part of the listener configuration. 6.4.8.1. Listener configuration Encryption and authentication in Kafka brokers is configured per listener. For more information about Kafka listener configuration, see Section 6.4.2, "Listeners" . Each listener in the Kafka broker is configured with its own security protocol. 
The configuration property listener.security.protocol.map defines which listener uses which security protocol. It maps each listener name to its security protocol. Supported security protocols are: PLAINTEXT Listener without any encryption or authentication. SSL Listener using TLS encryption and, optionally, authentication using TLS client certificates. SASL_PLAINTEXT Listener without encryption but with SASL-based authentication. SASL_SSL Listener with TLS-based encryption and SASL-based authentication. Given the following listeners configuration: the listener.security.protocol.map might look like this: This would configure the listener INT1 to use unencrypted connections with SASL authentication, the listener INT2 to use encrypted connections with SASL authentication and the REPLICATION interface to use TLS encryption (possibly with TLS client authentication). The same security protocol can be used multiple times. The following example is also a valid configuration: Such a configuration would use TLS encryption and TLS authentication for all interfaces. The following chapters will explain in more detail how to configure TLS and SASL. 6.4.8.2. TLS Encryption Kafka supports TLS for encrypting communication with Kafka clients. In order to use TLS encryption and server authentication, a keystore containing private and public keys has to be provided. This is usually done using a file in the Java Keystore (JKS) format. A path to this file is set in the ssl.keystore.location property. The ssl.keystore.password property should be used to set the password protecting the keystore. For example: In some cases, an additional password is used to protect the private key. Any such password can be set using the ssl.key.password property. Kafka is able to use keys signed by certification authorities as well as self-signed keys. Using keys signed by certification authorities should always be the preferred method. In order to allow clients to verify the identity of the Kafka broker they are connecting to, the certificate should always contain the advertised hostname(s) as its Common Name (CN) or in the Subject Alternative Names (SAN). It is possible to use different SSL configurations for different listeners. All options starting with ssl. can be prefixed with listener.name.<NameOfTheListener>. , where the name of the listener has to be always in lower case. This will override the default SSL configuration for that specific listener. The following example shows how to use different SSL configurations for different listeners: Additional TLS configuration options In addition to the main TLS configuration options described above, Kafka supports many options for fine-tuning the TLS configuration. For example, to enable or disable TLS / SSL protocols or cipher suites: ssl.cipher.suites List of enabled cipher suites. Each cipher suite is a combination of authentication, encryption, MAC and key exchange algorithms used for the TLS connection. By default, all available cipher suites are enabled. ssl.enabled.protocols List of enabled TLS / SSL protocols. Defaults to TLSv1.2,TLSv1.1,TLSv1 . 6.4.8.3. Enabling TLS encryption This procedure describes how to enable encryption in Kafka brokers. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. Procedure Generate TLS certificates for all Kafka brokers in your cluster. The certificates should have their advertised and bootstrap addresses in their Common Name or Subject Alternative Name. 
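For illustration only, a self-signed broker certificate with a SAN entry could be generated with keytool along the following lines; the alias, hostname, keystore path, and passwords are placeholder values, and in production you would normally have the certificate signed by a certification authority instead:
keytool -genkeypair -alias kafka-broker -keyalg RSA -validity 365 \
  -dname "CN=kafka1.example.com" -ext SAN=DNS:kafka1.example.com \
  -keystore /opt/kafka/config/broker.keystore.jks -storepass changeit -keypass changeit
The resulting keystore path and password are then referenced by the ssl.keystore.location and ssl.keystore.password options in the next step.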
Edit the /opt/kafka/config/server.properties Kafka configuration file on all cluster nodes for the following: Change the listener.security.protocol.map field to specify the SSL protocol for the listener where you want to use TLS encryption. Set the ssl.keystore.location option to the path to the JKS keystore with the broker certificate. Set the ssl.keystore.password option to the password you used to protect the keystore. For example: (Re)start the Kafka brokers 6.4.8.4. Authentication For authentication, you can use: TLS client authentication based on X.509 certificates on encrypted connections A supported Kafka SASL (Simple Authentication and Security Layer) mechanism OAuth 2.0 token based authentication 6.4.8.4.1. TLS client authentication TLS client authentication can be used only on connections which are already using TLS encryption. To use TLS client authentication, a truststore with public keys can be provided to the broker. These keys can be used to authenticate clients connecting to the broker. The truststore should be provided in Java Keystore (JKS) format and should contain public keys of the certification authorities. All clients with public and private keys signed by one of the certification authorities included in the truststore will be authenticated. The location of the truststore is set using field ssl.truststore.location . In case the truststore is password protected, the password should be set in the ssl.truststore.password property. For example: Once the truststore is configured, TLS client authentication has to be enabled using the ssl.client.auth property. This property can be set to one of three different values: none TLS client authentication is switched off. (Default value) requested TLS client authentication is optional. Clients will be asked to authenticate using TLS client certificate but they can choose not to. required Clients are required to authenticate using TLS client certificate. When a client authenticates using TLS client authentication, the authenticated principal name is the distinguished name from the authenticated client certificate. For example, a user with a certificate which has a distinguished name CN=someuser will be authenticated with the following principal CN=someuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown . When TLS client authentication is not used and SASL is disabled, the principal name will be ANONYMOUS . 6.4.8.4.2. SASL authentication SASL authentication is configured using Java Authentication and Authorization Service (JAAS). JAAS is also used for authentication of connections between Kafka and ZooKeeper. JAAS uses its own configuration file. The recommended location for this file is /opt/kafka/config/jaas.conf . The file has to be readable by the kafka user. When running Kafka, the location of this file is specified using Java system property java.security.auth.login.config . This property has to be passed to Kafka when starting the broker nodes: SASL authentication is supported both through plain unencrypted connections as well as through TLS connections. SASL can be enabled individually for each listener. To enable it, the security protocol in listener.security.protocol.map has to be either SASL_PLAINTEXT or SASL_SSL . SASL authentication in Kafka supports several different mechanisms: PLAIN Implements authentication based on username and passwords. Usernames and passwords are stored locally in Kafka configuration. 
SCRAM-SHA-256 and SCRAM-SHA-512 Implements authentication using Salted Challenge Response Authentication Mechanism (SCRAM). SCRAM credentials are stored centrally in ZooKeeper. SCRAM can be used in situations where ZooKeeper cluster nodes are running isolated in a private network. GSSAPI Implements authentication against a Kerberos server. Warning The PLAIN mechanism sends the username and password over the network in an unencrypted format. It should therefore only be used in combination with TLS encryption. The SASL mechanisms are configured via the JAAS configuration file. Kafka uses the JAAS context named KafkaServer . After they are configured in JAAS, the SASL mechanisms have to be enabled in the Kafka configuration. This is done using the sasl.enabled.mechanisms property. This property contains a comma-separated list of enabled mechanisms: In case the listener used for inter-broker communication is using SASL, the property sasl.mechanism.inter.broker.protocol has to be used to specify the SASL mechanism which it should use. For example: The username and password which will be used for the inter-broker communication have to be specified in the KafkaServer JAAS context using the username and password fields. SASL PLAIN To use the PLAIN mechanism, the usernames and passwords which are allowed to connect are specified directly in the JAAS context. The following example shows the context configured for SASL PLAIN authentication. The example configures three different users: admin user1 user2 The JAAS configuration file with the user database should be kept in sync on all Kafka brokers. When SASL PLAIN is also used for inter-broker authentication, the username and password properties should be included in the JAAS context: SASL SCRAM SCRAM authentication in Kafka consists of two mechanisms: SCRAM-SHA-256 and SCRAM-SHA-512 . These mechanisms differ only in the hashing algorithm used - SHA-256 versus stronger SHA-512. To enable SCRAM authentication, the JAAS configuration file has to include the following configuration: When enabling SASL authentication in the Kafka configuration file, both SCRAM mechanisms can be listed. However, only one of them can be chosen for the inter-broker communication. For example: User credentials for the SCRAM mechanism are stored in ZooKeeper. The kafka-configs.sh tool can be used to manage them. For example, run the following command to add user user1 with password 123456: To delete a user credential use: SASL GSSAPI The SASL mechanism used for authentication using Kerberos is called GSSAPI . To configure Kerberos SASL authentication, the following configuration should be added to the JAAS configuration file: The domain name in the Kerberos principal has to be always in upper case. In addition to the JAAS configuration, the Kerberos service name needs to be specified in the sasl.kerberos.service.name property in the Kafka configuration: Multiple SASL mechanisms Kafka can use multiple SASL mechanisms at the same time. The different JAAS configurations can all be added to the same context: When multiple mechanisms are enabled, clients will be able to choose the mechanism which they want to use. 6.4.8.5. Enabling TLS client authentication This procedure describes how to enable TLS client authentication in Kafka brokers. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. TLS encryption is enabled . Procedure Prepare a JKS truststore containing the public key of the certification authority used to sign the user certificates.
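As a sketch of this step, importing a CA certificate into a JKS truststore with keytool might look like the following; the file names, alias, and password are assumptions rather than values from the original procedure:
keytool -importcert -alias ca-cert -file /tmp/ca.crt \
  -keystore /opt/kafka/config/client-auth.truststore.jks -storepass changeit -noprompt
The truststore path and password then go into the ssl.truststore.location and ssl.truststore.password options in the next step.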
Edit the /opt/kafka/config/server.properties Kafka configuration file on all cluster nodes for the following: Set the ssl.truststore.location option to the path to the JKS truststore with the certification authority of the user certificates. Set the ssl.truststore.password option to the password you used to protect the truststore. Set the ssl.client.auth option to required . For example: (Re)start the Kafka brokers 6.4.8.6. Enabling SASL PLAIN authentication This procedure describes how to enable SASL PLAIN authentication in Kafka brokers. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. Procedure Edit or create the /opt/kafka/config/jaas.conf JAAS configuration file. This file should contain all your users and their passwords. Make sure this file is the same on all Kafka brokers. For example: Edit the /opt/kafka/config/server.properties Kafka configuration file on all cluster nodes for the following: Change the listener.security.protocol.map field to specify the SASL_PLAINTEXT or SASL_SSL protocol for the listener where you want to use SASL PLAIN authentication. Set the sasl.enabled.mechanisms option to PLAIN . For example: (Re)start the Kafka brokers using the KAFKA_OPTS environment variable to pass the JAAS configuration to Kafka brokers. 6.4.8.7. Enabling SASL SCRAM authentication This procedure describes how to enable SASL SCRAM authentication in Kafka brokers. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. Procedure Edit or create the /opt/kafka/config/jaas.conf JAAS configuration file. Enable the ScramLoginModule for the KafkaServer context. Make sure this file is the same on all Kafka brokers. For example: Edit the /opt/kafka/config/server.properties Kafka configuration file on all cluster nodes for the following: Change the listener.security.protocol.map field to specify the SASL_PLAINTEXT or SASL_SSL protocol for the listener where you want to use SASL SCRAM authentication. Set the sasl.enabled.mechanisms option to SCRAM-SHA-256 or SCRAM-SHA-512 . For example: (Re)start the Kafka brokers using the KAFKA_OPTS environment variable to pass the JAAS configuration to Kafka brokers. Additional resources Adding SASL SCRAM users Deleting SASL SCRAM users 6.4.8.8. Adding SASL SCRAM users This procedure describes how to add new users for authentication using SASL SCRAM. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. SASL SCRAM authentication is enabled . Procedure Use the kafka-configs.sh tool to add new SASL SCRAM users. bin/kafka-configs.sh --bootstrap-server <broker_address> --alter --add-config 'SCRAM-SHA-512=[password= <Password> ]' --entity-type users --entity-name <Username> For example: 6.4.8.9. Deleting SASL SCRAM users This procedure describes how to remove users when using SASL SCRAM authentication. Prerequisites AMQ Streams is installed on all hosts which will be used as Kafka brokers. SASL SCRAM authentication is enabled . Procedure Use the kafka-configs.sh tool to delete SASL SCRAM users. /opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name <Username> For example: 6.4.9. Using OAuth 2.0 token-based authentication AMQ Streams supports the use of OAuth 2.0 authentication using the OAUTHBEARER and PLAIN mechanisms. 
OAuth 2.0 enables standardized token-based authentication and authorization between applications, using a central authorization server to issue tokens that grant limited access to resources. You can configure OAuth 2.0 authentication, then OAuth 2.0 authorization . Kafka brokers and clients both need to be configured to use OAuth 2.0. OAuth 2.0 authentication can also be used in conjunction with simple or OPA-based Kafka authorization. Using OAuth 2.0 authentication, application clients can access resources on application servers (called resource servers ) without exposing account credentials. The application client passes an access token as a means of authenticating, which application servers can also use to determine the level of access to grant. The authorization server handles the granting of access and inquiries about access. In the context of AMQ Streams: Kafka brokers act as OAuth 2.0 resource servers Kafka clients act as OAuth 2.0 application clients Kafka clients authenticate to Kafka brokers. The brokers and clients communicate with the OAuth 2.0 authorization server, as necessary, to obtain or validate access tokens. For a deployment of AMQ Streams, OAuth 2.0 integration provides: Server-side OAuth 2.0 support for Kafka brokers Client-side OAuth 2.0 support for Kafka MirrorMaker, Kafka Connect, and the Kafka Bridge AMQ Streams on RHEL includes two OAuth 2.0 libraries: kafka-oauth-client Provides a custom login callback handler class named io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler . To handle the OAUTHBEARER authentication mechanism, use the login callback handler with the OAuthBearerLoginModule provided by Apache Kafka. kafka-oauth-common A helper library that provides some of the functionality needed by the kafka-oauth-client library. The provided client libraries also have dependencies on some additional third-party libraries, such as: keycloak-core , jackson-databind , and slf4j-api . We recommend using a Maven project to package your client to ensure that all the dependency libraries are included. Dependency libraries might change in future versions. Additional resources OAuth 2.0 site 6.4.9.1. OAuth 2.0 authentication mechanisms AMQ Streams supports the OAUTHBEARER and PLAIN mechanisms for OAuth 2.0 authentication. Both mechanisms allow Kafka clients to establish authenticated sessions with Kafka brokers. The authentication flow between clients, the authorization server, and Kafka brokers is different for each mechanism. We recommend that you configure clients to use OAUTHBEARER whenever possible. OAUTHBEARER provides a higher level of security than PLAIN because client credentials are never shared with Kafka brokers. Consider using PLAIN only with Kafka clients that do not support OAUTHBEARER. You configure Kafka broker listeners to use OAuth 2.0 authentication for connecting clients. If necessary, you can use the OAUTHBEARER and PLAIN mechanisms on the same oauth listener. The properties to support each mechanism must be explicitly specified in the oauth listener configuration. OAUTHBEARER overview To use OAUTHBEARER, set sasl.enabled.mechanisms to OAUTHBEARER in the OAuth authentication listener configuration for the Kafka broker. For detailed configuration, see Section 6.4.9.2, "OAuth 2.0 Kafka broker configuration" . listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER Many Kafka client tools use libraries that provide basic support for OAUTHBEARER at the protocol level. 
To support application development, AMQ Streams provides an OAuth callback handler for the upstream Kafka Client Java libraries (but not for other libraries). Therefore, you do not need to write your own callback handlers. An application client can use the callback handler to provide the access token. Clients written in other languages, such as Go, must use custom code to connect to the authorization server and obtain the access token. With OAUTHBEARER, the client initiates a session with the Kafka broker for credentials exchange, where credentials take the form of a bearer token provided by the callback handler. Using the callbacks, you can configure token provision in one of three ways: Client ID and Secret (by using the OAuth 2.0 client credentials mechanism) A long-lived access token, obtained manually at configuration time A long-lived refresh token, obtained manually at configuration time Note OAUTHBEARER authentication can only be used by Kafka clients that support the OAUTHBEARER mechanism at the protocol level. PLAIN overview To use PLAIN, add PLAIN to the value of sasl.enabled.mechanisms . listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER,PLAIN PLAIN is a simple authentication mechanism used by all Kafka client tools. To enable PLAIN to be used with OAuth 2.0 authentication, AMQ Streams provides OAuth 2.0 over PLAIN server-side callbacks. With the AMQ Streams implementation of PLAIN, the client credentials are not stored in ZooKeeper. Instead, client credentials are handled centrally behind a compliant authorization server, similar to when OAUTHBEARER authentication is used. When used with the OAuth 2.0 over PLAIN callbacks, Kafka clients authenticate with Kafka brokers using either of the following methods: Client ID and secret (by using the OAuth 2.0 client credentials mechanism) A long-lived access token, obtained manually at configuration time For both methods, the client must provide the PLAIN username and password properties to pass credentials to the Kafka broker. The client uses these properties to pass a client ID and secret or username and access token. Client IDs and secrets are used to obtain access tokens. Access tokens are passed as password property values. You pass the access token with or without an $accessToken: prefix. If you configure a token endpoint ( oauth.token.endpoint.uri ) in the listener configuration, you need the prefix. If you don't configure a token endpoint ( oauth.token.endpoint.uri ) in the listener configuration, you don't need the prefix. The Kafka broker interprets the password as a raw access token. If the password is set as the access token, the username must be set to the same principal name that the Kafka broker obtains from the access token. You can specify username extraction options in your listener using the oauth.username.claim , oauth.fallback.username.claim , oauth.fallback.username.prefix , and oauth.userinfo.endpoint.uri properties. The username extraction process also depends on your authorization server; in particular, how it maps client IDs to account names. Note OAuth over PLAIN does not support passing a username and password (password grants) using the (deprecated) OAuth 2.0 password grant mechanism. 6.4.9.1.1. Configuring OAuth 2.0 with properties or variables You can configure OAuth 2.0 settings using Java Authentication and Authorization Service (JAAS) properties or environment variables.
JAAS properties are configured in the server.properties configuration file, and passed as key-value pairs of the listener.name. LISTENER-NAME .oauthbearer.sasl.jaas.config property. If using environment variables, you still need to provide the listener.name. LISTENER-NAME .oauthbearer.sasl.jaas.config property in the server.properties file, but you can omit the other JAAS properties. You can use capitalized or upper-case environment variable naming conventions. The AMQ Streams OAuth 2.0 libraries use properties that start with: oauth. to configure authentication strimzi. to configure OAuth 2.0 authorization Additional resources OAuth 2.0 Kafka broker configuration 6.4.9.2. OAuth 2.0 Kafka broker configuration Kafka broker configuration for OAuth 2.0 authentication involves: Creating the OAuth 2.0 client in the authorization server Configuring OAuth 2.0 authentication in the Kafka cluster Note In relation to the authorization server, Kafka brokers and Kafka clients are both regarded as OAuth 2.0 clients. 6.4.9.2.1. OAuth 2.0 client configuration on an authorization server To configure a Kafka broker to validate the token received during session initiation, the recommended approach is to create an OAuth 2.0 client definition in an authorization server, configured as confidential , with the following client credentials enabled: Client ID of kafka-broker (for example) Client ID and secret as the authentication mechanism Note You only need to use a client ID and secret when using a non-public introspection endpoint of the authorization server. The credentials are not typically required when using public authorization server endpoints, as with fast local JWT token validation. 6.4.9.2.2. OAuth 2.0 authentication configuration in the Kafka cluster To use OAuth 2.0 authentication in the Kafka cluster, you enable an OAuth authentication listener configuration for your Kafka cluster, in the Kafka server.properties file. A minimum configuration is required. You can also configure a TLS listener, where TLS is used for inter-broker communication. You can configure the broker for token validation by the authorization server using one of the following methods: Fast local token validation: a JWKS endpoint in combination with signed JWT-formatted access tokens Introspection endpoint You can configure OAUTHBEARER or PLAIN authentication, or both. The following example shows a minimum configuration that applies a global listener configuration, which means that inter-broker communication goes through the same listener as application clients. The example also shows an OAuth 2.0 configuration for a specific listener, where you specify listener.name. LISTENER-NAME .sasl.enabled.mechanisms instead of sasl.enabled.mechanisms . LISTENER-NAME is the case-insensitive name of the listener. Here, we name the listener CLIENT , so the property name is listener.name.client.sasl.enabled.mechanisms . The example uses OAUTHBEARER authentication.
Example: Minimum listener configuration for OAuth 2.0 authentication using a JWKS endpoint sasl.enabled.mechanisms=OAUTHBEARER 1 listeners=CLIENT://0.0.0.0:9092 2 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT 3 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER 4 sasl.mechanism.inter.broker.protocol=OAUTHBEARER 5 inter.broker.listener.name=CLIENT 6 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler 7 listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ 8 oauth.valid.issuer.uri="https://<oauth_server_address>" \ 9 oauth.jwks.endpoint.uri="https://<oauth_server_address>/jwks" \ 10 oauth.username.claim="preferred_username" \ 11 oauth.client.id="kafka-broker" \ 12 oauth.client.secret="kafka-secret" \ 13 oauth.token.endpoint.uri="https://<oauth_server_address>/token" ; 14 listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 15 listener.name.client.oauthbearer.connections.max.reauth.ms=3600000 16 1 Enables the OAUTHBEARER mechanism for credentials exchange over SASL. 2 Configures a listener for client applications to connect to. The system hostname is used as an advertised hostname, which clients must resolve in order to reconnect. The listener is named CLIENT in this example. 3 Specifies the channel protocol for the listener. SASL_SSL is for TLS. SASL_PLAINTEXT is used for an unencrypted connection (no TLS), but there is risk of eavesdropping and interception at the TCP connection layer. 4 Specifies the OAUTHBEARER mechanism for the CLIENT listener. The client name ( CLIENT ) is usually specified in uppercase in the listeners property, in lowercase for listener.name properties ( listener.name.client ), and in lowercase when part of a listener.name. client .* property. 5 Specifies the OAUTHBEARER mechanism for inter-broker communication. 6 Specifies the listener for inter-broker communication. The specification is required for the configuration to be valid. 7 Configures OAuth 2.0 authentication on the client listener. 8 Configures authentication settings for client and inter-broker communication. The oauth.client.id , oauth.client.secret , and auth.token.endpoint.uri properties relate to inter-broker configuration. 9 A valid issuer URI. Only access tokens issued by this issuer will be accepted. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME . 10 The JWKS endpoint URL. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/certs . 11 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The value will depend on the authentication flow and the authorization server used. If required, you can use a JsonPath expression like "['user.info'].['user.id']" to retrieve the username from nested JSON attributes within a token. 12 Client ID of the Kafka broker, which is the same for all brokers. This is the client registered with the authorization server as kafka-broker . 13 Secret for the Kafka broker, which is the same for all brokers. 14 The OAuth 2.0 token endpoint URL to your authorization server. For production, always use https:// urls. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token . 15 Enables (and is only required for) OAuth 2.0 authentication for inter-broker communication. 
16 (Optional) Enforces session expiry when a token expires, and also activates the Kafka re-authentication mechanism . If the specified value is less than the time left for the access token to expire, then the client will have to re-authenticate before the actual token expiry. By default, the session does not expire when the access token expires, and the client does not attempt re-authentication. The following example shows a minimum configuration for a TLS listener, where TLS is used for inter-broker communication. Example: TLS listener configuration for OAuth 2.0 authentication listeners=REPLICATION://kafka:9091,CLIENT://kafka:9092 1 listener.security.protocol.map=REPLICATION:SSL,CLIENT:SASL_PLAINTEXT 2 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER inter.broker.listener.name=REPLICATION listener.name.replication.ssl.keystore.password=<keystore_password> 3 listener.name.replication.ssl.truststore.password=<truststore_password> listener.name.replication.ssl.keystore.type=JKS listener.name.replication.ssl.truststore.type=JKS listener.name.replication.ssl.secure.random.implementation=SHA1PRNG 4 listener.name.replication.ssl.endpoint.identification.algorithm=HTTPS 5 listener.name.replication.ssl.keystore.location=<path_to_keystore> 6 listener.name.replication.ssl.truststore.location=<path_to_truststore> 7 listener.name.replication.ssl.client.auth=required 8 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ 9 oauth.valid.issuer.uri="https://<oauth_server_address>" \ oauth.jwks.endpoint.uri="https://<oauth_server_address>/jwks" \ oauth.username.claim="preferred_username" ; 1 Separate configurations are required for inter-broker communication and client applications. 2 Configures the REPLICATION listener to use TLS, and the CLIENT listener to use SASL over an unencrypted channel. The client could use an encrypted channel ( SASL_SSL ) in a production environment. 3 The ssl. properties define the TLS configuration. 4 Random number generator implementation. If not set, the Java platform SDK default is used. 5 Hostname verification. If set to an empty string, the hostname verification is turned off. If not set, the default value is HTTPS , which enforces hostname verification for server certificates. 6 Path to the keystore for the listener. 7 Path to the truststore for the listener. 8 Specifies that clients of the REPLICATION listener have to authenticate with a client certificate when establishing a TLS connection (used for inter-broker connectivity). 9 Configures the CLIENT listener for OAuth 2.0. Connectivity with the authorization server should use secure HTTPS connections. The following example shows a minimum configuration for OAuth 2.0 authentication using the PLAIN authentication mechanism for credentials exchange over SASL. Fast local token validation is used. 
Example: Minimum listener configuration for PLAIN authentication listeners=CLIENT://0.0.0.0:9092 1 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT 2 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER,PLAIN 3 sasl.mechanism.inter.broker.protocol=OAUTHBEARER 4 inter.broker.listener.name=CLIENT 5 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler 6 listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ 7 oauth.valid.issuer.uri="http://<auth_server>/auth/realms/<realm>" \ 8 oauth.jwks.endpoint.uri="https://<auth_server>/auth/realms/<realm>/protocol/openid-connect/certs" \ 9 oauth.username.claim="preferred_username" \ 10 oauth.client.id="kafka-broker" \ 11 oauth.client.secret="kafka-secret" \ 12 oauth.token.endpoint.uri="https://<oauth_server_address>/token" ; 13 listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 14 listener.name.client.plain.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.plain.JaasServerOauthOverPlainValidatorCallbackHandler 15 listener.name.client.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \ 16 oauth.valid.issuer.uri="https://<oauth_server_address>" \ 17 oauth.jwks.endpoint.uri="https://<oauth_server_address>/jwks" \ 18 oauth.username.claim="preferred_username" \ 19 oauth.token.endpoint.uri="http://<auth_server>/auth/realms/<realm>/protocol/openid-connect/token" ; 20 connections.max.reauth.ms=3600000 21 1 Configures a listener (named CLIENT in this example) for client applications to connect to. The system hostname is used as an advertised hostname, which clients must resolve in order to reconnect. Because this is the only configured listener, it is also used for inter-broker communication. 2 Configures the example CLIENT listener to use SASL over an unencrypted channel. In a production environment, the client should use an encrypted channel ( SASL_SSL ) in order to guard against eavesdropping and interception at the TCP connection layer. 3 Enables the PLAIN authentication mechanism for credentials exchange over SASL as well as OAUTHBEARER . OAUTHBEARER is also specified because it is required for inter-broker communication. Kafka clients can choose which mechanism to use to connect. 4 Specifies the OAUTHBEARER authentication mechanism for inter-broker communication. 5 Specifies the listener (named CLIENT in this example) for inter-broker communication. Required for the configuration to be valid. 6 Configures the server callback handler for the OAUTHBEARER mechanism. 7 Configures authentication settings for client and inter-broker communication using the OAUTHBEARER mechanism. The oauth.client.id , oauth.client.secret , and oauth.token.endpoint.uri properties relate to inter-broker configuration. 8 A valid issuer URI. Only access tokens from this issuer are accepted. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME 9 The JWKS endpoint URL. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/certs 10 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The value will depend on the authentication flow and the authorization server used. 
If required, you can use a JsonPath expression like "['user.info'].['user.id']" to retrieve the username from nested JSON attributes within a token. 11 Client ID of the Kafka broker, which is the same for all brokers. This is the client registered with the authorization server as kafka-broker . 12 Secret for the Kafka broker (the same for all brokers). 13 The OAuth 2.0 token endpoint URL to your authorization server. For production, always use https:// urls. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token 14 Enables OAuth 2.0 authentication for inter-broker communication. 15 Configures the server callback handler for PLAIN authentication. 16 Configures authentication settings for client communication using PLAIN authentication. oauth.token.endpoint.uri is an optional property that enables OAuth 2.0 over PLAIN using the OAuth 2.0 client credentials mechanism . 17 A valid issuer URI. Only access tokens from this issuer are accepted. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME 18 The JWKS endpoint URL. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/certs 19 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The value will depend on the authentication flow and the authorization server used. If required, you can use a JsonPath expression like "['user.info'].['user.id']" to retrieve the username from nested JSON attributes within a token. 20 The OAuth 2.0 token endpoint URL to your authorization server. Additional configuration for the PLAIN mechanism. If specified, clients can authenticate over PLAIN by passing an access token as the password using an $accessToken: prefix. For production, always use https:// urls. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token . 21 (Optional) Enforces session expiry when a token expires, and also activates the Kafka re-authentication mechanism . If the specified value is less than the time left for the access token to expire, then the client will have to re-authenticate before the actual token expiry. By default, the session does not expire when the access token expires, and the client does not attempt re-authentication. 6.4.9.2.3. Fast local JWT token validation configuration Fast local JWT token validation checks a JWT token signature locally. The local check ensures that a token: Conforms to type by containing a ( typ ) claim value of Bearer for an access token Is valid (not expired) Has an issuer that matches a validIssuerURI You specify a valid issuer URI when you configure the listener, so that any tokens not issued by the authorization server are rejected. The authorization server does not need to be contacted during fast local JWT token validation. You activate fast local JWT token validation by specifying a JWKs endpoint URI exposed by the OAuth 2.0 authorization server. The endpoint contains the public keys used to validate signed JWT tokens, which are sent as credentials by Kafka clients. Note All communication with the authorization server should be performed using HTTPS. For a TLS listener, you can configure a certificate truststore and point to the truststore file.
Example properties for fast local JWT token validation listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.valid.issuer.uri="https://<oauth_server_address>" \ 1 oauth.jwks.endpoint.uri="https://<oauth_server_address>/jwks" \ 2 oauth.jwks.refresh.seconds="300" \ 3 oauth.jwks.refresh.min.pause.seconds="1" \ 4 oauth.jwks.expiry.seconds="360" \ 5 oauth.username.claim="preferred_username" \ 6 oauth.ssl.truststore.location="<path_to_truststore_p12_file>" \ 7 oauth.ssl.truststore.password="<truststore_password>" \ 8 oauth.ssl.truststore.type="PKCS12" ; 9 1 A valid issuer URI. Only access tokens issued by this issuer will be accepted. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME . 2 The JWKS endpoint URL. For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/certs . 3 The period between endpoint refreshes (default 300). 4 The minimum pause in seconds between consecutive attempts to refresh JWKS public keys. When an unknown signing key is encountered, the JWKS keys refresh is scheduled outside the regular periodic schedule with at least the specified pause since the last refresh attempt. The refreshing of keys follows the rule of exponential backoff, retrying on unsuccessful refreshes with ever increasing pause, until it reaches oauth.jwks.refresh.seconds . The default value is 1. 5 The duration the JWKs certificates are considered valid before they expire. Default is 360 seconds. If you specify a longer time, consider the risk of allowing access to revoked certificates. 6 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The value will depend on the authentication flow and the authorization server used. If required, you can use a JsonPath expression like "['user.info'].['user.id']" to retrieve the username from nested JSON attributes within a token. 7 The location of the truststore used in the TLS configuration. 8 Password to access the truststore. 9 The truststore type in PKCS #12 format. 6.4.9.2.4. OAuth 2.0 introspection endpoint configuration Token validation using an OAuth 2.0 introspection endpoint treats a received access token as opaque. The Kafka broker sends an access token to the introspection endpoint, which responds with the token information necessary for validation. Importantly, it returns up-to-date information if the specific access token is valid, and also information about when the token expires. To configure OAuth 2.0 introspection-based validation, you specify an introspection endpoint URI rather than the JWKs endpoint URI specified for fast local JWT token validation. Depending on the authorization server, you typically have to specify a client ID and client secret , because the introspection endpoint is usually protected. Example properties for an introspection endpoint listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.introspection.endpoint.uri="https://<oauth_server_address>/introspection" \ 1 oauth.client.id="kafka-broker" \ 2 oauth.client.secret="kafka-broker-secret" \ 3 oauth.ssl.truststore.location="<path_to_truststore_p12_file>" \ 4 oauth.ssl.truststore.password="<truststore_password>" \ 5 oauth.ssl.truststore.type="PKCS12" \ 6 oauth.username.claim="preferred_username" ; 7 1 The OAuth 2.0 introspection endpoint URI. 
For example, https://AUTH-SERVER-ADDRESS/auth/realms/REALM-NAME/protocol/openid-connect/token/introspect . 2 Client ID of the Kafka broker. 3 Secret for the Kafka broker. 4 The location of the truststore used in the TLS configuration. 5 Password to access the truststore. 6 The truststore type in PKCS #12 format. 7 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The value will depend on the authentication flow and the authorization server used. If required, you can use a JsonPath expression like "['user.info'].['user.id']" to retrieve the username from nested JSON attributes within a token. 6.4.9.3. Session re-authentication for Kafka brokers You can configure OAuth listeners to use Kafka session re-authentication for OAuth 2.0 sessions between Kafka clients and Kafka brokers. This mechanism enforces the expiry of an authenticated session between the client and the broker after a defined period of time. When a session expires, the client immediately starts a new session by reusing the existing connection rather than dropping it. Session re-authentication is disabled by default. You can enable it in the server.properties file. Set the connections.max.reauth.ms property for a TLS listener with OAUTHBEARER or PLAIN enabled as the SASL mechanism. You can specify session re-authentication per listener. For example: Session re-authentication must be supported by the Kafka client libraries used by the client. Session re-authentication can be used with fast local JWT or introspection endpoint token validation. Client re-authentication When the broker's authenticated session expires, the client must re-authenticate to the existing session by sending a new, valid access token to the broker, without dropping the connection. If token validation is successful, a new client session is started using the existing connection. If the client fails to re-authenticate, the broker will close the connection if further attempts are made to send or receive messages. Java clients that use Kafka client library 2.2 or later automatically re-authenticate if the re-authentication mechanism is enabled on the broker. Session re-authentication also applies to refresh tokens, if used. When the session expires, the client refreshes the access token by using its refresh token. The client then uses the new access token to re-authenticate over the existing connection. Session expiry for OAUTHBEARER and PLAIN When session re-authentication is configured, session expiry works differently for OAUTHBEARER and PLAIN authentication. For OAUTHBEARER and PLAIN, using the client ID and secret method: The broker's authenticated session will expire at the configured connections.max.reauth.ms . The session will expire earlier if the access token expires before the configured time. For PLAIN using the long-lived access token method: The broker's authenticated session will expire at the configured connections.max.reauth.ms . Re-authentication will fail if the access token expires before the configured time. Although session re-authentication is attempted, PLAIN has no mechanism for refreshing tokens. If connections.max.reauth.ms is not configured, OAUTHBEARER and PLAIN clients can remain connected to brokers indefinitely, without needing to re-authenticate. Authenticated sessions do not end with access token expiry. However, this can be considered when configuring authorization, for example, by using keycloak authorization or installing a custom authorizer. 
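As an illustration of the per-listener setting mentioned above, reusing the CLIENT listener name and the one-hour value from the earlier OAUTHBEARER example (both are examples rather than required values):
listener.name.client.oauthbearer.connections.max.reauth.ms=3600000
The mechanism segment of the property name (oauthbearer here) matches the SASL mechanism enabled on that listener, so a listener using PLAIN would scope the property to plain instead.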
Additional resources OAuth 2.0 Kafka broker configuration Configuring OAuth 2.0 support for Kafka brokers KIP-368: Allow SASL Connections to Periodically Re-Authenticate 6.4.9.4. OAuth 2.0 Kafka client configuration A Kafka client is configured with either: The credentials required to obtain a valid access token from an authorization server (client ID and Secret) A valid long-lived access token or refresh token, obtained using tools provided by an authorization server The only information ever sent to the Kafka broker is an access token. The credentials used to authenticate with the authorization server to obtain the access token are never sent to the broker. When a client obtains an access token, no further communication with the authorization server is needed. The simplest mechanism is authentication with a client ID and Secret. Using a long-lived access token, or a long-lived refresh token, adds more complexity because there is an additional dependency on authorization server tools. Note If you are using long-lived access tokens, you may need to configure the client in the authorization server to increase the maximum lifetime of the token. If the Kafka client is not configured with an access token directly, the client exchanges credentials for an access token during Kafka session initiation by contacting the authorization server. The Kafka client exchanges either: Client ID and Secret Client ID, refresh token, and (optionally) a secret Username and password, with client ID and (optionally) a secret 6.4.9.5. OAuth 2.0 client authentication flows OAuth 2.0 authentication flows depend on the underlying Kafka client and Kafka broker configuration. The flows must also be supported by the authorization server used. The Kafka broker listener configuration determines how clients authenticate using an access token. The client can pass a client ID and secret to request an access token. If a listener is configured to use PLAIN authentication, the client can authenticate with a client ID and secret or username and access token. These values are passed as the username and password properties of the PLAIN mechanism. Listener configuration supports the following token validation options: You can use fast local token validation based on JWT signature checking and local token introspection, without contacting an authorization server. The authorization server provides a JWKS endpoint with public certificates that are used to validate signatures on the tokens. You can use a call to a token introspection endpoint provided by an authorization server. Each time a new Kafka broker connection is established, the broker passes the access token received from the client to the authorization server. The Kafka broker checks the response to confirm whether or not the token is valid. Note An authorization server might only allow the use of opaque access tokens, which means that local token validation is not possible. Kafka client credentials can also be configured for the following types of authentication: Direct local access using a previously generated long-lived access token Contact with the authorization server for a new access token to be issued (using a client ID and a secret, or a refresh token, or a username and a password) 6.4.9.5.1. Example client authentication flows using the SASL OAUTHBEARER mechanism You can use the following communication flows for Kafka authentication using the SASL OAUTHBEARER mechanism. 
Client using client ID and secret, with broker delegating validation to authorization server Client using client ID and secret, with broker performing fast local token validation Client using long-lived access token, with broker delegating validation to authorization server Client using long-lived access token, with broker performing fast local validation Client using client ID and secret, with broker delegating validation to authorization server The Kafka client requests an access token from the authorization server using a client ID and secret, and optionally a refresh token. Alternatively, the client may authenticate using a username and a password. The authorization server generates a new access token. The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the access token. The Kafka broker validates the access token by calling a token introspection endpoint on the authorization server using its own client ID and secret. A Kafka client session is established if the token is valid. Client using client ID and secret, with broker performing fast local token validation The Kafka client authenticates with the authorization server from the token endpoint, using a client ID and secret, and optionally a refresh token. Alternatively, the client may authenticate using a username and a password. The authorization server generates a new access token. The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the access token. The Kafka broker validates the access token locally using a JWT token signature check, and local token introspection. Client using long-lived access token, with broker delegating validation to authorization server The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the long-lived access token. The Kafka broker validates the access token by calling a token introspection endpoint on the authorization server, using its own client ID and secret. A Kafka client session is established if the token is valid. Client using long-lived access token, with broker performing fast local validation The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the long-lived access token. The Kafka broker validates the access token locally using a JWT token signature check and local token introspection. Warning Fast local JWT token signature validation is suitable only for short-lived tokens as there is no check with the authorization server if a token has been revoked. Token expiration is written into the token, but revocation can happen at any time, so cannot be accounted for without contacting the authorization server. Any issued token would be considered valid until it expires. 6.4.9.5.2. Example client authentication flows using the SASL PLAIN mechanism You can use the following communication flows for Kafka authentication using the OAuth PLAIN mechanism. Client using a client ID and secret, with the broker obtaining the access token for the client Client using a long-lived access token without a client ID and secret Client using a client ID and secret, with the broker obtaining the access token for the client The Kafka client passes a clientId as a username and a secret as a password. The Kafka broker uses a token endpoint to pass the clientId and secret to the authorization server. The authorization server returns a fresh access token or an error if the client credentials are not valid. 
The Kafka broker validates the token in one of the following ways: If a token introspection endpoint is specified, the Kafka broker validates the access token by calling the endpoint on the authorization server. A session is established if the token validation is successful. If local token introspection is used, a request is not made to the authorization server. The Kafka broker validates the access token locally using a JWT token signature check. Client using a long-lived access token without a client ID and secret The Kafka client passes a username and password. The password provides the value of an access token that was obtained manually and configured before running the client. The password is passed with or without a $accessToken: string prefix depending on whether or not the Kafka broker listener is configured with a token endpoint for authentication. If the token endpoint is configured, the password should be prefixed by $accessToken: to let the broker know that the password parameter contains an access token rather than a client secret. The Kafka broker interprets the username as the account username. If the token endpoint is not configured on the Kafka broker listener (enforcing a no-client-credentials mode ), the password should provide the access token without the prefix. The Kafka broker interprets the username as the account username. In this mode, the client doesn't use a client ID and secret, and the password parameter is always interpreted as a raw access token. The Kafka broker validates the token in one of the following ways: If a token introspection endpoint is specified, the Kafka broker validates the access token by calling the endpoint on the authorization server. A session is established if token validation is successful. If local token introspection is used, there is no request made to the authorization server. The Kafka broker validates the access token locally using a JWT token signature check. 6.4.9.6. Configuring OAuth 2.0 authentication OAuth 2.0 is used for interaction between Kafka clients and AMQ Streams components. In order to use OAuth 2.0 for AMQ Streams, you must: Configure an OAuth 2.0 authorization server for the AMQ Streams cluster and Kafka clients Deploy or update the Kafka cluster with Kafka broker listeners configured to use OAuth 2.0 Update your Java-based Kafka clients to use OAuth 2.0 6.4.9.6.1. Configuring Red Hat Single Sign-On as an OAuth 2.0 authorization server This procedure describes how to deploy Red Hat Single Sign-On as an authorization server and configure it for integration with AMQ Streams. The authorization server provides a central point for authentication and authorization, and management of users, clients, and permissions. Red Hat Single Sign-On has a concept of realms where a realm represents a separate set of users, clients, permissions, and other configuration. You can use a default master realm , or create a new one. Each realm exposes its own OAuth 2.0 endpoints, which means that application clients and application servers all need to use the same realm. To use OAuth 2.0 with AMQ Streams, you use a deployment of Red Hat Single Sign-On to create and manage authentication realms. Note If you already have Red Hat Single Sign-On deployed, you can skip the deployment step and use your current deployment. Before you begin You will need to be familiar with using Red Hat Single Sign-On. 
For installation and administration instructions, see: Server Installation and Configuration Guide Server Administration Guide Prerequisites AMQ Streams and Kafka are running For the Red Hat Single Sign-On deployment: Check the Red Hat Single Sign-On Supported Configurations Procedure Install Red Hat Single Sign-On. You can install from a ZIP file or by using an RPM. Log in to the Red Hat Single Sign-On Admin Console to create the OAuth 2.0 policies for AMQ Streams. Login details are provided when you deploy Red Hat Single Sign-On. Create and enable a realm. You can use an existing master realm. Adjust the session and token timeouts for the realm, if required. Create a client called kafka-broker . From the Settings tab, set: Access Type to Confidential Standard Flow Enabled to OFF to disable web login for this client Service Accounts Enabled to ON to allow this client to authenticate in its own name Click Save before continuing. From the Credentials tab, take a note of the secret for using in your AMQ Streams Kafka cluster configuration. Repeat the client creation steps for any application client that will connect to your Kafka brokers. Create a definition for each new client. You will use the names as client IDs in your configuration. What to do After deploying and configuring the authorization server, configure the Kafka brokers to use OAuth 2.0 . 6.4.9.6.2. Configuring OAuth 2.0 support for Kafka brokers This procedure describes how to configure Kafka brokers so that the broker listeners are enabled to use OAuth 2.0 authentication using an authorization server. We advise use of OAuth 2.0 over an encrypted interface through configuration of TLS listeners. Plain listeners are not recommended. Configure the Kafka brokers using properties that support your chosen authorization server, and the type of authorization you are implementing. Before you start For more information on the configuration and authentication of Kafka broker listeners, see: Listeners OAuth 2.0 authentication mechanisms For a description of the properties used in the listener configuration, see: OAuth 2.0 Kafka broker configuration Prerequisites AMQ Streams and Kafka are running An OAuth 2.0 authorization server is deployed Procedure Configure the Kafka broker listener configuration in the server.properties file. For example, using the OAUTHBEARER mechanism: sasl.enabled.mechanisms=OAUTHBEARER listeners=CLIENT://0.0.0.0:9092 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER sasl.mechanism.inter.broker.protocol=OAUTHBEARER inter.broker.listener.name=CLIENT listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ; listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler Configure broker connection settings as part of the listener.name.client.oauthbearer.sasl.jaas.config . The examples here show connection configuration options. 
Example 1: Local token validation using a JWKS endpoint configuration listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.valid.issuer.uri="https://<oauth_server_address>/auth/realms/<realm_name>" \ oauth.jwks.endpoint.uri="https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/certs" \ oauth.jwks.refresh.seconds="300" \ oauth.jwks.refresh.min.pause.seconds="1" \ oauth.jwks.expiry.seconds="360" \ oauth.username.claim="preferred_username" \ oauth.ssl.truststore.location="<path_to_truststore_p12_file>" \ oauth.ssl.truststore.password="<truststore_password>" \ oauth.ssl.truststore.type="PKCS12" ; listener.name.client.oauthbearer.connections.max.reauth.ms=3600000 Example 2: Delegating token validation to the authorization server through the OAuth 2.0 introspection endpoint listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.introspection.endpoint.uri="https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/introspection" \ # ... If required, configure access to the authorization server. This step is normally required for a production environment, unless a technology like service mesh is used to configure secure channels outside containers. Provide a custom truststore for connecting to a secured authorization server. SSL is always required for access to the authorization server. Set properties to configure the truststore. For example: listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ # ... oauth.client.id="kafka-broker" \ oauth.client.secret="kafka-broker-secret" \ oauth.ssl.truststore.location="<path_to_truststore_p12_file>" \ oauth.ssl.truststore.password="<truststore_password>" \ oauth.ssl.truststore.type="PKCS12" ; If the certificate hostname does not match the access URL hostname, you can turn off certificate hostname validation: oauth.ssl.endpoint.identification.algorithm="" The check ensures that the client connection to the authorization server is authentic. You may wish to turn off the validation in a non-production environment. Configure additional properties according to your chosen authentication flow: listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ # ... oauth.token.endpoint.uri="https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/token" \ 1 oauth.custom.claim.check="@.custom == 'custom-value'" \ 2 oauth.scope="<scope>" \ 3 oauth.check.audience="true" \ 4 oauth.audience="<audience>" \ 5 oauth.valid.issuer.uri="https://<oauth_server_address>/auth/realms/<realm_name>" \ 6 oauth.client.id="kafka-broker" \ 7 oauth.client.secret="kafka-broker-secret" \ 8 oauth.connect.timeout.seconds=60 \ 9 oauth.read.timeout.seconds=60 \ 10 oauth.http.retries=2 \ 11 oauth.http.retry.pause.millis=300 \ 12 oauth.groups.claim="$.groups" \ 13 oauth.groups.claim.delimiter="," ; 14 1 The OAuth 2.0 token endpoint URL to your authorization server. For production, always use https:// URLs. Required when KeycloakAuthorizer is used, or an OAuth 2.0 enabled listener is used for inter-broker communication. 2 (Optional) Custom claim checking . A JsonPath filter query that applies additional custom rules to the JWT access token during validation. If the access token does not contain the necessary data, it is rejected. 
When using the introspection endpoint method, the custom check is applied to the introspection endpoint response JSON. 3 (Optional) A scope parameter passed to the token endpoint. A scope is used when obtaining an access token for inter-broker authentication. It is also used in the name of a client for OAuth 2.0 over PLAIN client authentication using a clientId and secret . This only affects the ability to obtain the token, and the content of the token, depending on the authorization server. It does not affect token validation rules by the listener. 4 (Optional) Audience checking . If your authorization server provides an aud (audience) claim, and you want to enforce an audience check, set oauth.check.audience to true . Audience checks identify the intended recipients of tokens. As a result, the Kafka broker will reject tokens that do not have its clientId in their aud claims. Default is false . 5 (Optional) An audience parameter passed to the token endpoint. An audience is used when obtaining an access token for inter-broker authentication. It is also used in the name of a client for OAuth 2.0 over PLAIN client authentication using a clientId and secret . This only affects the ability to obtain the token, and the content of the token, depending on the authorization server. It does not affect token validation rules by the listener. 6 A valid issuer URI. Only access tokens issued by this issuer will be accepted. (Always required.) 7 The configured client ID of the Kafka broker, which is the same for all brokers. This is the client registered with the authorization server as kafka-broker . Required when an introspection endpoint is used for token validation, or when KeycloakAuthorizer is used. 8 The configured secret for the Kafka broker, which is the same for all brokers. When the broker must authenticate to the authorization server, either a client secret, an access token, or a refresh token must be specified. 9 (Optional) The connect timeout in seconds when connecting to the authorization server. The default value is 60. 10 (Optional) The read timeout in seconds when connecting to the authorization server. The default value is 60. 11 The maximum number of times to retry a failed HTTP request to the authorization server. The default value is 0, meaning that no retries are performed. To use this option effectively, consider reducing the timeout times for the oauth.connect.timeout.seconds and oauth.read.timeout.seconds options. However, note that retries may prevent the current worker thread from being available to other requests, and if too many requests stall, it could make the Kafka broker unresponsive. 12 The time to wait before attempting another retry of a failed HTTP request to the authorization server. By default, this time is set to zero, meaning that no pause is applied. This is because many issues that cause failed requests are per-request network glitches or proxy issues that can be resolved quickly. However, if your authorization server is under stress or experiencing high traffic, you may want to set this option to a value of 100 ms or more to reduce the load on the server and increase the likelihood of successful retries. 13 A JsonPath query used to extract groups information from the JWT token or the introspection endpoint response. Not set by default. This can be used by a custom authorizer to make authorization decisions based on user groups. 14 A delimiter used to parse groups information when returned as a single delimited string. The default value is ',' (comma). 
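As a brief, hedged illustration of how several of these options combine, the following sketch enables audience checking and group extraction on the client listener used in the earlier examples. The issuer, JWKS URI, audience ( kafka-broker ), and groups claim are assumptions; they must match what your authorization server actually issues.

# Fast local validation plus audience check and group extraction (illustrative values)
listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.valid.issuer.uri="https://<oauth_server_address>/auth/realms/<realm_name>" \
  oauth.jwks.endpoint.uri="https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/certs" \
  oauth.username.claim="preferred_username" \
  oauth.check.audience="true" \
  oauth.client.id="kafka-broker" \
  oauth.groups.claim="$.groups" \
  oauth.groups.claim.delimiter="," ;

With this configuration, the broker rejects any token whose aud claim does not include kafka-broker , and a groups claim delivered as a single string such as "group-a,group-b" is split on the comma delimiter for use by a custom authorizer.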
Depending on how you apply OAuth 2.0 authentication, and the type of authorization server being used, add additional configuration settings: listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ # ... oauth.check.issuer=false \ 1 oauth.fallback.username.claim="<client_id>" \ 2 oauth.fallback.username.prefix="<client_account>" \ 3 oauth.valid.token.type="bearer" \ 4 oauth.userinfo.endpoint.uri="https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/userinfo" ; 5 1 If your authorization server does not provide an iss claim, it is not possible to perform an issuer check. In this situation, set oauth.check.issuer to false and do not specify an oauth.valid.issuer.uri . Default is true . 2 An authorization server may not provide a single attribute to identify both regular users and clients. When a client authenticates in its own name, the server might provide a client ID . When a user authenticates using a username and password, to obtain a refresh token or an access token, the server might provide a username attribute in addition to a client ID. Use this fallback option to specify the username claim (attribute) to use if a primary user ID attribute is not available. If required, you can use a JsonPath expression like "['client.info'].['client.id']" to retrieve the fallback username from nested JSON attributes within a token. 3 In situations where oauth.fallback.username.claim is applicable, it may also be necessary to prevent name collisions between the values of the username claim, and those of the fallback username claim. Consider a situation where a client called producer exists, but also a regular user called producer exists. In order to differentiate between the two, you can use this property to add a prefix to the user ID of the client. 4 (Only applicable when using oauth.introspection.endpoint.uri ) Depending on the authorization server you are using, the introspection endpoint may or may not return the token type attribute, or it may contain different values. You can specify a valid token type value that the response from the introspection endpoint has to contain. 5 (Only applicable when using oauth.introspection.endpoint.uri ) The authorization server may be configured or implemented in such a way that it does not provide any identifiable information in an introspection endpoint response. In order to obtain the user ID, you can configure the URI of the userinfo endpoint as a fallback. The oauth.username.claim , oauth.fallback.username.claim , and oauth.fallback.username.prefix settings are applied to the response of the userinfo endpoint. What to do next Configure your Kafka clients to use OAuth 2.0 6.4.9.6.3. Configuring Kafka Java clients to use OAuth 2.0 Configure Kafka producer and consumer APIs to use OAuth 2.0 for interaction with Kafka brokers. Add a callback plugin to your client pom.xml file, then configure your client for OAuth 2.0. Specify the following in your client configuration: A SASL (Simple Authentication and Security Layer) security protocol: SASL_SSL for authentication over TLS encrypted connections SASL_PLAINTEXT for authentication over unencrypted connections Use SASL_SSL for production and SASL_PLAINTEXT for local development only. When using SASL_SSL , additional ssl.truststore configuration is needed. The truststore configuration is required for secure connection ( https:// ) to the OAuth 2.0 authorization server. 
To verify the OAuth 2.0 authorization server, add the CA certificate for the authorization server to the truststore in your client configuration. You can configure a truststore in PEM or PKCS #12 format. A Kafka SASL mechanism: OAUTHBEARER for credentials exchange using a bearer token PLAIN to pass client credentials (clientId + secret) or an access token A JAAS (Java Authentication and Authorization Service) module that implements the SASL mechanism: org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule implements the OAUTHBEARER mechanism org.apache.kafka.common.security.plain.PlainLoginModule implements the PLAIN mechanism SASL authentication properties, which support the following authentication methods: OAuth 2.0 client credentials OAuth 2.0 password grant (deprecated) Access token Refresh token Add the SASL authentication properties as JAAS configuration ( sasl.jaas.config ). How you configure the authentication properties depends on the authentication method you are using to access the OAuth 2.0 authorization server. In this procedure, the properties are specified in a properties file, then loaded into the client configuration. Note You can also specify authentication properties as environment variables, or as Java system properties. For Java system properties, you can set them using setProperty and pass them on the command line using the -D option. Prerequisites AMQ Streams and Kafka are running An OAuth 2.0 authorization server is deployed and configured for OAuth access to Kafka brokers Kafka brokers are configured for OAuth 2.0 Procedure Add the client library with OAuth 2.0 support to the pom.xml file for the Kafka client: <dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>0.13.0.redhat-00015</version> </dependency> Configure the client properties by specifying the following configuration in a properties file: The security protocol The SASL mechanism The JAAS module and authentication properties according to the method being used For example, we can add the following to a client.properties file: Client credentials mechanism properties security.protocol=SASL_SSL 1 sasl.mechanism=OAUTHBEARER 2 ssl.truststore.location=/tmp/truststore.p12 3 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri=" <token_endpoint_url> " \ 4 oauth.client.id=" <client_id> " \ 5 oauth.client.secret=" <client_secret> " \ 6 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ 7 oauth.ssl.truststore.password="USDSTOREPASS" \ 8 oauth.ssl.truststore.type="PKCS12" \ 9 oauth.scope=" <scope> " \ 10 oauth.audience=" <audience> " ; 11 1 SASL_SSL security protocol for TLS-encrypted connections. Use SASL_PLAINTEXT over unencrypted connections for local development only. 2 The SASL mechanism specified as OAUTHBEARER or PLAIN . 3 The truststore configuration for secure access to the Kafka cluster. 4 URI of the authorization server token endpoint. 5 Client ID, which is the name used when creating the client in the authorization server. 6 Client secret created when creating the client in the authorization server. 7 The location contains the public key certificate ( truststore.p12 ) for the authorization server. 8 The password for accessing the truststore. 9 The truststore type. 10 (Optional) The scope for requesting the token from the token endpoint. An authorization server may require a client to specify the scope. 
11 (Optional) The audience for requesting the token from the token endpoint. An authorization server may require a client to specify the audience. Password grants mechanism properties security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri=" <token_endpoint_url> " \ oauth.client.id=" <client_id> " \ 1 oauth.client.secret=" <client_secret> " \ 2 oauth.password.grant.username=" <username> " \ 3 oauth.password.grant.password=" <password> " \ 4 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ oauth.ssl.truststore.password="USDSTOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ oauth.scope=" <scope> " \ oauth.audience=" <audience> " ; 1 Client ID, which is the name used when creating the client in the authorization server. 2 (Optional) Client secret created when creating the client in the authorization server. 3 Username for password grant authentication. OAuth password grant configuration (username and password) uses the OAuth 2.0 password grant method. To use password grants, create a user account for a client on your authorization server with limited permissions. The account should act like a service account. Use in environments where user accounts are required for authentication, but consider using a refresh token first. 4 Password for password grant authentication. Note SASL PLAIN does not support passing a username and password (password grants) using the OAuth 2.0 password grant method. Access token properties security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri=" <token_endpoint_url> " \ oauth.access.token=" <access_token> " ; 1 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ oauth.ssl.truststore.password="USDSTOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ 1 Long-lived access token for Kafka clients. Refresh token properties security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri=" <token_endpoint_url> " \ oauth.client.id=" <client_id> " \ 1 oauth.client.secret=" <client_secret> " \ 2 oauth.refresh.token=" <refresh_token> " ; 3 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ oauth.ssl.truststore.password="USDSTOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ 1 Client ID, which is the name used when creating the client in the authorization server. 2 (Optional) Client secret created when creating the client in the authorization server. 3 Long-lived refresh token for Kafka clients. Input the client properties for OAUTH 2.0 authentication into the Java client code. Example showing input of client properties Properties props = new Properties(); try (FileReader reader = new FileReader("client.properties", StandardCharsets.UTF_8)) { props.load(reader); } Verify that the Kafka client can access the Kafka brokers. 6.4.10. 
Using OAuth 2.0 token-based authorization If you are using OAuth 2.0 with Red Hat Single Sign-On for token-based authentication, you can also use Red Hat Single Sign-On to configure authorization rules to constrain client access to Kafka brokers. Authentication establishes the identity of a user. Authorization decides the level of access for that user. AMQ Streams supports the use of OAuth 2.0 token-based authorization through Red Hat Single Sign-On Authorization Services , which allows you to manage security policies and permissions centrally. Security policies and permissions defined in Red Hat Single Sign-On are used to grant access to resources on Kafka brokers. Users and clients are matched against policies that permit access to perform specific actions on Kafka brokers. Kafka allows all users full access to brokers by default, and also provides the AclAuthorizer plugin to configure authorization based on Access Control Lists (ACLs). ZooKeeper stores ACL rules that grant or deny access to resources based on username . However, OAuth 2.0 token-based authorization with Red Hat Single Sign-On offers far greater flexibility on how you wish to implement access control to Kafka brokers. In addition, you can configure your Kafka brokers to use OAuth 2.0 authorization and ACLs. Additional resources Using OAuth 2.0 token-based authentication Kafka Authorization Red Hat Single Sign-On documentation 6.4.10.1. OAuth 2.0 authorization mechanism OAuth 2.0 authorization in AMQ Streams uses Red Hat Single Sign-On server Authorization Services REST endpoints to extend token-based authentication with Red Hat Single Sign-On by applying defined security policies on a particular user, and providing a list of permissions granted on different resources for that user. Policies use roles and groups to match permissions to users. OAuth 2.0 authorization enforces permissions locally based on the received list of grants for the user from Red Hat Single Sign-On Authorization Services. 6.4.10.1.1. Kafka broker custom authorizer A Red Hat Single Sign-On authorizer ( KeycloakAuthorizer ) is provided with AMQ Streams. To be able to use the Red Hat Single Sign-On REST endpoints for Authorization Services provided by Red Hat Single Sign-On, you configure a custom authorizer on the Kafka broker. The authorizer fetches a list of granted permissions from the authorization server as needed, and enforces authorization locally on the Kafka Broker, making rapid authorization decisions for each client request. 6.4.10.2. Configuring OAuth 2.0 authorization support This procedure describes how to configure Kafka brokers to use OAuth 2.0 authorization using Red Hat Single Sign-On Authorization Services. Before you begin Consider the access you require or want to limit for certain users. You can use a combination of Red Hat Single Sign-On groups , roles , clients , and users to configure access in Red Hat Single Sign-On. Typically, groups are used to match users based on organizational departments or geographical locations. And roles are used to match users based on their function. With Red Hat Single Sign-On, you can store users and groups in LDAP, whereas clients and roles cannot be stored this way. Storage and access to user data may be a factor in how you choose to configure authorization policies. Note Super users always have unconstrained access to a Kafka broker regardless of the authorization implemented on the Kafka broker. 
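For reference, super users are declared directly in server.properties using the super.users property; the principal names below are illustrative only and must correspond to the user names your listener resolves from tokens (for example, through oauth.username.claim ).

# These principals bypass KeycloakAuthorizer (and any other configured authorizer) entirely
super.users=User:admin;User:alice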
Prerequisites AMQ Streams must be configured to use OAuth 2.0 with Red Hat Single Sign-On for token-based authentication . You use the same Red Hat Single Sign-On server endpoint when you set up authorization. You need to understand how to manage policies and permissions for Red Hat Single Sign-On Authorization Services, as described in the Red Hat Single Sign-On documentation . Procedure Access the Red Hat Single Sign-On Admin Console or use the Red Hat Single Sign-On Admin CLI to enable Authorization Services for the Kafka broker client you created when setting up OAuth 2.0 authentication. Use Authorization Services to define resources, authorization scopes, policies, and permissions for the client. Bind the permissions to users and clients by assigning them roles and groups. Configure the Kafka brokers to use Red Hat Single Sign-On authorization. Add the following to the Kafka server.properties configuration file to install the authorizer in Kafka: authorizer.class.name=io.strimzi.kafka.oauth.server.authorizer.KeycloakAuthorizer principal.builder.class=io.strimzi.kafka.oauth.server.OAuthKafkaPrincipalBuilder Add configuration for the Kafka brokers to access the authorization server and Authorization Services. Here we show example configuration added as additional properties to server.properties , but you can also define them as environment variables using capitalized or upper-case naming conventions. strimzi.authorization.token.endpoint.uri="https://<auth_server_address>/auth/realms/REALM-NAME/protocol/openid-connect/token" 1 strimzi.authorization.client.id="kafka" 2 1 The OAuth 2.0 token endpoint URL to Red Hat Single Sign-On. For production, always use https:// URLs. 2 The client ID of the OAuth 2.0 client definition in Red Hat Single Sign-On that has Authorization Services enabled. Typically, kafka is used as the ID. (Optional) Add configuration for specific Kafka clusters. For example: strimzi.authorization.kafka.cluster.name="kafka-cluster" 1 1 The name of a specific Kafka cluster. Names are used to target permissions, making it possible to manage multiple clusters within the same Red Hat Single Sign-On realm. The default value is kafka-cluster . (Optional) Delegate to simple authorization. For example: strimzi.authorization.delegate.to.kafka.acl="false" 1 1 Delegate authorization to Kafka AclAuthorizer if access is denied by Red Hat Single Sign-On Authorization Services policies. The default is false . (Optional) Add configuration for TLS connection to the authorization server. For example: strimzi.authorization.ssl.truststore.location=<path_to_truststore> 1 strimzi.authorization.ssl.truststore.password=<my_truststore_password> 2 strimzi.authorization.ssl.truststore.type=JKS 3 strimzi.authorization.ssl.secure.random.implementation=SHA1PRNG 4 strimzi.authorization.ssl.endpoint.identification.algorithm=HTTPS 5 1 The path to the truststore that contains the certificates. 2 The password for the truststore. 3 The truststore type. If not set, the default Java keystore type is used. 4 Random number generator implementation. If not set, the Java platform SDK default is used. 5 Hostname verification. If set to an empty string, the hostname verification is turned off. If not set, the default value is HTTPS , which enforces hostname verification for server certificates. (Optional) Configure the refresh of grants from the authorization server. The grants refresh job works by enumerating the active tokens and requesting the latest grants for each. 
For example: strimzi.authorization.grants.refresh.period.seconds="120" 1 strimzi.authorization.grants.refresh.pool.size="10" 2 strimzi.authorization.grants.max.idle.time.seconds="300" 3 strimzi.authorization.grants.gc.period.seconds="300" 4 strimzi.authorization.reuse.grants="false" 5 1 Specifies how often the list of grants from the authorization server is refreshed (once per minute by default). To turn grants refresh off for debugging purposes, set to "0" . 2 Specifies the size of the thread pool (the degree of parallelism) used by the grants refresh job. The default value is "5" . 3 The time, in seconds, after which an idle grant in the cache can be evicted. The default value is 300. 4 The time, in seconds, between consecutive runs of a job that cleans stale grants from the cache. The default value is 300. 5 Controls whether the latest grants are fetched for a new session. When disabled, grants are retrieved from Red Hat Single Sign-On and cached for the user. The default value is true . (Optional) Configure network timeouts when communicating with the authorization server. For example: strimzi.authorization.connect.timeout.seconds="60" 1 strimzi.authorization.read.timeout.seconds="60" 2 strimzi.authorization.http.retries="2" 3 1 The connect timeout in seconds when connecting to the Red Hat Single Sign-On token endpoint. The default value is 60 . 2 The read timeout in seconds when connecting to the Red Hat Single Sign-On token endpoint. The default value is 60 . 3 The maximum number of times to retry (without pausing) a failed HTTP request to the authorization server. The default value is 0 , meaning that no retries are performed. To use this option effectively, consider reducing the timeout times for the strimzi.authorization.connect.timeout.seconds and strimzi.authorization.read.timeout.seconds options. However, note that retries may prevent the current worker thread from being available to other requests, and if too many requests stall, it could make the Kafka broker unresponsive. (Optional) Enable OAuth 2.0 metrics for token validation and authorization. For example: oauth.enable.metrics="true" 1 1 Controls whether to enable or disable OAuth metrics. The default value is false . Verify the configured permissions by accessing Kafka brokers as clients or users with specific roles, making sure they have the necessary access, or do not have the access they are not supposed to have. 6.4.11. Using OPA policy-based authorization Open Policy Agent (OPA) is an open-source policy engine. You can integrate OPA with AMQ Streams to act as a policy-based authorization mechanism for permitting client operations on Kafka brokers. When a request is made from a client, OPA will evaluate the request against policies defined for Kafka access, then allow or deny the request. Note Red Hat does not support the OPA server. Additional resources Open Policy Agent website 6.4.11.1. Defining OPA policies Before integrating OPA with AMQ Streams, consider how you will define policies to provide fine-grained access controls. You can define access control for Kafka clusters, consumer groups and topics. For instance, you can define an authorization policy that allows write access from a producer client to a specific broker topic. 
For this, the policy might specify the: User principal and host address associated with the producer client Operations allowed for the client Resource type ( topic ) and resource name the policy applies to Allow and deny decisions are written into the policy, and a response is provided based on the request and client identification data provided. In our example, the producer client would have to satisfy the policy to be allowed to write to the topic. 6.4.11.2. Connecting to the OPA To enable Kafka to access the OPA policy engine to query access control policies, you configure a custom OPA authorizer plugin ( kafka-authorizer-opa- VERSION .jar ) in your Kafka server.properties file. When a request is made by a client, the OPA policy engine is queried by the plugin using a specified URL address and a REST endpoint, which must be the name of the defined policy. The plugin provides the details of the client request - user principal, operation, and resource - in JSON format to be checked against the policy. The details will include the unique identity of the client; for example, taking the distinguished name from the client certificate if TLS authentication is used. OPA uses the data to provide a response - either true or false - to the plugin to allow or deny the request. 6.4.11.3. Configuring OPA authorization support This procedure describes how to configure Kafka brokers to use OPA authorization. Before you begin Consider the access you require or want to limit for certain users. You can use a combination of users and Kafka resources to define OPA policies. It is possible to set up OPA to load user information from an LDAP data source. Note Super users always have unconstrained access to a Kafka broker regardless of the authorization implemented on the Kafka broker. Prerequisites An OPA server must be available for connection. OPA authorizer plugin for Kafka Procedure Write the OPA policies required for authorizing client requests to perform operations on the Kafka brokers. See Defining OPA policies . Now configure the Kafka brokers to use OPA. Install the OPA authorizer plugin for Kafka . See Connecting to the OPA . Make sure that the plugin files are included in the Kafka classpath. Add the following to the Kafka server.properties configuration file to enable the OPA plugin: authorizer.class.name=com.bisnode.kafka.authorization.OpaAuthorizer Add further configuration to server.properties for the Kafka brokers to access the OPA policy engine and policies. For example: opa.authorizer.url=https:// OPA-ADDRESS /allow 1 opa.authorizer.allow.on.error=false 2 opa.authorizer.cache.initial.capacity=50000 3 opa.authorizer.cache.maximum.size=50000 4 opa.authorizer.cache.expire.after.seconds=600000 5 super.users=User:alice;User:bob 6 1 (Required) The URL of the OPA policy that the authorizer plugin queries. In this example, the policy is called allow . 2 Flag to specify whether a client is allowed or denied access by default if the authorizer plugin fails to connect with the OPA policy engine. 3 Initial capacity in bytes of the local cache. The cache is used so that the plugin does not have to query the OPA policy engine for every request. 4 Maximum capacity in bytes of the local cache. 5 Time in seconds after which entries in the local cache expire and are reloaded from the OPA policy engine. 6 A list of user principals treated as super users, so that they are always allowed without querying the Open Policy Agent policy. 
Refer to the Open Policy Agent website for information on authentication and authorization options. Verify the configured permissions by accessing Kafka brokers using clients that have and do not have the correct authorization.
[ "<option> = <value>", "This is a comment", "sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"bob\" password=\"bobs-password\";", "config.providers=env config.providers.env.class=io.strimzi.kafka.EnvVarConfigProvider", "option=USD{env: <MY_ENV_VAR_NAME> }", "tickTime=2000 dataDir=/var/lib/zookeeper/ clientPort=2181", "tickTime=2000 dataDir=/var/lib/zookeeper/ initLimit=5 syncLimit=2 reconfigEnabled=true standaloneEnabled=false server.1=172.17.0.1:2888:3888:participant;172.17.0.1:2181 server.2=172.17.0.2:2888:3888:participant;172.17.0.2:2181 server.3=172.17.0.3:2888:3888:participant;172.17.0.3:2181", "ContextName { param1 param2; };", "QuorumServer { org.apache.zookeeper.server.auth.DigestLoginModule required user_zookeeper=\"123456\"; }; QuorumLearner { org.apache.zookeeper.server.auth.DigestLoginModule required username=\"zookeeper\" password=\"123456\"; };", "quorum.auth.enableSasl=true quorum.auth.learnerRequireSasl=true quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner quorum.auth.server.loginContext=QuorumServer quorum.cnxn.threads.size=20", "su - kafka export KAFKA_OPTS=\"-Djava.security.auth.login.config=/opt/kafka/config/zookeeper-jaas.conf\"; /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties", "Server { org.apache.zookeeper.server.auth.DigestLoginModule required user_super=\"123456\" user_kafka=\"123456\" user_someoneelse=\"123456\"; };", "requireClientAuthScheme=sasl authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider.2=org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider.3=org.apache.zookeeper.server.auth.SASLAuthenticationProvider", "su - kafka export KAFKA_OPTS=\"-Djava.security.auth.login.config=/opt/kafka/config/zookeeper-jaas.conf\"; /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties", "QuorumServer { org.apache.zookeeper.server.auth.DigestLoginModule required user_ <Username> =\" <Password> \"; }; QuorumLearner { org.apache.zookeeper.server.auth.DigestLoginModule required username=\" <Username> \" password=\" <Password> \"; };", "QuorumServer { org.apache.zookeeper.server.auth.DigestLoginModule required user_zookeeper=\"123456\"; }; QuorumLearner { org.apache.zookeeper.server.auth.DigestLoginModule required username=\"zookeeper\" password=\"123456\"; };", "quorum.auth.enableSasl=true quorum.auth.learnerRequireSasl=true quorum.auth.serverRequireSasl=true quorum.auth.learner.loginContext=QuorumLearner quorum.auth.server.loginContext=QuorumServer quorum.cnxn.threads.size=20", "su - kafka export KAFKA_OPTS=\"-Djava.security.auth.login.config=/opt/kafka/config/zookeeper-jaas.conf\"; /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties", "Server { org.apache.zookeeper.server.auth.DigestLoginModule required user_super=\" <SuperUserPassword> \" user <Username1>_=\" <Password1> \" user <USername2>_=\" <Password2> \"; };", "Server { org.apache.zookeeper.server.auth.DigestLoginModule required user_super=\"123456\" user_kafka=\"123456\"; };", "requireClientAuthScheme=sasl authProvider. <IdOfBroker1> =org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider. <IdOfBroker2> =org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider. 
<IdOfBroker3> =org.apache.zookeeper.server.auth.SASLAuthenticationProvider", "requireClientAuthScheme=sasl authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider.2=org.apache.zookeeper.server.auth.SASLAuthenticationProvider authProvider.3=org.apache.zookeeper.server.auth.SASLAuthenticationProvider", "su - kafka export KAFKA_OPTS=\"-Djava.security.auth.login.config=/opt/kafka/config/zookeeper-jaas.conf\"; /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties", "zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181", "zookeeper.connect=zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181/my-cluster-1", "<listenerName>://<hostname>:<port>", "listeners=internal-1://:9092,internal-2://:9093,replication://:9094", "listeners=internal-1://:9092,internal-2://:9093 advertised.listeners=internal-1://my-broker-1.my-domain.com:1234,internal-2://my-broker-1.my-domain.com:1235", "listeners=REPLICATION://0.0.0.0:9091 inter.broker.listener.name=REPLICATION", "listeners=CONTROLLER://0.0.0.0:9090,REPLICATION://0.0.0.0:9091 control.plane.listener.name=CONTROLLER", "log.dirs=/var/lib/kafka", "log.dirs=/var/lib/kafka1,/var/lib/kafka2,/var/lib/kafka3", "broker.id=1", "Client { org.apache.kafka.common.security.plain.PlainLoginModule required username=\"kafka\" password=\"123456\"; };", "Client { org.apache.kafka.common.security.plain.PlainLoginModule required username=\" <Username> \" password=\" <Password> \"; };", "Client { org.apache.kafka.common.security.plain.PlainLoginModule required username=\"kafka\" password=\"123456\"; };", "su - kafka export KAFKA_OPTS=\"-Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties", "authorizer.class.name=kafka.security.auth.AclAuthorizer", "super.users=User:admin,User:operator", "authorizer.class.name=kafka.security.auth.AclAuthorizer", "opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Read --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2", "opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1", "opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1", "opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic myTopic Current ACLs for resource `Topic:myTopic`: User:user1 has Allow permission for operations: Read from hosts: * User:user2 has Allow permission for operations: Read from hosts: * User:user2 has Deny permission for operations: Read from hosts: 127.0.0.1 User:user1 has Allow permission for operations: Describe from hosts: * User:user2 has Allow permission for operations: Describe from hosts: * User:user2 has Deny permission for operations: Describe from hosts: 127.0.0.1", "opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Read --topic myTopic --allow-principal User:user1 
--allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Describe --topic myTopic --allow-principal User:user1 --allow-principal User:user2 opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Read --operation Describe --group MyConsumerGroup --allow-principal User:user1 --allow-principal User:user2", "opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --consumer --topic myTopic --group MyConsumerGroup --allow-principal User:user1", "opt/kafka/bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --operation Describe --operation Read --topic myTopic --group MyConsumerGroup --deny-principal User:user1 --deny-host 127.0.0.1", "zookeeper.set.acl=true", "zookeeper.set.acl=true", "zookeeper.set.acl=true", "su - kafka cd /opt/kafka KAFKA_OPTS=\"-Djava.security.auth.login.config=./config/jaas.conf\"; ./bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect= <ZooKeeperURL> exit", "su - kafka cd /opt/kafka KAFKA_OPTS=\"-Djava.security.auth.login.config=./config/jaas.conf\"; ./bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=zoo1.my-domain.com:2181 exit", "listeners=INT1://:9092,INT2://:9093,REPLICATION://:9094", "listener.security.protocol.map=INT1:SASL_PLAINTEXT,INT2:SASL_SSL,REPLICATION:SSL", "listener.security.protocol.map=INT1:SSL,INT2:SSL,REPLICATION:SSL", "ssl.keystore.location=/path/to/keystore/server-1.jks ssl.keystore.password=123456", "listeners=INT1://:9092,INT2://:9093,REPLICATION://:9094 listener.security.protocol.map=INT1:SSL,INT2:SSL,REPLICATION:SSL Default configuration - will be used for listeners INT1 and INT2 ssl.keystore.location=/path/to/keystore/server-1.jks ssl.keystore.password=123456 Different configuration for listener REPLICATION listener.name.replication.ssl.keystore.location=/path/to/keystore/server-1.jks listener.name.replication.ssl.keystore.password=123456", "listeners=UNENCRYPTED://:9092,ENCRYPTED://:9093,REPLICATION://:9094 listener.security.protocol.map=UNENCRYPTED:PLAINTEXT,ENCRYPTED:SSL,REPLICATION:PLAINTEXT ssl.keystore.location=/path/to/keystore/server-1.jks ssl.keystore.password=123456", "ssl.truststore.location=/path/to/keystore/server-1.jks ssl.truststore.password=123456", "KAFKA_OPTS=\"-Djava.security.auth.login.config=/path/to/my/jaas.config\"; bin/kafka-server-start.sh", "sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512", "sasl.mechanism.inter.broker.protocol=PLAIN", "KafkaServer { org.apache.kafka.common.security.plain.PlainLoginModule required user_admin=\"123456\" user_user1=\"123456\" user_user2=\"123456\"; };", "KafkaServer { org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"123456\" user_admin=\"123456\" user_user1=\"123456\" user_user2=\"123456\"; };", "KafkaServer { org.apache.kafka.common.security.scram.ScramLoginModule required; };", "sasl.enabled.mechanisms=SCRAM-SHA-256,SCRAM-SHA-512 sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512", "bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-256=[password=123456],SCRAM-SHA-512=[password=123456]' --entity-type users --entity-name user1", "bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name user1", "KafkaServer { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab=\"/etc/security/keytabs/kafka_server.keytab\" principal=\"kafka/[email 
protected]\"; };", "sasl.enabled.mechanisms=GSSAPI sasl.mechanism.inter.broker.protocol=GSSAPI sasl.kerberos.service.name=kafka", "KafkaServer { org.apache.kafka.common.security.plain.PlainLoginModule required user_admin=\"123456\" user_user1=\"123456\" user_user2=\"123456\"; com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab=\"/etc/security/keytabs/kafka_server.keytab\" principal=\"kafka/[email protected]\"; org.apache.kafka.common.security.scram.ScramLoginModule required; };", "ssl.truststore.location=/path/to/truststore.jks ssl.truststore.password=123456 ssl.client.auth=required", "KafkaServer { org.apache.kafka.common.security.plain.PlainLoginModule required user_admin=\"123456\" user_user1=\"123456\" user_user2=\"123456\"; };", "listeners=INSECURE://:9092,AUTHENTICATED://:9093,REPLICATION://:9094 listener.security.protocol.map=INSECURE:PLAINTEXT,AUTHENTICATED:SASL_PLAINTEXT,REPLICATION:PLAINTEXT sasl.enabled.mechanisms=PLAIN", "su - kafka export KAFKA_OPTS=\"-Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties", "KafkaServer { org.apache.kafka.common.security.scram.ScramLoginModule required; };", "listeners=INSECURE://:9092,AUTHENTICATED://:9093,REPLICATION://:9094 listener.security.protocol.map=INSECURE:PLAINTEXT,AUTHENTICATED:SASL_PLAINTEXT,REPLICATION:PLAINTEXT sasl.enabled.mechanisms=SCRAM-SHA-512", "su - kafka export KAFKA_OPTS=\"-Djava.security.auth.login.config=/opt/kafka/config/jaas.conf\"; /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties", "bin/kafka-configs.sh --bootstrap-server <broker_address> --alter --add-config 'SCRAM-SHA-512=[password= <Password> ]' --entity-type users --entity-name <Username>", "bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-512=[password=123456]' --entity-type users --entity-name user1", "/opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name <Username>", "/opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name user1", "listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER", "listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER,PLAIN", "sasl.enabled.mechanisms=OAUTHBEARER 1 listeners=CLIENT://0.0.0.0:9092 2 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT 3 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER 4 sasl.mechanism.inter.broker.protocol=OAUTHBEARER 5 inter.broker.listener.name=CLIENT 6 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler 7 listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \\ 8 oauth.valid.issuer.uri=\"https://<oauth_server_address>\" \\ 9 oauth.jwks.endpoint.uri=\"https://<oauth_server_address>/jwks\" \\ 10 oauth.username.claim=\"preferred_username\" \\ 11 oauth.client.id=\"kafka-broker\" \\ 12 oauth.client.secret=\"kafka-secret\" \\ 13 oauth.token.endpoint.uri=\"https://<oauth_server_address>/token\" ; 14 listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 15 listener.name.client.oauthbearer.connections.max.reauth.ms=3600000 16", "listeners=REPLICATION://kafka:9091,CLIENT://kafka:9092 1 
listener.security.protocol.map=REPLICATION:SSL,CLIENT:SASL_PLAINTEXT 2 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER inter.broker.listener.name=REPLICATION listener.name.replication.ssl.keystore.password=<keystore_password> 3 listener.name.replication.ssl.truststore.password=<truststore_password> listener.name.replication.ssl.keystore.type=JKS listener.name.replication.ssl.truststore.type=JKS listener.name.replication.ssl.secure.random.implementation=SHA1PRNG 4 listener.name.replication.ssl.endpoint.identification.algorithm=HTTPS 5 listener.name.replication.ssl.keystore.location=<path_to_keystore> 6 listener.name.replication.ssl.truststore.location=<path_to_truststore> 7 listener.name.replication.ssl.client.auth=required 8 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \\ 9 oauth.valid.issuer.uri=\"https://<oauth_server_address>\" oauth.jwks.endpoint.uri=\"https://<oauth_server_address>/jwks\" oauth.username.claim=\"preferred_username\" ;", "listeners=CLIENT://0.0.0.0:9092 1 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT 2 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER,PLAIN 3 sasl.mechanism.inter.broker.protocol=OAUTHBEARER 4 inter.broker.listener.name=CLIENT 5 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler 6 listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \\ 7 oauth.valid.issuer.uri=\"http://<auth_server>/auth/realms/<realm>\" \\ 8 oauth.jwks.endpoint.uri=\"https://<auth_server>/auth/realms/<realm>/protocol/openid-connect/certs\" \\ 9 oauth.username.claim=\"preferred_username\" \\ 10 oauth.client.id=\"kafka-broker\" \\ 11 oauth.client.secret=\"kafka-secret\" \\ 12 oauth.token.endpoint.uri=\"https://<oauth_server_address>/token\" ; 13 listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 14 listener.name.client.plain.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.plain.JaasServerOauthOverPlainValidatorCallbackHandler 15 listener.name.client.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \\ 16 oauth.valid.issuer.uri=\"https://<oauth_server_address>\" \\ 17 oauth.jwks.endpoint.uri=\"https://<oauth_server_address>/jwks\" \\ 18 oauth.username.claim=\"preferred_username\" \\ 19 oauth.token.endpoint.uri=\"http://<auth_server>/auth/realms/<realm>/protocol/openid-connect/token\" ; 20 connections.max.reauth.ms=3600000 21", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.valid.issuer.uri=\"https://<oauth_server_address>\" \\ 1 oauth.jwks.endpoint.uri=\"https://<oauth_server_address>/jwks\" \\ 2 oauth.jwks.refresh.seconds=\"300\" \\ 3 oauth.jwks.refresh.min.pause.seconds=\"1\" \\ 4 oauth.jwks.expiry.seconds=\"360\" \\ 5 oauth.username.claim=\"preferred_username\" \\ 6 oauth.ssl.truststore.location=\"<path_to_truststore_p12_file>\" \\ 7 oauth.ssl.truststore.password=\"<truststore_password>\" \\ 8 oauth.ssl.truststore.type=\"PKCS12\" ; 9", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required 
oauth.introspection.endpoint.uri=\"https://<oauth_server_address>/introspection\" \\ 1 oauth.client.id=\"kafka-broker\" \\ 2 oauth.client.secret=\"kafka-broker-secret\" \\ 3 oauth.ssl.truststore.location=\"<path_to_truststore_p12_file>\" \\ 4 oauth.ssl.truststore.password=\"<truststore_password>\" \\ 5 oauth.ssl.truststore.type=\"PKCS12\" \\ 6 oauth.username.claim=\"preferred_username\" ; 7", "listener.name.client.oauthbearer.connections.max.reauth.ms=3600000", "sasl.enabled.mechanisms=OAUTHBEARER listeners=CLIENT://0.0.0.0:9092 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER sasl.mechanism.inter.broker.protocol=OAUTHBEARER inter.broker.listener.name=CLIENT listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ; listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.valid.issuer.uri=\"https://<oauth_server_address>/auth/realms/<realm_name>\" oauth.jwks.endpoint.uri=\"https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/certs\" oauth.jwks.refresh.seconds=\"300\" oauth.jwks.refresh.min.pause.seconds=\"1\" oauth.jwks.expiry.seconds=\"360\" oauth.username.claim=\"preferred_username\" oauth.ssl.truststore.location=\"<path_to_truststore_p12_file>\" oauth.ssl.truststore.password=\"<truststore_password>\" oauth.ssl.truststore.type=\"PKCS12\" ; listener.name.client.oauthbearer.connections.max.reauth.ms=3600000", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.introspection.endpoint.uri=\" https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/introspection \" #", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required # oauth.client.id=\"kafka-broker\" oauth.client.secret=\"kafka-broker-secret\" oauth.ssl.truststore.location=\"<path_to_truststore_p12_file>\" oauth.ssl.truststore.password=\"<truststore_password>\" oauth.ssl.truststore.type=\"PKCS12\" ;", "oauth.ssl.endpoint.identification.algorithm=\"\"", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required # oauth.token.endpoint.uri=\"https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/token\" \\ 1 oauth.custom.claim.check=\"@.custom == 'custom-value'\" \\ 2 oauth.scope=\"<scope>\" \\ 3 oauth.check.audience=\"true\" \\ 4 oauth.audience=\"<audience>\" \\ 5 oauth.valid.issuer.uri=\"https://https://<oauth_server_address>/auth/<realm_name>\" \\ 6 oauth.client.id=\"kafka-broker\" \\ 7 oauth.client.secret=\"kafka-broker-secret\" \\ 8 oauth.connect.timeout.seconds=60 \\ 9 oauth.read.timeout.seconds=60 \\ 10 oauth.http.retries=2 \\ 11 oauth.http.retry.pause.millis=300 \\ 12 oauth.groups.claim=\"USD.groups\" \\ 13 oauth.groups.claim.delimiter=\",\" ; 14", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required # oauth.check.issuer=false \\ 1 oauth.fallback.username.claim=\"<client_id>\" \\ 2 
oauth.fallback.username.prefix=\"<client_account>\" \\ 3 oauth.valid.token.type=\"bearer\" \\ 4 oauth.userinfo.endpoint.uri=\"https://<oauth_server_address>/auth/realms/<realm_name>/protocol/openid-connect/userinfo\" ; 5", "<dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>0.13.0.redhat-00015</version> </dependency>", "security.protocol=SASL_SSL 1 sasl.mechanism=OAUTHBEARER 2 ssl.truststore.location=/tmp/truststore.p12 3 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\" <token_endpoint_url> \" \\ 4 oauth.client.id=\" <client_id> \" \\ 5 oauth.client.secret=\" <client_secret> \" \\ 6 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" \\ 7 oauth.ssl.truststore.password=\"USDSTOREPASS\" \\ 8 oauth.ssl.truststore.type=\"PKCS12\" \\ 9 oauth.scope=\" <scope> \" \\ 10 oauth.audience=\" <audience> \" ; 11", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\" <token_endpoint_url> \" oauth.client.id=\" <client_id> \" \\ 1 oauth.client.secret=\" <client_secret> \" \\ 2 oauth.password.grant.username=\" <username> \" \\ 3 oauth.password.grant.password=\" <password> \" \\ 4 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.scope=\" <scope> \" oauth.audience=\" <audience> \" ;", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\" <token_endpoint_url> \" oauth.access.token=\" <access_token> \" ; 1 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" \\", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\" <token_endpoint_url> \" oauth.client.id=\" <client_id> \" \\ 1 oauth.client.secret=\" <client_secret> \" \\ 2 oauth.refresh.token=\" <refresh_token> \" ; 3 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" \\", "Properties props = new Properties(); try (FileReader reader = new FileReader(\"client.properties\", StandardCharsets.UTF_8)) { props.load(reader); }", "authorizer.class.name=io.strimzi.kafka.oauth.server.authorizer.KeycloakAuthorizer principal.builder.class=io.strimzi.kafka.oauth.server.OAuthKafkaPrincipalBuilder", "strimzi.authorization.token.endpoint.uri=\"https://<auth_server_address>/auth/realms/REALM-NAME/protocol/openid-connect/token\" 1 strimzi.authorization.client.id=\"kafka\" 2", "strimzi.authorization.kafka.cluster.name=\"kafka-cluster\" 1", "strimzi.authorization.delegate.to.kafka.acl=\"false\" 1", "strimzi.authorization.ssl.truststore.location=<path_to_truststore> 1 
strimzi.authorization.ssl.truststore.password=<my_truststore_password> 2 strimzi.authorization.ssl.truststore.type=JKS 3 strimzi.authorization.ssl.secure.random.implementation=SHA1PRNG 4 strimzi.authorization.ssl.endpoint.identification.algorithm=HTTPS 5", "strimzi.authorization.grants.refresh.period.seconds=\"120\" 1 strimzi.authorization.grants.refresh.pool.size=\"10\" 2 strimzi.authorization.grants.max.idle.time.seconds=\"300\" 3 strimzi.authorization.grants.gc.period.seconds=\"300\" 4 strimzi.authorization.reuse.grants=\"false\" 5", "strimzi.authorization.connect.timeout.seconds=\"60\" 1 strimzi.authorization.read.timeout.seconds=\"60\" 2 strimzi.authorization.http.retries=\"2\" 3", "oauth.enable.metrics=\"true\" 1", "authorizer.class.name: com.bisnode.kafka.authorization.OpaAuthorizer", "opa.authorizer.url=https:// OPA-ADDRESS /allow 1 opa.authorizer.allow.on.error=false 2 opa.authorizer.cache.initial.capacity=50000 3 opa.authorizer.cache.maximum.size=50000 4 opa.authorizer.cache.expire.after.seconds=600000 5 super.users=User:alice;User:bob 6" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/using_amq_streams_on_rhel/assembly-configuring-amq-streams-str
Chapter 4. Advisories related to this release
Chapter 4. Advisories related to this release The following advisories are issued to document bug fixes and CVE fixes included in this release: RHSA-2023:5734 RHSA-2023:5735 RHSA-2023:5736 RHSA-2023:5737 RHSA-2023:5739 RHSA-2023:5740 RHSA-2023:5741 RHSA-2023:5742 RHSA-2023:5743 RHSA-2023:5744 Revised on 2024-05-09 17:32:44 UTC
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.21/rn-openjdk11021-advisory_openjdk
Chapter 2. Configuring the CodeReady Workspaces installation
Chapter 2. Configuring the CodeReady Workspaces installation The following section describes configuration options to install Red Hat CodeReady Workspaces using the Operator. Additional resources Section 2.1, "Understanding the CheCluster Custom Resource" Section 2.2, "Using the OpenShift web console to configure the CheCluster Custom Resource during installation" Section 2.3, "Using the OpenShift web console to configure the CheCluster Custom Resource" Section 2.4, "Using crwctl to configure the CheCluster Custom Resource during installation" Section 2.5, "Using the CLI to configure the CheCluster Custom Resource" Section 2.6, " CheCluster Custom Resource fields reference" 2.1. Understanding the CheCluster Custom Resource A default deployment of CodeReady Workspaces consists of a CheCluster Custom Resource parameterized by the Red Hat CodeReady Workspaces Operator. The CheCluster Custom Resource is a Kubernetes object. You can configure it by editing the CheCluster Custom Resource YAML file. This file contains sections to configure each component: auth , database , server , storage . The Red Hat CodeReady Workspaces Operator translates the CheCluster Custom Resource into a config map usable by each component of the CodeReady Workspaces installation. The OpenShift platform applies the configuration to each component, and creates the necessary Pods. When OpenShift detects changes in the configuration of a component, it restarts the Pods accordingly. Example 2.1. Configuring the main properties of the CodeReady Workspaces server component Apply the CheCluster Custom Resource YAML file with suitable modifications in the server component section. The Operator generates the che ConfigMap . OpenShift detects changes in the ConfigMap and triggers a restart of the CodeReady Workspaces Pod. Additional resources Understanding Operators . Understanding Custom Resources . 2.2. Using the OpenShift web console to configure the CheCluster Custom Resource during installation To deploy CodeReady Workspaces with a suitable configuration, edit the CheCluster Custom Resource YAML file during the installation of CodeReady Workspaces. Otherwise, the CodeReady Workspaces deployment uses the default configuration parameterized by the Operator. The CheCluster Custom Resource YAML file contains sections to configure each component: auth , database , server , storage . Prerequisites Access to an administrator account on an instance of OpenShift. Procedure In the left panel, click Operators , then click Installed Operators . On the Installed Operators page, click the Red Hat CodeReady Workspaces name. On the Operator details page, in the Details tab, click the Create instance link in the Provided APIs section. This navigates you to the Create CheCluster page, which contains the configuration needed to create a CodeReady Workspaces instance, stored in the CheCluster Custom Resource. On the Create CheCluster page, click YAML view . In the YAML file, find or add the property to configure. Set the property to a suitable value: apiVersion: org.eclipse.che/v1 kind: CheCluster # ... spec: <component> : # ... <property-to-configure> : <value> Create the codeready-workspaces cluster by using the Create button at the end of the page. On the Operator details page, in the Red Hat CodeReady Workspaces Cluster tab, click the codeready-workspaces link. Navigate to the codeready-workspaces instance using the link displayed under the Red Hat CodeReady Workspaces URL output. Note The installation might take more than 5 minutes. 
The URL appears when the Red Hat CodeReady Workspaces installation finishes. Verification In the left panel, click Workloads , then click ConfigMaps . On the ConfigMaps page, click codeready . Navigate to the YAML tab. Verify that the YAML file contains the configured property and value. Additional resources Chapter 2, Configuring the CodeReady Workspaces installation . Section 4.1, "Advanced configuration options for the CodeReady Workspaces server component" . 2.3. Using the OpenShift web console to configure the CheCluster Custom Resource To configure a running instance of CodeReady Workspaces, edit the CheCluster Custom Resource YAML file. The CheCluster Custom Resource YAML file contains sections to configure each component: auth , database , server , storage . Prerequisites An instance of CodeReady Workspaces on OpenShift. Access to an administrator account on the instance of OpenShift and to the OpenShift web console. Procedure In the left panel, click Operators , then click Installed Operators . On the Installed Operators page, click Red Hat CodeReady Workspaces . Navigate to the Red Hat CodeReady Workspaces instance Specification tab and click codeready-workspaces . Navigate to the YAML tab. In the YAML file, find or add the property to configure. Set the property to a suitable value: apiVersion: org.eclipse.che/v1 kind: CheCluster # ... spec: <component> : # ... <property-to-configure> : <value> Click Save to apply the changes. Verification In the left panel, click Workloads , then click ConfigMaps . On the ConfigMaps page, click codeready . Navigate to the YAML tab. Verify that the YAML file contains the configured property and value. Additional resources Chapter 2, Configuring the CodeReady Workspaces installation . Section 4.1, "Advanced configuration options for the CodeReady Workspaces server component" . 2.4. Using crwctl to configure the CheCluster Custom Resource during installation To deploy CodeReady Workspaces with a suitable configuration, edit the CheCluster Custom Resource YAML file during the installation of CodeReady Workspaces. Otherwise, the CodeReady Workspaces deployment uses the default configuration parameterized by the Operator. Prerequisites Access to an administrator account on an instance of OpenShift. The crwctl tool is available. See Section 3.3.1, "Installing the crwctl CLI management tool" . Procedure Create a che-operator-cr-patch.yaml YAML file that contains the subset of the CheCluster Custom Resource to configure: spec: <component> : <property-to-configure> : <value> Deploy CodeReady Workspaces and apply the changes described in che-operator-cr-patch.yaml file: Verification Verify the value of the configured property: Additional resources Chapter 2, Configuring the CodeReady Workspaces installation . Section 4.1, "Advanced configuration options for the CodeReady Workspaces server component" . 2.5. Using the CLI to configure the CheCluster Custom Resource To configure a running instance of CodeReady Workspaces, edit the CheCluster Custom Resource YAML file. Prerequisites An instance of CodeReady Workspaces on OpenShift. Access to an administrator account on the instance of OpenShift. The oc tool is available. Procedure Edit the CheCluster Custom Resource on the cluster: Save and close the file to apply the changes. Verification Verify the value of the configured property: Additional resources Chapter 2, Configuring the CodeReady Workspaces installation . Section 4.1, "Advanced configuration options for the CodeReady Workspaces server component" . 
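As a concrete illustration of the configuration workflow in Section 2.4 and Section 2.5, a minimal che-operator-cr-patch.yaml that raises the Che server memory limit might look like the following sketch. The serverMemoryLimit field is taken from Table 2.1 below; the 2Gi value is only an example, not a recommendation.
spec:
  server:
    serverMemoryLimit: '2Gi'
You would then pass this file to the --che-operator-cr-patch-yaml flag, or set the same field with oc edit checluster, and verify the resulting value in the che ConfigMap as described in the verification steps above.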
2.6. CheCluster Custom Resource fields reference This section describes all fields available to customize the CheCluster Custom Resource. Example 2.2, "A minimal CheCluster Custom Resource example." Table 2.1, " CheCluster Custom Resource server settings, related to the CodeReady Workspaces server component." Table 2.2, " CheCluster Custom Resource database configuration settings related to the database used by CodeReady Workspaces." Table 2.3, "Custom Resource auth configuration settings related to authentication used by CodeReady Workspaces." Table 2.4, " CheCluster Custom Resource storage configuration settings related to persistent storage used by CodeReady Workspaces." Table 2.5, " CheCluster Custom Resource k8s configuration settings specific to CodeReady Workspaces installations on OpenShift." Table 2.6, " CheCluster Custom Resource metrics settings, related to the CodeReady Workspaces metrics collection used by CodeReady Workspaces." Table 2.7, " CheCluster Custom Resource status defines the observed state of CodeReady Workspaces installation" Example 2.2. A minimal CheCluster Custom Resource example. apiVersion: org.eclipse.che/v1 kind: CheCluster metadata: name: codeready-workspaces spec: auth: externalIdentityProvider: false database: externalDb: false server: selfSignedCert: false gitSelfSignedCert: false tlsSupport: true storage: pvcStrategy: 'common' pvcClaimSize: '1Gi' Table 2.1. CheCluster Custom Resource server settings, related to the CodeReady Workspaces server component. Property Description airGapContainerRegistryHostname Optional host name, or URL, to an alternate container registry to pull images from. This value overrides the container registry host name defined in all the default container images involved in a Che deployment. This is particularly useful to install Che in a restricted environment. airGapContainerRegistryOrganization Optional repository name of an alternate container registry to pull images from. This value overrides the container registry organization defined in all the default container images involved in a Che deployment. This is particularly useful to install CodeReady Workspaces in a restricted environment. allowUserDefinedWorkspaceNamespaces Deprecated. The value of this flag is ignored. Defines that a user is allowed to specify a Kubernetes namespace, or an OpenShift project, which differs from the default. It's NOT RECOMMENDED to set to true without OpenShift OAuth configured. The OpenShift infrastructure also uses this property. cheClusterRoles A comma-separated list of ClusterRoles that will be assigned to Che ServiceAccount. Each role must have app.kubernetes.io/part-of=che.eclipse.org label. Be aware that the Che Operator has to already have all permissions in these ClusterRoles to grant them. cheDebug Enables the debug mode for Che server. Defaults to false . cheFlavor Specifies a variation of the installation. The options are che for upstream Che installations, or codeready for CodeReady Workspaces installation. Override the default value only on necessary occasions. cheHost Public host name of the installed Che server. When value is omitted, the value it will be automatically set by the Operator. See the cheHostTLSSecret field. cheHostTLSSecret Name of a secret containing certificates to secure ingress or route for the custom host name of the installed Che server. The secret must have app.kubernetes.io/part-of=che.eclipse.org label. See the cheHost field. cheImage Overrides the container image used in Che deployment. 
This does NOT include the container image tag. Omit it or leave it empty to use the default container image provided by the Operator. cheImagePullPolicy Overrides the image pull policy used in Che deployment. Default value is Always for nightly , or latest images, and IfNotPresent in other cases. cheImageTag Overrides the tag of the container image used in Che deployment. Omit it or leave it empty to use the default image tag provided by the Operator. cheLogLevel Log level for the Che server: INFO or DEBUG . Defaults to INFO . cheServerIngress The Che server ingress custom settings. cheServerRoute The Che server route custom settings. cheWorkspaceClusterRole Custom cluster role bound to the user for the Che workspaces. The role must have app.kubernetes.io/part-of=che.eclipse.org label. The default roles are used when omitted or left blank. customCheProperties Map of additional environment variables that will be applied in the generated che ConfigMap to be used by the Che server, in addition to the values already generated from other fields of the CheCluster custom resource (CR). When customCheProperties contains a property that would be normally generated in che ConfigMap from other CR fields, the value defined in the customCheProperties is used instead. dashboardCpuLimit Overrides the CPU limit used in the dashboard deployment. In cores. (500m = .5 cores). Default to 500m. dashboardCpuRequest Overrides the CPU request used in the dashboard deployment. In cores. (500m = .5 cores). Default to 100m. dashboardImage Overrides the container image used in the dashboard deployment. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. dashboardImagePullPolicy Overrides the image pull policy used in the dashboard deployment. Default value is Always for nightly , or latest images, and IfNotPresent in other cases. dashboardIngress Dashboard ingress custom settings. dashboardMemoryLimit Overrides the memory limit used in the dashboard deployment. Defaults to 256Mi. dashboardMemoryRequest Overrides the memory request used in the dashboard deployment. Defaults to 16Mi. dashboardRoute Dashboard route custom settings. devfileRegistryCpuLimit Overrides the CPU limit used in the devfile registry deployment. In cores. (500m = .5 cores). Default to 500m. devfileRegistryCpuRequest Overrides the CPU request used in the devfile registry deployment. In cores. (500m = .5 cores). Default to 100m. devfileRegistryImage Overrides the container image used in the devfile registry deployment. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. devfileRegistryIngress The devfile registry ingress custom settings. devfileRegistryMemoryLimit Overrides the memory limit used in the devfile registry deployment. Defaults to 256Mi. devfileRegistryMemoryRequest Overrides the memory request used in the devfile registry deployment. Defaults to 16Mi. devfileRegistryPullPolicy Overrides the image pull policy used in the devfile registry deployment. Default value is Always for nightly , or latest images, and IfNotPresent in other cases. devfileRegistryRoute The devfile registry route custom settings. devfileRegistryUrl Deprecated in favor of externalDevfileRegistries fields. disableInternalClusterSVCNames Disable internal cluster SVC names usage to communicate between components to speed up the traffic and avoid proxy issues. 
externalDevfileRegistries External devfile registries, that serves sample, ready-to-use devfiles. Configure this in addition to a dedicated devfile registry (when externalDevfileRegistry is false ) or instead of it (when externalDevfileRegistry is true ) externalDevfileRegistry Instructs the Operator on whether to deploy a dedicated devfile registry server. By default, a dedicated devfile registry server is started. When externalDevfileRegistry is true , no such dedicated server will be started by the Operator and configure at least one devfile registry with externalDevfileRegistries field. externalPluginRegistry Instructs the Operator on whether to deploy a dedicated plugin registry server. By default, a dedicated plugin registry server is started. When externalPluginRegistry is true , no such dedicated server will be started by the Operator and you will have to manually set the pluginRegistryUrl field. gitSelfSignedCert When enabled, the certificate from che-git-self-signed-cert ConfigMap will be propagated to the Che components and provide particular configuration for Git. Note, the che-git-self-signed-cert ConfigMap must have app.kubernetes.io/part-of=che.eclipse.org label. nonProxyHosts List of hosts that will be reached directly, bypassing the proxy. Specify wild card domain use the following form .<DOMAIN> and | as delimiter, for example: localhost|.my.host.com|123.42.12.32 Only use when configuring a proxy is required. Operator respects OpenShift cluster wide proxy configuration and no additional configuration is required, but defining nonProxyHosts in a custom resource leads to merging non proxy hosts lists from the cluster proxy configuration and ones defined in the custom resources. See the doc https://docs.openshift.com/container-platform/4.4/networking/enable-cluster-wide-proxy.html . See also the proxyURL fields. pluginRegistryCpuLimit Overrides the CPU limit used in the plugin registry deployment. In cores. (500m = .5 cores). Default to 500m. pluginRegistryCpuRequest Overrides the CPU request used in the plugin registry deployment. In cores. (500m = .5 cores). Default to 100m. pluginRegistryImage Overrides the container image used in the plugin registry deployment. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. pluginRegistryIngress Plugin registry ingress custom settings. pluginRegistryMemoryLimit Overrides the memory limit used in the plugin registry deployment. Defaults to 256Mi. pluginRegistryMemoryRequest Overrides the memory request used in the plugin registry deployment. Defaults to 16Mi. pluginRegistryPullPolicy Overrides the image pull policy used in the plugin registry deployment. Default value is Always for nightly , or latest images, and IfNotPresent in other cases. pluginRegistryRoute Plugin registry route custom settings. pluginRegistryUrl Public URL of the plugin registry that serves sample ready-to-use devfiles. Set this ONLY when a use of an external devfile registry is needed. See the externalPluginRegistry field. By default, this will be automatically calculated by the Operator. proxyPassword Password of the proxy server. Only use when proxy configuration is required. See the proxyURL , proxyUser and proxySecret fields. proxyPort Port of the proxy server. Only use when configuring a proxy is required. See also the proxyURL and nonProxyHosts fields. proxySecret The secret that contains user and password for a proxy server. 
When the secret is defined, the proxyUser and proxyPassword are ignored. The secret must have app.kubernetes.io/part-of=che.eclipse.org label. proxyURL URL (protocol+host name) of the proxy server. This drives the appropriate changes in the JAVA_OPTS and https(s)_proxy variables in the Che server and workspaces containers. Only use when configuring a proxy is required. Operator respects OpenShift cluster wide proxy configuration and no additional configuration is required, but defining proxyUrl in a custom resource leads to overrides the cluster proxy configuration with fields proxyUrl , proxyPort , proxyUser and proxyPassword from the custom resource. See the doc https://docs.openshift.com/container-platform/4.4/networking/enable-cluster-wide-proxy.html . See also the proxyPort and nonProxyHosts fields. proxyUser User name of the proxy server. Only use when configuring a proxy is required. See also the proxyURL , proxyPassword and proxySecret fields. selfSignedCert Deprecated. The value of this flag is ignored. The Che Operator will automatically detect whether the router certificate is self-signed and propagate it to other components, such as the Che server. serverCpuLimit Overrides the CPU limit used in the Che server deployment In cores. (500m = .5 cores). Default to 1. serverCpuRequest Overrides the CPU request used in the Che server deployment In cores. (500m = .5 cores). Default to 100m. serverExposureStrategy Sets the server and workspaces exposure type. Possible values are multi-host , single-host , default-host . Defaults to multi-host , which creates a separate ingress, or OpenShift routes, for every required endpoint. single-host makes Che exposed on a single host name with workspaces exposed on subpaths. Read the docs to learn about the limitations of this approach. Also consult the singleHostExposureType property to further configure how the Operator and the Che server make that happen on Kubernetes. default-host exposes the Che server on the host of the cluster. Read the docs to learn about the limitations of this approach. serverMemoryLimit Overrides the memory limit used in the Che server deployment. Defaults to 1Gi. serverMemoryRequest Overrides the memory request used in the Che server deployment. Defaults to 512Mi. serverTrustStoreConfigMapName Name of the ConfigMap with public certificates to add to Java trust store of the Che server. This is often required when adding the OpenShift OAuth provider, which has HTTPS endpoint signed with self-signed cert. The Che server must be aware of its CA cert to be able to request it. This is disabled by default. The Config Map must have app.kubernetes.io/part-of=che.eclipse.org label. singleHostGatewayConfigMapLabels The labels that need to be present in the ConfigMaps representing the gateway configuration. singleHostGatewayConfigSidecarImage The image used for the gateway sidecar that provides configuration to the gateway. Omit it or leave it empty to use the default container image provided by the Operator. singleHostGatewayImage The image used for the gateway in the single host mode. Omit it or leave it empty to use the default container image provided by the Operator. tlsSupport Deprecated. Instructs the Operator to deploy Che in TLS mode. This is enabled by default. Disabling TLS sometimes cause malfunction of some Che components. useInternalClusterSVCNames Deprecated in favor of disableInternalClusterSVCNames . 
workspaceNamespaceDefault Defines Kubernetes default namespace in which user's workspaces are created for a case when a user does not override it. It's possible to use <username> , <userid> and <workspaceid> placeholders, such as che-workspace-<username>. In that case, a new namespace will be created for each user or workspace. workspacesDefaultPlugins Default plug-ins applied to Devworkspaces. Table 2.2. CheCluster Custom Resource database configuration settings related to the database used by CodeReady Workspaces. Property Description chePostgresContainerResources PostgreSQL container custom settings chePostgresDb PostgreSQL database name that the Che server uses to connect to the DB. Defaults to dbche . chePostgresHostName PostgreSQL Database host name that the Che server uses to connect to. Defaults is postgres . Override this value ONLY when using an external database. See field externalDb . In the default case it will be automatically set by the Operator. chePostgresPassword PostgreSQL password that the Che server uses to connect to the DB. When omitted or left blank, it will be set to an automatically generated value. chePostgresPort PostgreSQL Database port that the Che server uses to connect to. Defaults to 5432. Override this value ONLY when using an external database. See field externalDb . In the default case it will be automatically set by the Operator. chePostgresSecret The secret that contains PostgreSQL`user` and password that the Che server uses to connect to the DB. When the secret is defined, the chePostgresUser and chePostgresPassword are ignored. When the value is omitted or left blank, the one of following scenarios applies: 1. chePostgresUser and chePostgresPassword are defined, then they will be used to connect to the DB. 2. chePostgresUser or chePostgresPassword are not defined, then a new secret with the name che-postgres-secret will be created with default value of pgche for user and with an auto-generated value for password . The secret must have app.kubernetes.io/part-of=che.eclipse.org label. chePostgresUser PostgreSQL user that the Che server uses to connect to the DB. Defaults to pgche . externalDb Instructs the Operator on whether to deploy a dedicated database. By default, a dedicated PostgreSQL database is deployed as part of the Che installation. When externalDb is true , no dedicated database will be deployed by the Operator and you will need to provide connection details to the external DB you are about to use. See also all the fields starting with: chePostgres . postgresImage Overrides the container image used in the PostgreSQL database deployment. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. postgresImagePullPolicy Overrides the image pull policy used in the PostgreSQL database deployment. Default value is Always for nightly , or latest images, and IfNotPresent in other cases. postgresVersion Indicates a PostgreSQL version image to use. Allowed values are: 9.6 and 13.3 . Migrate your PostgreSQL database to switch from one version to another. pvcClaimSize Size of the persistent volume claim for database. Defaults to 1Gi . To update pvc storageclass that provisions it must support resize when CodeReady Workspaces has been already deployed. Table 2.3. Custom Resource auth configuration settings related to authentication used by CodeReady Workspaces. Property Description debug Debug internal identity provider. 
externalIdentityProvider Instructs the Operator on whether to deploy a dedicated Identity Provider (Keycloak or RH-SSO instance). By default, a dedicated Identity Provider server is deployed as part of the Che installation. When externalIdentityProvider is true , no dedicated identity provider will be deployed by the Operator and you will need to provide details about the external identity provider you are about to use. See also all the other fields starting with: identityProvider . gatewayAuthenticationSidecarImage Gateway sidecar responsible for authentication when NativeUserMode is enabled. See oauth2-proxy or openshift/oauth-proxy . gatewayAuthorizationSidecarImage Gateway sidecar responsible for authorization when NativeUserMode is enabled. See kube-rbac-proxy or openshift/kube-rbac-proxy . gatewayHeaderRewriteSidecarImage Deprecated. The value of this flag is ignored. Sidecar functionality is now implemented in the Traefik plugin. identityProviderAdminUserName Overrides the name of the Identity Provider administrator user. Defaults to admin . identityProviderClientId Name of an Identity provider, Keycloak or RH-SSO, client-id that is used for Che. Override this when an external Identity Provider is in use. See the externalIdentityProvider field. When omitted or left blank, it is set to the value of the flavour field suffixed with -public . identityProviderContainerResources Identity provider container custom settings. identityProviderImage Overrides the container image used in the Identity Provider, Keycloak or RH-SSO, deployment. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. identityProviderImagePullPolicy Overrides the image pull policy used in the Identity Provider, Keycloak or RH-SSO, deployment. Default value is Always for nightly , or latest images, and IfNotPresent in other cases. identityProviderIngress Ingress custom settings. identityProviderPassword Overrides the password of the Keycloak administrator user. Override this when an external Identity Provider is in use. See the externalIdentityProvider field. When omitted or left blank, it is set to an auto-generated password. identityProviderPostgresPassword Password for an Identity Provider, Keycloak or RH-SSO, to connect to the database. Override this when an external Identity Provider is in use. See the externalIdentityProvider field. When omitted or left blank, it is set to an auto-generated password. identityProviderPostgresSecret The secret that contains the password for the Identity Provider, Keycloak or RH-SSO, to connect to the database. When the secret is defined, the identityProviderPostgresPassword is ignored. When the value is omitted or left blank, one of the following scenarios applies: 1. identityProviderPostgresPassword is defined, then it will be used to connect to the database. 2. identityProviderPostgresPassword is not defined, then a new secret with the name che-identity-postgres-secret will be created with an auto-generated value for password . The secret must have app.kubernetes.io/part-of=che.eclipse.org label. identityProviderRealm Name of an Identity provider, Keycloak or RH-SSO, realm that is used for Che. Override this when an external Identity Provider is in use. See the externalIdentityProvider field. When omitted or left blank, it is set to the value of the flavour field. identityProviderRoute Route custom settings.
identityProviderSecret The secret that contains user and password for Identity Provider. When the secret is defined, the identityProviderAdminUserName and identityProviderPassword are ignored. When the value is omitted or left blank, the one of following scenarios applies: 1. identityProviderAdminUserName and identityProviderPassword are defined, then they will be used. 2. identityProviderAdminUserName or identityProviderPassword are not defined, then a new secret with the name che-identity-secret will be created with default value admin for user and with an auto-generated value for password . The secret must have app.kubernetes.io/part-of=che.eclipse.org label. identityProviderURL Public URL of the Identity Provider server (Keycloak / RH-SSO server). Set this ONLY when a use of an external Identity Provider is needed. See the externalIdentityProvider field. By default, this will be automatically calculated and set by the Operator. initialOpenShiftOAuthUser For operating with the OpenShift OAuth authentication, create a new user account since the kubeadmin can not be used. If the value is true, then a new OpenShift OAuth user will be created for the HTPasswd identity provider. If the value is false and the user has already been created, then it will be removed. If value is an empty, then do nothing. The user's credentials are stored in the openshift-oauth-user-credentials secret in 'openshift-config' namespace by Operator. Note that this solution is Openshift 4 platform-specific. nativeUserMode Enables native user mode. Currently works only on OpenShift and DevWorkspace engine. Native User mode uses OpenShift OAuth directly as identity provider, without Keycloak. oAuthClientName Name of the OpenShift OAuthClient resource used to setup identity federation on the OpenShift side. Auto-generated when left blank. See also the OpenShiftoAuth field. oAuthSecret Name of the secret set in the OpenShift OAuthClient resource used to setup identity federation on the OpenShift side. Auto-generated when left blank. See also the OAuthClientName field. openShiftoAuth Enables the integration of the identity provider (Keycloak / RHSSO) with OpenShift OAuth. Empty value on OpenShift by default. This will allow users to directly login with their OpenShift user through the OpenShift login, and have their workspaces created under personal OpenShift namespaces. WARNING: the kubeadmin user is NOT supported, and logging through it will NOT allow accessing the Che Dashboard. updateAdminPassword Forces the default admin Che user to update password on first login. Defaults to false . Table 2.4. CheCluster Custom Resource storage configuration settings related to persistent storage used by CodeReady Workspaces. Property Description postgresPVCStorageClassName Storage class for the Persistent Volume Claim dedicated to the PostgreSQL database. When omitted or left blank, a default storage class is used. preCreateSubPaths Instructs the Che server to start a special Pod to pre-create a sub-path in the Persistent Volumes. Defaults to false , however it will need to enable it according to the configuration of your Kubernetes cluster. pvcClaimSize Size of the persistent volume claim for workspaces. Defaults to 10Gi . pvcJobsImage Overrides the container image used to create sub-paths in the Persistent Volumes. This includes the image tag. Omit it or leave it empty to use the default container image provided by the Operator. See also the preCreateSubPaths field. pvcStrategy Persistent volume claim strategy for the Che server. 
This can be: common (all workspaces PVCs in one volume), per-workspace (one PVC per workspace for all declared volumes) and unique (one PVC per declared volume). Defaults to common . workspacePVCStorageClassName Storage class for the Persistent Volume Claims dedicated to the Che workspaces. When omitted or left blank, a default storage class is used. Table 2.5. CheCluster Custom Resource k8s configuration settings specific to CodeReady Workspaces installations on OpenShift. Property Description ingressClass Ingress class that defines which controller will manage ingresses. Defaults to nginx . NB: This drives the kubernetes.io/ingress.class annotation on Che-related ingresses. ingressDomain Global ingress domain for a Kubernetes cluster. This MUST be explicitly specified: there are no defaults. ingressStrategy Strategy for ingress creation. Options are: multi-host (host is explicitly provided in ingress), single-host (host is provided, path-based rules) and default-host (no host is provided, path-based rules). Defaults to multi-host . Deprecated in favor of serverExposureStrategy in the server section, which defines this regardless of the cluster type. When both are defined, the serverExposureStrategy option takes precedence. securityContextFsGroup The FSGroup that the Che Pod and workspace Pod containers run in. Default value is 1724 . securityContextRunAsUser ID of the user the Che Pod and workspace Pod containers run as. Default value is 1724 . singleHostExposureType When the serverExposureStrategy is set to single-host , the way the server, registries and workspaces are exposed is further configured by this property. The possible values are native , where the server and workspaces are exposed using ingresses on K8s, or gateway , where the server and workspaces are exposed using a custom gateway based on Traefik . All the endpoints, whether backed by the ingress or the gateway route, always point to the subpaths on the same domain. Defaults to native . tlsSecretName Name of a secret that will be used to set up ingress TLS termination when TLS is enabled. When the field is an empty string, the default cluster certificate will be used. See also the tlsSupport field. Table 2.6. CheCluster Custom Resource metrics settings, related to the CodeReady Workspaces metrics collection used by CodeReady Workspaces. Property Description enable Enables the metrics endpoint of the Che server. Defaults to true . Table 2.7. CheCluster Custom Resource status defines the observed state of CodeReady Workspaces installation Property Description cheClusterRunning Status of a Che installation. Can be Available , Unavailable , or Available, Rolling Update in Progress . cheURL Public URL to the Che server. cheVersion Current installed Che version. dbProvisioned Indicates whether a PostgreSQL instance has been correctly provisioned. devfileRegistryURL Public URL to the devfile registry. devworkspaceStatus The status of the Devworkspace subsystem. gitHubOAuthProvisioned Indicates whether an Identity Provider instance, Keycloak or RH-SSO, has been configured to integrate with the GitHub OAuth. helpLink A URL that points to where to find help related to the current Operator status. keycloakProvisioned Indicates whether an Identity Provider instance, Keycloak or RH-SSO, has been provisioned with realm, client and user. keycloakURL Public URL to the Identity Provider server, Keycloak or RH-SSO. message A human readable message indicating details about why the Pod is in this condition.
openShiftOAuthUserCredentialsSecret OpenShift OAuth secret in openshift-config namespace that contains user credentials for HTPasswd identity provider. openShiftoAuthProvisioned Indicates whether an Identity Provider instance, Keycloak or RH-SSO, has been configured to integrate with the OpenShift OAuth. pluginRegistryURL Public URL to the plugin registry. reason A brief CamelCase message indicating details about why the Pod is in this state.
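To make the field reference more concrete, the following sketch sets one field from the server table and one from the storage table; the values are illustrative only and mirror the examples already given in the tables above.
spec:
  server:
    workspaceNamespaceDefault: 'che-workspace-<username>'
  storage:
    pvcClaimSize: '10Gi'
Any other field listed in the tables can be set in the same way, under the spec section that matches its table.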
[ "apiVersion: org.eclipse.che/v1 kind: CheCluster spec: <component> : # <property-to-configure> : <value>", "apiVersion: org.eclipse.che/v1 kind: CheCluster spec: <component> : # <property-to-configure> : <value>", "spec: <component> : <property-to-configure> : <value>", "{prod-cli} server:deploy --che-operator-cr-patch-yaml=che-operator-cr-patch.yaml --platform <chosen-platform>", "oc get configmap che -o jsonpath='{.data. <configured-property> }' -n openshift-workspaces", "oc edit checluster/eclipse-che -n openshift-workspaces", "oc get configmap che -o jsonpath='{.data. <configured-property> }' -n openshift-workspaces", "apiVersion: org.eclipse.che/v1 kind: CheCluster metadata: name: codeready-workspaces spec: auth: externalIdentityProvider: false database: externalDb: false server: selfSignedCert: false gitSelfSignedCert: false tlsSupport: true storage: pvcStrategy: 'common' pvcClaimSize: '1Gi'" ]
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/installation_guide/configuring-the-codeready-workspaces-installation_crw
Chapter 7. Creating an instance with a VDPA interface
Chapter 7. Creating an instance with a VDPA interface You can create an instance with a VDPA interface by requesting a port for your instance that has a vNIC type of VDPA. Limitations You cannot suspend or live migrate an instance that has a VDPA interface. You cannot detach the VDPA interface from an instance and then reattach it to the instance. Procedure Create a network that is mapped to the physical network: Create a subnet for the network: Create a port from a VDPA-enabled NIC: Create an instance, specifying the NIC port to use: An "ACTIVE" status in the output indicates that you have successfully created the instance on a host that can provide the requested VDPA interface.
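If you want to double-check the result afterwards, you can query the instance directly; the following is a sketch that assumes the instance name used in the example above and prints only the status column:
openstack server show vdpa_instance -c status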
[ "openstack network create vdpa_network --provider-physical-network tenant --provider-network-type vlan --provider-segment 1337", "openstack subnet create vdpa_subnet --network vdpa_net1 --subnet-range 192.0.2.0/24 --dhcp", "openstack port create vdpa_direct_port --network vdpa_network --vnic-type vdpa \\", "openstack server create vdpa_instance --flavor cirros256 --image cirros-0.3.5-x86_64-disk --nic port-id=vdpa_direct_port --wait" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_instances/proc_creating-an-instance-with-a-vdpa-interface_osp
Using alt-java
Using alt-java Red Hat build of OpenJDK 21 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/using_alt-java/index
22.4. Restricting Access to Services and Hosts Based on How Users Authenticate
22.4. Restricting Access to Services and Hosts Based on How Users Authenticate The authentication mechanisms supported by IdM vary in their authentication strength. For example, authentication using a one-time password (OTP) in combination with a standard password is considered safer than authentication using a standard password only. This section shows how to limit access to services and hosts based on how the user authenticates. For example, you can configure: services critical to security, such as VPN, to require a strong authentication method noncritical services, such as local logins, to allow authentication using a weaker, but more convenient authentication method Figure 22.8. Example of Authenticating Using Different Methods Authentication Indicators Access to services and hosts is defined by authentication indicators : Indicators included in a service or host entry define what authentication methods the user can use to access that service or host. Indicators included in the user's ticket-granting ticket (TGT) show what authentication method was used to obtain the ticket. If the indicator in the principal does not match the indicator in the TGT, the user is denied access. 22.4.1. Configuring a Host or a Service to Require a Specific Authentication Method To configure a host or a service using: the web UI, see the section called "Web UI: Configuring a Host or a Service to Require a Specific Authentication Method" the command line, see the section called "Command Line: Configuring a Host or a Service to Require a Specific Authentication Method" Web UI: Configuring a Host or a Service to Require a Specific Authentication Method Select Identity Hosts or Identity Services . Click the name of the required host or service. Under Authentication indicators , select the required authentication method. For example, selecting OTP ensures that only users who used a valid OTP code with their password will be allowed to access the host or service. If you select both OTP and RADIUS , either OTP or RADIUS will be sufficient to allow access. Click Save at the top of the page. Command Line: Configuring a Host or a Service to Require a Specific Authentication Method Optional. Use the ipa host-find or ipa service-find commands to identify the host or service. Use the ipa host-mod or the ipa service-mod command with the --auth-ind option to add the required authentication indicator. For a list of the values accepted by --auth-ind , see the output of the ipa host-mod --help or ipa service-mod --help commands. For example, --auth-ind=otp ensures that only users who used a valid OTP code with their password will be allowed to access the host or service: If you add indicators for both OTP and RADIUS, either OTP or RADIUS will be sufficient to allow access. 22.4.2. Changing the Kerberos Authentication Indicator By default, Identity Management (IdM) uses the pkinit indicator for certificate mapping for Kerberos authentication using the PKINIT pre-authentication plug-in. If you need to change the authentication indicator that the Key Distribution Center (KDC) inserts into a ticket-granting ticket (TGT), modify the configuration on all IdM masters that provide PKINIT functionality as follows: In the /var/kerberos/krb5kdc/kdc.conf file, add the pkinit_indicator parameter to the [kdcdefaults] section: You can set the indicator to one of the following values: otp for two-factor authentication radius for RADIUS-based authentication pkinit for smart card authentication Restart the krb5kdc service:
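In addition to the host example shown in the commands below, the equivalent change for a Kerberized service uses the ipa service-mod command with the same --auth-ind option; the service principal in this sketch is only an example:
ipa service-mod HTTP/server.example.com --auth-ind=otp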
[ "ipa host-mod server.example.com --auth-ind=otp --------------------------------------------------------- Modified host \"server.example.com\" --------------------------------------------------------- Host name: server.example.com Authentication Indicators: otp", "pkinit_indicator = indicator", "systemctl restart krb5kdc" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/auth-indicators
Chapter 13. Daemon Images
Chapter 13. Daemon Images 13.1. Apache HTTP Server 13.1.1. Description The rhscl/httpd-24-rhel7 image provides an Apache HTTP 2.4 Server. The image can be used as a base image for other applications based on the Apache HTTP web server. 13.1.2. Access To pull the rhscl/httpd-24-rhel7 image, run the following command as root : The rhscl/httpd-24-rhel7 image supports using the S2I tool. 13.1.3. Configuration and Usage The Apache HTTP Server container image supports the following configuration variables, which can be set by using the -e option with the podman run command: Variable Name Description HTTPD_LOG_TO_VOLUME By default, httpd logs into standard output, so the logs are accessible by using the podman logs command. When HTTPD_LOG_TO_VOLUME is set, httpd logs into /var/log/httpd24 , which can be mounted to the host system using container volumes. This option is allowed only when the container is run as UID 0 . HTTPD_MPM This variable can be set to change the default Multi-Processing Module (MPM) from the package default MPM. If you want to run the image and mount the log files into /wwwlogs on the host as a container volume, execute the following command: To run an image using the event MPM (rather than the default prefork ), execute the following command: You can also set the following mount points by passing the -v /host:/container option to the podman run command: Volume Mount Point Description /var/www Apache HTTP Server data directory /var/log/httpd24 Apache HTTP Server log directory (available only when running as root) When mounting a directory from the host into the container, ensure that the mounted directory has the appropriate permissions and that the owner and group of the directory match the user UID or name that is running inside the container. Note The rhscl/httpd-24-rhel7 container image now uses 1001 as the default UID to work correctly within the source-to-image strategy in OpenShift. Additionally, the container image listens on port 8080 by default. Previously, the rhscl/httpd-24-rhel7 container image listened on port 80 by default and ran as UID 0 . To run the rhscl/httpd-24-rhel7 container image as UID 0 , specify the -u 0 option of the podman run command: 13.2. nginx 13.2.1. Description The rhscl/nginx-120-rhel7 image provides an nginx 1.20 server and a reverse proxy server; the image can be used as a base image for other applications based on the nginx 1.20 web server. The rhscl/nginx-118-rhel7 image provides nginx 1.18. 13.2.2. Access To pull the rhscl/nginx-120-rhel7 image, run the following command as root : To pull the rhscl/nginx-118-rhel7 image, run the following command as root : 13.2.3. Configuration The nginx container images support the following configuration variable, which can be set by using the -e option with the podman run command: Variable Name Description NGINX_LOG_TO_VOLUME By default, nginx logs into standard output, so the logs are accessible by using the podman logs command. When NGINX_LOG_TO_VOLUME is set, nginx logs into /var/opt/rh/rh-nginx120/log/nginx/ or /var/opt/rh/rh-nginx118/log/nginx/ , which can be mounted to the host system using container volumes. The rhscl/nginx-120-rhel7 and rhscl/nginx-118-rhel7 images support using the S2I tool. 13.3. Varnish Cache 13.3.1. Description The rhscl/varnish-6-rhel7 image provides Varnish Cache 6.0, an HTTP reverse proxy. 13.3.2. Access To pull the rhscl/varnish-6-rhel7 image, run the following command as root : 13.3.3. Configuration No further configuration is required.
The Red Hat Software Collections Varnish Cache images support using the S2I tool. Note that the default.vcl configuration file in the directory accessed by S2I needs to be in the VCL format.
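Returning to the nginx image in Section 13.2, mounting the nginx log directory from the host might look like the following sketch, by analogy with the httpd logging example above; the host path /nginxlogs is arbitrary, and depending on the UID the image runs as you may need additional options similar to the httpd example:
podman run -d -e NGINX_LOG_TO_VOLUME=1 --name nginx -v /nginxlogs:/var/opt/rh/rh-nginx120/log/nginx:Z rhscl/nginx-120-rhel7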
[ "podman pull registry.redhat.io/rhscl/httpd-24-rhel7", "podman run -d -u 0 -e HTTPD_LOG_TO_VOLUME=1 --name httpd -v /wwwlogs:/var/log/httpd24:Z rhscl/httpd-24-rhel7", "podman run -d -e HTTPD_MPM=event --name httpd rhscl/httpd-24-rhel7", "run -u 0 rhscl/httpd-24-rhel7", "podman pull registry.redhat.io/rhscl/nginx-120-rhel7", "podman pull registry.redhat.io/rhscl/nginx-118-rhel7", "podman pull registry.redhat.io/rhscl/varnish-6-rhel7" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/using_red_hat_software_collections_container_images/daemon-images
Chapter 9. Installing Using Anaconda
Chapter 9. Installing Using Anaconda This chapter describes an installation using the graphical user interface of anaconda . 9.1. The Text Mode Installation Program User Interface Important Installing in text mode does not prevent you from using a graphical interface on your system once it is installed. Apart from the graphical installer, anaconda also includes a text-based installer. If one of the following situations occurs, the installation program uses text mode: The installation system fails to identify the display hardware on your computer You choose the text mode installation from the boot menu While text mode installations are not explicitly documented, those using the text mode installation program can easily follow the GUI installation instructions. However, because text mode presents you with a simpler, more streamlined installation process, certain options that are available in graphical mode are not also available in text mode. These differences are noted in the description of the installation process in this guide, and include: configuring advanced storage methods such as LVM, RAID, FCoE, zFCP, and iSCSI. customizing the partition layout customizing the bootloader layout selecting packages during installation configuring the installed system with firstboot If you choose to install Red Hat Enterprise Linux in text mode, you can still configure your system to use a graphical interface after installation. Refer to Section 35.3, "Switching to a Graphical Login" for instructions. To configure options not available in text mode, consider using a boot option. For example, the linux ip option can be used to configure network settings. Refer to Section 28.1, "Configuring the Installation System at the Boot Menu" for instructions.
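For example, a boot line that forces text mode and sets a static IP address might look like the following sketch; the address values are placeholders only:
linux text ip=192.168.1.10 netmask=255.255.255.0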
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-guimode-x86
Chapter 2. Changes
Chapter 2. Changes Review these changes carefully before upgrading. 2.1. RH-SSO 7.4 The following changes have occurred from Red Hat Single Sign-On 7.3 to Red Hat Single Sign-On 7.4. 2.1.1. Upgrade to EAP 7.3 The Red Hat Single Sign-On server was upgraded to use EAP 7.3 as the underlying container. This change does not directly involve any specific Red Hat Single Sign-On server functionality, but a few changes relate to the migration. 2.1.1.1. Dependency updates The dependencies were updated to the versions used by the EAP 7.3 server. For example, the Infinispan component version is now 9.3.1.Final. 2.1.1.2. Configuration changes There are a few configuration changes in the standalone(-ha).xml and domain.xml files. Follow the Upgrading the Red Hat Single Sign-On server section to handle the migration of configuration files automatically. 2.1.1.3. Cross-Datacenter replication changes You will need to upgrade RHDG to version 7.3. The older version may still work, but it is not tested, so it is not guaranteed to work. 2.1.2. Authentication flows changes We did some refactoring and improvements related to the authentication flows, which require attention during migration. 2.1.2.1. REQUIRED and ALTERNATIVE executions not supported at same authentication flow Previously, it was possible to have REQUIRED and ALTERNATIVE executions in the same authentication flow at the same level. There were some issues with this approach and we did the refactoring in the Authentication SPI, which means that this is no longer valid. If ALTERNATIVE and REQUIRED executions are configured at the same level, the ALTERNATIVE executions are considered disabled. So when migrating to this version, your existing authentication flows will be migrated but retain the behavior of the previous version. If an authentication flow contains ALTERNATIVE executions at the same level as REQUIRED executions, the ALTERNATIVE executions are added to the separate REQUIRED subflow. This strategy should ensure the same or similar behavior of each authentication flow as in the previous version. However, you may review the configuration of your authentication flow and double check that it works as expected. This recommendation applies in particular to customized authentication flows with custom authenticator implementations. 2.1.2.2. OPTIONAL execution requirement removed Regarding migration, the most important change is removing support for the OPTIONAL requirement from authentication executions and replacing it with the CONDITIONAL requirement, which allows more flexibility. OPTIONAL authenticators configured in the previous version are replaced with the CONDITIONAL subflows. These subflows have the Condition - User Configured condition configured as first execution, and the previously OPTIONAL authenticator (for example OTP Form) as second execution. For the user, the behavior during authentication matches the behavior of the previous version. 2.1.2.3. SPI Changes Some changes exist in the Java Authentication SPI and Credential Provider SPI. The interface Authenticator is not changed, but you may be affected if you develop advanced authenticators that introduce some new credential types (subclasses of CredentialModel). Changes exist on the CredentialProvider interface, and some new interfaces, such as CredentialValidator, were introduced. Also, you may be affected if your authenticator supported the OPTIONAL execution requirement. It is recommended that you double check the latest authentication examples in the server development guide for more details. 2.1.2.4. 
Freemarker template changes Changes exist in the freemarker templates. You may be affected if you have your own theme with custom freemarker templates for login forms or some account forms, especially for the forms related to OTP. We recommend that you review the changes in the Freemarker templates in this version and align your templates accordingly. 2.1.3. Duplicated top level groups This release fixes a problem that could create duplicated top level groups in the realm. Nevertheless, the existence of duplicated groups makes the upgrade process fail. The Red Hat Single Sign-On server can be affected by this issue if it is using an H2, MariaDB, MySQL, or PostgreSQL database. Before launching the upgrade, check if the server contains duplicated top level groups. For example, the following SQL query can be executed at database level to list them: Only one top level group can exist in each realm with the same name. Duplicates should be reviewed and deleted before the upgrade. The error in the upgrade contains the message Change Set META-INF/jpa-changelog-9.0.1.xml::9.0.1- KEYCLOAK-12579-add-not-null-constraint::keycloak failed. 2.1.4. User credentials changes We added more flexibility around storing user credentials. Among other things, every user can have multiple credentials of the same type, such as multiple OTP credentials. Some changes exist in the database schema in relation to that; however, the credentials from the previous version are updated to the new format. Users can still log in with the passwords or OTP credentials defined in the previous version. 2.1.5. New optional client scope We have added a microprofile-jwt optional client scope to handle the claims defined in the MicroProfile/JWT Auth Specification. This new client scope defines protocol mappers to set the username of the authenticated user to the upn claim and to set the realm roles to the groups claim. 2.1.6. Improved handling of user locale A number of improvements have been made to how the locale for the login page is selected, as well as when the locale is updated for a user. See the Server Administration Guide for more details. 2.1.7. Legacy promise in JavaScript adapter You no longer need to set promiseType in the JavaScript adapter; both the native and the legacy promise APIs are available at the same time. It is recommended to update applications to use the native promise API (then and catch) as soon as possible, as the legacy API (success and error) will be removed at some point. 2.1.8. Deploying Scripts to the Server Until now, administrators were allowed to upload scripts to the server through the Red Hat Single Sign-On Admin Console and the RESTful Admin API. This capability is now disabled. Users should deploy scripts directly to the server. For more details, review the JavaScript Providers . 2.1.9. Client Credentials in the JavaScript adapter In the previous releases, developers were allowed to provide client credentials to the JavaScript adapter. From now on, this capability is removed, because client-side applications cannot safely store secrets. Ability to propagate prompt=none to default IDP We have added a switch in the OIDC identity provider configuration named Accepts prompt=none forward from client to identify IDPs that are able to handle forwarded requests that include the prompt=none query parameter. Until now, when receiving an auth request with prompt=none, a realm would return a login_required error if the user is not authenticated in the realm, without checking whether the user has been authenticated by an IDP. 
From now on, if a default IDP can be determined for the auth request (either by the use of the kc_idp_hint query param or by setting up a default IDP for the realm) and if the Accepts prompt=none forward from client switch has been enabled for the IDP, the auth request is forwarded to the IDP to check if the user has been authenticated there. It is important to note that this switch is only taken into account if a default IDP is specified, in which case we know where to forward the auth request without having to prompt the user to select an IDP. If a default IDP cannot be determined, we cannot assume which one will be used to fulfill the auth request so the request forwarding is not performed. 2.1.10. New Default Hostname provider The request and fixed hostname providers have been replaced with a new default hostname provider. The request and fixed hostname providers are now deprecated and we recommend that you switch to the default hostname provider as soon as possible. 2.1.11. Deprecated or removed features Certain features have a change in status. 2.1.11.1. Deprecated methods in token representation Java classes In the year 2038, an int is no longer able to hold the value of seconds since 1970, as such we are working on updating these to long values. In token representation there is a further issue. An int will by default result in 0 in the JSON representation, while it should not be included. See the JavaDocs Documentation for further details on exact methods that have been deprecated and replacement methods. 2.1.11.2. Uploading scripts Upload of scripts through admin rest endpoints/console is deprecated. It will be removed at a future release. 2.1.12. Authorization Services Drools Policy The Authorization Services Drools Policy has been removed. 2.1.13. Changes of default configuration values Reduced default HTTP socket read timeout The default read timeout for the HTTP and HTTPS listeners has been reduced from 120 to 30 seconds. Increased default JDBC connection pool size The default connection pool size of the default H2 JDBC datasource has been increased from 20 to 100 connections. It is recommended to set a sufficient pool size for the production datasource as well. 2.1.13.1. Upgrading Configuration The configuration changes affect standalone(-ha).xml and domain.xml files. Follow the Upgrading the Red Hat Single Sign-On server section to handle the migration of configuration files automatically. 2.1.13.2. Client Credentials Grant without refresh token by default From this Red Hat Single Sign-On version, the OAuth2 Client Credentials Grant endpoint does not issue refresh tokens by default. This behavior is aligned with the OAuth2 specification. As a side-effect of this change, there is no user session created on the Red Hat Single Sign-On server side after successful Client Credentials authentication, which results in improved performance and memory consumption. Clients that use Client Credentials Grant are encouraged to stop using refresh tokens and instead always authenticate at every request with grant_type=client_credentials instead of using refresh_token as grant type. In relation to this situation, Red Hat Single Sign-On has support for revocation of access tokens in the OAuth2 Revocation Endpoint, hence clients are allowed to revoke individual access tokens if needed. For the backwards compatibility, there is a possibility to stick to the behavior of old versions. 
When this option is used, the refresh token will still be issued after a successful authentication with the Client Credentials Grant and the user session will also be created. This capability can be enabled for the particular client in the Red Hat Single Sign-On admin console, in client details in the section with OpenID Connect Compatibility Modes with the switch Use Refresh Tokens For Client Credentials Grant . 2.1.13.3. Valid Request URIs If you use the OpenID Connect parameter request_uri , your client must have Valid Request URIs configured. This parameter can be configured through the admin console on the client details page or through the admin REST API or client registration API. Valid Request URIs need to contain the list of Request URI values that are permitted for the particular client. This is to avoid SSRF attacks. It is possible to use wildcards or relative paths, similarly to the Valid Redirect URIs option; however, for security purposes, we typically recommend using as specific a value as possible. 2.2. RH-SSO 7.3 The following changes have occurred from RH-SSO 7.2 to RH-SSO 7.3. 2.2.1. Changes to Authorization Services We added support for UMA 2.0. This version of the UMA specification introduced some important changes on how permissions are obtained from the server. Here are the main changes introduced by UMA 2.0 support. See Authorization Services Guide for details. Authorization API was removed Prior to UMA 2.0 (UMA 1.0), client applications were using the Authorization API to obtain permissions from the server in the format of an RPT. The new version of the UMA specification has removed the Authorization API, which was also removed from Red Hat Single Sign-On. In UMA 2.0, RPTs can now be obtained from the token endpoint by using a specific grant type. See Authorization Services Guide for details. Entitlement API was removed With the introduction of UMA 2.0, we decided to leverage the token endpoint and UMA grant type to allow obtaining RPTs from Red Hat Single Sign-On and avoid having different APIs. The functionality provided by the Entitlement API was kept the same, and it is still possible to obtain permissions for a set of one or more resources and scopes or all permissions from the server in case no resource or scope is provided. See Authorization Services Guide for details. Changes to UMA Discovery Endpoint The UMA Discovery document changed; see Authorization Services Guide for details. Changes to Red Hat Single Sign-On Authorization JavaScript Adapter The Red Hat Single Sign-On Authorization JavaScript Adapter (keycloak-authz.js) changed in order to comply with the changes introduced by UMA 2.0 while keeping the same behavior as before. The main change is on how you invoke both authorization and entitlement methods, which now expect a specific object type representing an authorization request. This new object type provides more flexibility on how permissions can be obtained from the server by supporting the different parameters supported by the UMA grant type. See Authorization Services Guide for details. Changes to Red Hat Single Sign-On Authorization Client Java API When upgrading to the new version of the Red Hat Single Sign-On Authorization Client Java API, you'll notice that some representation classes were moved to a different package in org.keycloak:keycloak-core . 2.2.2. Client Templates changed to Client Scopes We added support for Client Scopes, which requires some attention during migration. 
Client Templates changed to Client Scopes Client Templates were changed to Client Scopes. If you had any Client Templates, their protocol mappers and role scope mappings will be preserved. Spaces replaced in the names Client templates with the space character in the name were renamed by replacing spaces with an underscore, because spaces are not allowed in the name of client scopes. For example, a client template my template will be changed to client scope my_template . Linking Client Scopes to Clients For clients which had the client template, the corresponding client scope is now added as Default Client Scope to the client. So protocol mappers and role scope mappings will be preserved on the client. Realm Default Client Scopes not linked with existing clients During the migration, the list of built-in client scopes is added to each realm as well as list of Realm Default Client Scopes . However, existing clients are NOT upgraded and new client scopes are NOT automatically added to them. Also all the protocol mappers and role scope mappings are kept on existing clients. In the new version, when you create a new client, it automatically has Realm Default Client Scopes attached to it and it does not have any protocol mappers attached to it. We did not change existing clients during migration as it would be impossible to properly detect customizations, which you will have for protocol mappers of the clients, for example. If you want to update existing clients (remove protocol mappers from them and link them with client scopes), you will need to do it manually. Consents need to be confirmed again The client scopes change required the refactoring of consents. Consents now point to client scopes, not to roles or protocol mappers. Because of this change, the previously confirmed persistent consents by users are not valid anymore and users need to confirm the consent page again after the migration. Some configuration switches removed The switch Scope Param Required was removed from Role Detail. The switches Consent Required and Consent Text were removed from the Protocol Mapper details. Those switches were replaced by the Client Scope feature. 2.2.3. New default client scopes We have added new realm default client scopes roles and web-origins . These client scopes contain protocol mappers to add the roles of the user and allowed web origins to the token. During migration, these client scopes should be automatically added to all the OpenID Connect clients as default client scopes. Hence no setup should be required after database migration is finished. 2.2.3.1. Protocol mapper SPI addition Related to this, there is a small addition in the (unsupported) Protocol Mappers SPI. You can be affected only if you implemented a custom ProtocolMapper. There is a new getPriority() method on the ProtocolMapper interface. The method has the default implementation set to return 0. If your protocol mapper implementation relies on the roles in the access token realmAccess or resourceAccess properties, you may need to increase the priority of your mapper. 2.2.3.2. Audience resolving Audiences of all the clients, for which authenticated user has at least one client role in the token, are automatically added to the aud claim in the access token now. On the other hand, an access token may not automatically contain the audience of the frontend client, for which it was issued. Read the Server Administration Guide for more details. 2.2.4. 
Upgrade to EAP 7.2 The Red Hat Single Sign-On server was upgraded to use EAP 7.2 as the underlying container. This does not directly involve any specific Red Hat Single Sign-On server functionality, but there are a few changes related to the migration that are worth mentioning. Dependency updates The dependencies were updated to the versions used by the EAP 7.2 server. For example, Infinispan is now 9.3.1.Final. Configuration changes There are a few configuration changes in the standalone(-ha).xml and domain.xml files. You should follow the Section 3.1.2, "Upgrading Red Hat Single Sign-On server" section to handle the migration of configuration files automatically. Cross-Datacenter Replication changes You will need to upgrade the RHDG server to version 7.3. The older version may still work, but it is not guaranteed as it is no longer tested. You need to add the protocolVersion property with the value 2.6 to the configuration of the remote-store element in the Red Hat Single Sign-On configuration. This is required to downgrade the version of the HotRod protocol to be compatible with the version used by RHDG 7.3. 2.2.5. Hostname configuration In previous versions, it was recommended to use a filter to specify permitted hostnames. It is now possible to set a fixed hostname, which makes it easier to ensure the valid hostname is used and also allows internal applications to invoke Red Hat Single Sign-On through an alternative URL, for example an internal IP address. It is recommended that you switch to this approach in production. 2.2.6. JavaScript Adapter Promise To use the native JavaScript promise with the JavaScript adapter, it is now required to set promiseType to native in the init options. In the past, if the native promise was available, a wrapper was returned that provided both the legacy Keycloak promise and the native promise. This was causing issues as the error handler was not always set prior to the native error event, which resulted in an Uncaught (in promise) error. 2.2.7. Microsoft Identity Provider updated to use the Microsoft Graph API The Microsoft Identity Provider implementation in Red Hat Single Sign-On used to rely on the Live SDK endpoints for authorization and obtaining the user profile. From November 2018 onwards, Microsoft is removing support for the Live SDK API in favor of the new Microsoft Graph API. The Red Hat Single Sign-On identity provider has been updated to use the new endpoints, so if this integration is in use, make sure you upgrade to the latest Red Hat Single Sign-On version. Legacy client applications registered under "Live SDK applications" won't work with the Microsoft Graph endpoints due to changes in the id format of the applications. If you run into an error saying that the application identifier was not found in the directory, you will have to register the client application again in the Microsoft Application Registration portal to obtain a new application id. 2.2.8. Google Identity Provider updated to use Google Sign-in authentication system The Google Identity Provider implementation in Red Hat Single Sign-On used to rely on the Google+ API endpoints for authorization and obtaining the user profile. From March 2019 onwards, Google is removing support for the Google+ API in favor of the new Google Sign-in authentication system. The Red Hat Single Sign-On identity provider has been updated to use the new endpoints, so if this integration is in use, make sure you upgrade to the latest Red Hat Single Sign-On version. 
If you run into an error saying that the application identifier was not found in the directory, you will have to register the client application again in the Google API Console portal to obtain a new application id and secret. It is possible that you will need to adjust custom mappers for non-standard claims that were provided by the Google+ user information endpoint and are provided under a different name by the Google Sign-in API. Please consult the Google documentation for the most up-to-date information on available claims. 2.2.9. LinkedIn Social Broker Updated to Version 2 of LinkedIn APIs According to LinkedIn, all developers need to migrate to version 2.0 of their APIs and OAuth 2.0. As such, we have updated our LinkedIn Social Broker. Existing deployments using this broker may start experiencing errors when fetching the user's profile using version 2 of LinkedIn APIs. This error may be related to the lack of permissions granted to the client application used to configure the broker, which may not be authorized to access the Profile API or request specific OAuth2 scopes during the authentication process. Even for newly created LinkedIn client applications, you need to make sure that the client is able to request the r_liteprofile and r_emailaddress OAuth2 scopes, at least, as well as that the client application can fetch the current member's profile from the https://api.linkedin.com/v2/me endpoint. Due to these privacy restrictions imposed by LinkedIn in regard to access to member information and the limited set of claims returned by the current member's Profile API, the LinkedIn Social Broker is now using the member's email address as the default username. That means that the r_emailaddress scope is always set when sending authorization requests during the authentication. 2.3. RH-SSO 7.2 The following changes have occurred from RH-SSO 7.1 to RH-SSO 7.2. 2.3.1. New Password Hashing algorithms We have added two new password hashing algorithms (pbkdf2-sha256 and pbkdf2-sha512). New realms will use the pbkdf2-sha256 hashing algorithm with 27500 hashing iterations. Since pbkdf2-sha256 is slightly faster than pbkdf2, the number of iterations was increased from 20000 to 27500. Existing realms are upgraded if the password policy contains the default value for the hashing algorithm (not specified) and iteration (20000). If you have changed the hashing iterations, you need to manually change to pbkdf2-sha256 if you'd like to use the more secure hashing algorithm. 2.3.2. ID Token requires scope=openid In RH-SSO 7.0, the ID Token was returned regardless of whether the scope=openid query parameter was present in the authorization request. This is incorrect according to the OpenID Connect specification. In RH-SSO 7.1, we added this query parameter to adapters, but left the old behavior to accommodate migration. In RH-SSO 7.2, this behavior has changed and the scope=openid query parameter is now required to mark the request as an OpenID Connect request. If this query parameter is omitted, the ID Token will not be generated. 2.3.3. Microsoft SQL Server requires extra dependency Microsoft JDBC Driver 6.0 requires an additional dependency added to the JDBC driver module. If you observe a NoClassDefFoundError error when using Microsoft SQL Server, please add the following dependency to your JDBC driver module.xml file: <module name="javax.xml.bind.api"/> 2.3.4. 
Added session_state parameter to OpenID Connect Authentication Response The OpenID Connect Session Management specification requires that the parameter session_state is present in the OpenID Connect Authentication Response. In RH-SSO 7.1, we did not have this parameter, but now Red Hat Single Sign-On adds this parameter by default, as required by the specification. However, some OpenID Connect / OAuth2 adapters, and especially older Red Hat Single Sign-On adapters (such as RH-SSO 7.1 and older), may have issues with this new parameter. For example, the parameter will always be present in the browser URL after successful authentication to the client application. If you use RH-SSO 7.1 or a legacy OAuth2 / OpenID Connect adapter, it may be useful to disable adding the session_state parameter to the authentication response. This can be done for the particular client in the Red Hat Single Sign-On admin console, in client details in the section with OpenID Connect Compatibility Modes , described in Section 4.1, "Compatibility with older adapters" . There is the Exclude Session State From Authentication Response switch, which can be turned on to prevent adding the session_state parameter to the Authentication Response. 2.3.5. Microsoft Identity Provider updated to use the Microsoft Graph API The Microsoft Identity Provider implementation in Red Hat Single Sign-On up to version 7.2.4 relies on the Live SDK endpoints for authorization and obtaining the user profile. From November 2018 onwards, Microsoft is removing support for the Live SDK API in favor of the new Microsoft Graph API. The Red Hat Single Sign-On identity provider has been updated to use the new endpoints, so if this integration is in use, make sure you upgrade to Red Hat Single Sign-On version 7.2.5 or later. Legacy client applications registered under "Live SDK applications" won't work with the Microsoft Graph endpoints due to changes in the id format of the applications. If you run into an error saying that the application identifier was not found in the directory, you will have to register the client application again in the Microsoft Application Registration portal to obtain a new application id. 2.3.6. Google Identity Provider updated to use Google Sign-in authentication system The Google Identity Provider implementation in Red Hat Single Sign-On up to version 7.2.5 relies on the Google+ API endpoints for authorization and obtaining the user profile. From March 2019 onwards, Google is removing support for the Google+ API in favor of the new Google Sign-in authentication system. The Red Hat Single Sign-On identity provider has been updated to use the new endpoints, so if this integration is in use, make sure you upgrade to Red Hat Single Sign-On version 7.2.6 or later. If you run into an error saying that the application identifier was not found in the directory, you will have to register the client application again in the Google API Console portal to obtain a new application id and secret. It is possible that you will need to adjust custom mappers for non-standard claims that were provided by the Google+ user information endpoint and are provided under a different name by the Google Sign-in API. Please consult the Google documentation for the most up-to-date information on available claims. 2.3.7. LinkedIn Social Broker Updated to Version 2 of LinkedIn APIs According to LinkedIn, all developers need to migrate to version 2.0 of their APIs and OAuth 2.0. 
As such, we have updated our LinkedIn Social Broker so if this integration is in use make sure you upgrade to Red Hat Single Sign-On version 7.2.6 or later. Existing deployments using this broker may start experiencing errors when fetching user's profile using version 2 of LinkedIn APIs. This error may be related with the lack of permissions granted to the client application used to configure the broker which may not be authorized to access the Profile API or request specific OAuth2 scopes during the authentication process. Even for newly created LinkedIn client applications, you need to make sure that the client is able to request the r_liteprofile and r_emailaddress OAuth2 scopes, at least, as well that the client application can fetch current member's profile from the https://api.linkedin.com/v2/me endpoint. Due to these privacy restrictions imposed by LinkedIn in regards to access to member's information and the limited set of claims returned by the current member's Profile API, the LinkedIn Social Broker is now using the member's email address as the default username. That means that the r_emailaddress is always set when sending authorization requests during the authentication. 2.4. RH-SSO 7.1 The following changes have occurred from RH-SSO 7.0 to RH-SSO 7.1. 2.4.1. Realm Keys For RH-SSO 7.0, only one set of keys could be associated with a realm. This meant that when changing the keys, all current cookies and tokens would be invalidated and all users would have to re-authenticate. For RH-SSO 7.1, support for multiple keys for one realm has been added. At any given time, one set of keys is the active set used for creating signatures, but there can be multiple keys used to verify signatures. This means that old cookies and tokens can be verified, then refreshed with the new signatures, allowing users to remain authenticated when keys are changed. There are also some changes to how keys are managed through the Admin Console and Admin REST API; for more details see Realm Keys in the Server Administration Guide. To allow seamless key rotation you must remove hard-coded keys from client adapters. The client adapters will automatically retrieve keys from the server as long as the realm key is not specified. Client adapters will also retrieve new keys automatically when keys are rotated. 2.4.2. Client Redirect URI Matching For RH-SSO 7.0, query parameters are ignored when matching valid redirect URIs for a client. For RH-SSO 7.1, query parameters are no longer ignored. If you need to include query parameters in the redirect URI you must specify the query parameters in the valid redirect URI for the client (for example, https://hostname/app/login?foo=bar) or use a wildcard (for example, https://hostname/app/login/*). Fragments are also no longer permitted in Valid Redirect URIs (that is, https://hostname/app#fragment). 2.4.3. Automatically Redirect to Identity Provider For RH-SSO 7.1, identity providers cannot be set as the default authentication provider. To automatically redirect to an identity provider for RH-SSO 7.1, you must now configure the identity provider redirector. For more information see Default Identity Provider in the Server Administration Guide . If you previously had an identity provider with the default authentication provider option set, this value is automatically used as the value for the identity provider redirector when the server is upgraded to RH-SSO 7.1. 2.4.4. 
Admin REST API For RH-SSO 7.0, paginated endpoints in the Admin REST API return all results if the maxResults query parameter was not specified. This could cause issues with a temporary high load and requests timing out when a large number of results were returned (for example, users). For RH-SSO 7.1, a maximum of 100 results are returned if a value for maxResults is not specified. You can return all results by specifying maxResults as -1. 2.4.5. Server Configuration For RH-SSO 7.0, server configuration is split between the keycloak-server.json file and the standalone/domain.xml or domain.xml file. For RH-SSO 7.1, the keycloak-server.json file has been removed and all server configuration is done through the standalone.xml or domain.xml file. The upgrading procedure for RH-SSO 7.1 automatically migrates the server configuration from the keycloak-server.json file to the standalone.xml or domain.xml file. 2.4.6. Key Encryption Algorithm in SAML Assertions For RH-SSO 7.1, keys in SAML assertions and documents are now encrypted using the RSA-OAEP encryption scheme. To use encrypted assertions, ensure your service providers support this encryption scheme. In the event that you have service providers that do not support RSA-OAEP, RH-SSO can be configured to use the legacy RSA-v1.5 encryption scheme by starting the server with the system property "keycloak.saml.key_trans.rsa_v1.5" set to true. If you do this, you should upgrade your service providers as soon as possible to be able to revert to the more secure RSA-OAEP encryption scheme.
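As a hedged illustration of the Client Credentials Grant behavior described in Section 2.1.13.2 above, the following sketch authenticates at every request instead of relying on refresh tokens, and then revokes the access token when it is no longer needed. The host sso.example.com , realm myrealm , and client my-service credentials are placeholders, the sketch assumes the jq utility is available, and the exact revocation endpoint path should be confirmed against the realm's .well-known/openid-configuration document.

# Obtain a fresh access token with the Client Credentials Grant (no refresh token is issued by default).
ACCESS_TOKEN=$(curl -s -d "grant_type=client_credentials" \
    -d "client_id=my-service" -d "client_secret=<secret>" \
    "https://sso.example.com/auth/realms/myrealm/protocol/openid-connect/token" | jq -r .access_token)

# Optionally revoke the token through the OAuth2 Revocation Endpoint when it is no longer needed.
curl -s -d "token=$ACCESS_TOKEN" -d "client_id=my-service" -d "client_secret=<secret>" \
    "https://sso.example.com/auth/realms/myrealm/protocol/openid-connect/revoke"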
[ "SELECT REALM_ID, NAME, COUNT(*) FROM KEYCLOAK_GROUP WHERE PARENT_GROUP is NULL GROUP BY REALM_ID, NAME HAVING COUNT(*) > 1;", "One of the main changes introduced by this release is that you are no longer required to exchange access tokens with RPTs in order to access resources protected by a resource server (when not using UMA). Depending on how the policy enforcer is configured on the resource server side, you can just send regular access tokens as a bearer token and permissions will still be enforced.", "<module name=\"javax.xml.bind.api\"/>" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/upgrading_guide/release_changes
4.4. Smart Cards
4.4. Smart Cards Authentication based on smart cards is an alternative to password-based authentication. User credentials are stored on the smart card, and special software and hardware is then used to access them. In order to authenticate using a smart card, the user must place the smart card into a smart card reader and then supply the PIN code for the smart card. Important The following sections describe how to configure a single system for smart card authentication with local users by using the pam_pkcs11 and pam_krb5 packages. Note that these packages are now deprecated, as described in Deprecated Functionality in the 7.4 Release Notes . To configure smart card authentication centrally, use the enhanced smart card functionality provided by the System Security Services Daemon (SSSD). For details, see Smart-card Authentication in Identity Management in the Linux Domain Identity, Authentication, and Policy Guide . 4.4.1. Configuring Smart Cards Using authconfig Once the Enable smart card support option is selected, additional controls for configuring behavior of smart cards appear. Figure 4.3. Smart Card Options Note that smart card login for Red Hat Enterprise Linux servers and workstations is not enabled by default and must be enabled in the system settings. Note Using single sign-on when logging into Red Hat Enterprise Linux requires these packages: nss-tools nss-pam-ldapd esc pam_pkcs11 pam_krb5 opensc pcsc-lite-ccid gdm authconfig authconfig-gtk krb5-libs krb5-workstation krb5-pkinit pcsc-lite pcsc-lite-libs 4.4.1.1. Enabling Smart Card Authentication from the UI Log into the system as root. Download the root CA certificates for the network in base 64 format, and install them on the server. The certificates are installed in the appropriate system database using the certutil command. For example: Note Do not be concerned that the imported certificate is not displayed in the authconfig UI later during the process. You cannot see the certificate in the UI; it is obtained from the /etc/pki/nssdb/ directory during authentication. In the top menu, select the Application menu, select Sundry , and then click Authentication . Open the Advanced Options tab. Click the Enable Smart Card Support check box. There are two behaviors that can be configured for smart cards: The Card removal action menu sets the response that the system takes if the smart card is removed during an active session. The Ignore option means that the system continues functioning as normal if the smart card is removed, while Lock immediately locks the screen. The Require smart card for login check box sets whether a smart card is required for logins. When this option is selected, all other methods of authentication are blocked. Warning Do not select this until after you have successfully logged in using a smart card. By default, the mechanisms to check whether a certificate has been revoked (Online Certificate Status Protocol, or OCSP, responses) are disabled. To validate whether a certificate has been revoked before its expiration period, enable OCSP checking by adding the ocsp_on option to the cert_policy directive. Open the pam_pkcs11.conf file. Change every cert_policy line so that it contains the ocsp_on option. Note Because of the way the file is parsed, there must be a space between cert_policy and the equals sign. Otherwise, parsing the parameter fails. If the smart card has not yet been enrolled (set up with personal certificates and keys), enroll the smart card. 
If the smart card is a CAC card, create the .k5login file in the CAC user's home directory. The .k5login file is required to have the Microsoft Principal Name on the CAC card. Add the following line to the /etc/pam.d/smartcard-auth and /etc/pam.d/system-auth files: If the OpenSC module does not work as expected, use the module from the coolkey package: /usr/lib64/pkcs11/libcoolkeypk11.so . In this case, consider contacting Red Hat Technical Support or filing a Bugzilla report about the problem. Configure the /etc/krb5.conf file. The settings vary depending on whether you are using a CAC card or a Gemalto 64K card. With CAC cards, specify all the root certificates related to the CAC card usage in pkinit_anchors . In the following example /etc/krb5.conf file for configuring a CAC card, EXAMPLE.COM is the realm name for the CAC cards, and kdc.server.hostname.com is the KDC server host name. In the following example /etc/krb5.conf file for configuring a Gemalto 64K card, EXAMPLE.COM is the realm created on the KDC server, kdc-ca.pem is the CA certificate, and kdc.server.hostname.com is the KDC server host name. Note When a smart card is inserted, the pklogin_finder utility, when run in debug mode, first maps the login ID to the certificates on the card and then attempts to output information about the validity of certificates: The command is useful for diagnosing problems with using a smart card to log into the system. 4.4.1.2. Configuring Smart Card Authentication from the Command Line All that is required to use smart cards with a system is to set the --enablesmartcard option: There are other configuration options for smart cards, such as changing the default smart card module, setting the behavior of the system when the smart card is removed, and requiring smart cards for login. A value of 0 instructs the system to lock out a user immediately if the smart card is removed; a setting of 1 ignores it if the smart card is removed: Once smart card authentication has been successfully configured and tested, then the system can be configured to require smart card authentication for users rather than simple password-based authentication. Warning Do not use the --enablerequiresmartcard option until you have successfully authenticated to the system using a smart card. Otherwise, users may be unable to log into the system. 4.4.2. Smart Card Authentication in Identity Management Red Hat Identity Management supports smart card authentication for IdM users. For more information, see the Smart-card Authentication in Identity Management section in the Linux Domain Identity, Authentication, and Policy Guide . If you want to start using smart card authentication, see the hardware requirements: Smart-card support in RHEL 7.4+ .
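As a hedged verification sketch for the configuration above: after importing the CA certificate and enabling smart card support, you can confirm that the certificate is present in the system NSS database and review the resulting authconfig settings without changing them. The grep pattern is only a convenience and may need adjusting to the exact output on your system.

# List the certificates in the system NSS database; the imported "root CA cert" nickname should appear.
certutil -L -d /etc/pki/nssdb

# Show the current authentication configuration without modifying it.
authconfig --test | grep -i smartcard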
[ "certutil -A -d /etc/pki/nssdb -n \"root CA cert\" -t \"CT,C,C\" -i /tmp/ca_cert.crt", "vim /etc/pam_pkcs11/pam_pkcs11.conf", "cert_policy = ca, ocsp_on, signature;", "auth optional pam_krb5.so use_first_pass no_subsequent_prompt preauth_options=X509_user_identity=PKCS11:/usr/lib64/pkcs11/opensc-pkcs11.so", "[logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 1h renew_lifetime = 6h forwardable = true default_realm = EXAMPLE.COM [realms] EXAMPLE.COM = { kdc = kdc.server.hostname.com admin_server = kdc.server.hostname.com pkinit_anchors = FILE:/etc/pki/nssdb/ca_cert.pem pkinit_anchors = FILE:/etc/pki/nssdb/CAC_CA_cert.pem pkinit_anchors = FILE:/etc/pki/nssdb/CAC_CA_email_cert.pem pkinit_anchors = FILE:/etc/pki/nssdb/CAC_root_ca_cert.pem pkinit_cert_match = CAC card specific information } [domain_realm] EXAMPLE.COM = EXAMPLE.COM .EXAMPLE.COM = EXAMPLE.COM .kdc.server.hostname.com = EXAMPLE.COM kdc.server.hostname.com = EXAMPLE.COM [appdefaults] pam = { debug = true ticket_lifetime = 1h renew_lifetime = 3h forwardable = true krb4_convert = false mappings = username on the CAC card Principal name on the card }", "[logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 15m renew_lifetime = 6h forwardable = true default_realm = EXAMPLE.COM [realms] EXAMPLE.COM = { kdc = kdc.server.hostname.com admin_server = kdc.server.hostname.com pkinit_anchors = FILE:/etc/pki/nssdb/kdc-ca.pem pkinit_cert_match = <KU>digitalSignature pkinit_kdc_hostname = kdc.server.hostname.com } [domain_realm] EXAMPLE.COM = EXAMPLE.COM .EXAMPLE.COM = EXAMPLE.COM .kdc.server.hostname.com = EXAMPLE.COM kdc.server.hostname.com = EXAMPLE.COM [appdefaults] pam = { debug = true ticket_lifetime = 1h renew_lifetime = 3h forwardable = true krb4_convert = false }", "pklogin_finder debug", "authconfig --enablesmartcard --update", "authconfig --enablesmartcard --smartcardaction=0 --update", "authconfig --enablerequiresmartcard --update" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/smartcards
Chapter 3. Service Mesh 1.x
Chapter 3. Service Mesh 1.x 3.1. Service Mesh Release Notes Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . 3.1.1. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 3.1.2. Introduction to Red Hat OpenShift Service Mesh Red Hat OpenShift Service Mesh addresses a variety of problems in a microservice architecture by creating a centralized point of control in an application. It adds a transparent layer on existing distributed applications without requiring any changes to the application code. Microservice architectures split the work of enterprise applications into modular services, which can make scaling and maintenance easier. However, as an enterprise application built on a microservice architecture grows in size and complexity, it becomes difficult to understand and manage. Service Mesh can address those architecture problems by capturing or intercepting traffic between services and can modify, redirect, or create new requests to other services. Service Mesh, which is based on the open source Istio project , provides an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, access control, and end-to-end authentication. Note Red Hat OpenShift Service Mesh 3 is generally available. For more information, see Red Hat OpenShift Service Mesh 3.0 . 3.1.3. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including virtual machines and other data related to Red Hat OpenShift Service Mesh. For prompt support, supply diagnostic information for both OpenShift Container Platform and Red Hat OpenShift Service Mesh. 3.1.3.1. 
About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example: USD oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local in the current working directory. For example: NAMESPACE NAME READY STATUS RESTARTS AGE ... openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s ... Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-namespace option. For example: USD oc adm must-gather --run-namespace <namespace> \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0 3.1.3.2. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. 3.1.3.3. About collecting service mesh data You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with Red Hat OpenShift Service Mesh. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. Procedure To collect Red Hat OpenShift Service Mesh data with must-gather , you must specify the Red Hat OpenShift Service Mesh image. USD oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 To collect Red Hat OpenShift Service Mesh data for a specific Service Mesh control plane namespace with must-gather , you must specify the Red Hat OpenShift Service Mesh image and namespace. In this example, after gather, replace <namespace> with your Service Mesh control plane namespace, such as istio-system . USD oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 gather <namespace> This creates a local directory that contains the following items: The Istio Operator namespace and its child objects All control plane namespaces and their children objects All namespaces and their children objects that belong to any service mesh All Istio custom resource definitions (CRD) All Istio CRD objects, such as VirtualServices, in a given namespace All Istio webhooks 3.1.4. Red Hat OpenShift Service Mesh supported configurations The following are the only supported configurations for the Red Hat OpenShift Service Mesh: OpenShift Container Platform version 4.6 or later. Note OpenShift Online and Red Hat OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh. 
The deployment must be contained within a single OpenShift Container Platform cluster that is not federated. This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64. This release only supports configurations where all Service Mesh components are contained in the OpenShift Container Platform cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario. This release only supports configurations that do not integrate external services such as virtual machines. For additional information about Red Hat OpenShift Service Mesh lifecycle and supported configurations, refer to the Support Policy . 3.1.4.1. Supported configurations for Kiali on Red Hat OpenShift Service Mesh The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers. 3.1.4.2. Supported Mixer adapters This release only supports the following Mixer adapter: 3scale Istio Adapter 3.1.5. New Features Red Hat OpenShift Service Mesh provides a number of key capabilities uniformly across a network of services: Traffic Management - Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions. Service Identity and Security - Provide services in the mesh with a verifiable identity and provide the ability to protect service traffic as it flows over networks of varying degrees of trustworthiness. Policy Enforcement - Apply organizational policy to the interaction between services, ensure access policies are enforced and resources are fairly distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code. Telemetry - Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues. 3.1.5.1. New features Red Hat OpenShift Service Mesh 1.1.18.2 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 3.1.5.1.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18.2 Component Version Istio 1.4.10 Jaeger 1.30.2 Kiali 1.12.21.1 3scale Istio Adapter 1.0.0 3.1.5.2. New features Red Hat OpenShift Service Mesh 1.1.18.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 3.1.5.2.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18.1 Component Version Istio 1.4.10 Jaeger 1.30.2 Kiali 1.12.20.1 3scale Istio Adapter 1.0.0 3.1.5.3. New features Red Hat OpenShift Service Mesh 1.1.18 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 3.1.5.3.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18 Component Version Istio 1.4.10 Jaeger 1.24.0 Kiali 1.12.18 3scale Istio Adapter 1.0.0 3.1.5.4. New features Red Hat OpenShift Service Mesh 1.1.17.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 3.1.5.4.1. Change in how Red Hat OpenShift Service Mesh handles URI fragments Red Hat OpenShift Service Mesh contains a remotely exploitable vulnerability, CVE-2021-39156 , where an HTTP request with a fragment (a section in the end of a URI that begins with a # character) in the URI path could bypass the Istio URI path-based authorization policies. 
For instance, an Istio authorization policy denies requests sent to the URI path /user/profile . In the vulnerable versions, a request with URI path /user/profile#section1 bypasses the deny policy and routes to the backend (with the normalized URI path /user/profile%23section1 ), possibly leading to a security incident. You are impacted by this vulnerability if you use authorization policies with DENY actions and operation.paths , or ALLOW actions and operation.notPaths . With the mitigation, the fragment part of the request's URI is removed before the authorization and routing. This prevents a request with a fragment in its URI from bypassing authorization policies which are based on the URI without the fragment part. 3.1.5.4.2. Required update for authorization policies Istio generates hostnames for both the hostname itself and all matching ports. For instance, a virtual service or Gateway for a host of "httpbin.foo" generates a config matching "httpbin.foo and httpbin.foo:*". However, exact match authorization policies only match the exact string given for the hosts or notHosts fields. Your cluster is impacted if you have AuthorizationPolicy resources using exact string comparison for the rule to determine hosts or notHosts . You must update your authorization policy rules to use prefix match instead of exact match. For example, replacing hosts: ["httpbin.com"] with hosts: ["httpbin.com:*"] in the first AuthorizationPolicy example. First example AuthorizationPolicy using prefix match apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: ["dev"] to: - operation: hosts: ["httpbin.com","httpbin.com:*"] Second example AuthorizationPolicy using prefix match apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: ["httpbin.example.com:*"] 3.1.5.5. New features Red Hat OpenShift Service Mesh 1.1.17 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.6. New features Red Hat OpenShift Service Mesh 1.1.16 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.7. New features Red Hat OpenShift Service Mesh 1.1.15 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.8. New features Red Hat OpenShift Service Mesh 1.1.14 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. Important There are manual steps that must be completed to address CVE-2021-29492 and CVE-2021-31920. 3.1.5.8.1. Manual updates required by CVE-2021-29492 and CVE-2021-31920 Istio contains a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters ( %2F or %5C ) could potentially bypass an Istio authorization policy when path-based authorization rules are used. For example, assume an Istio cluster administrator defines an authorization DENY policy to reject the request at path /admin . A request sent to the URL path //admin will NOT be rejected by the authorization policy. According to RFC 3986 , the path //admin with multiple slashes should technically be treated as a different path from the /admin . 
However, some backend services choose to normalize the URL paths by merging multiple slashes into a single slash. This can result in a bypass of the authorization policy ( //admin does not match /admin ), and a user can access the resource at path /admin in the backend; this would represent a security incident. Your cluster is impacted by this vulnerability if you have authorization policies using ALLOW action + notPaths field or DENY action + paths field patterns. These patterns are vulnerable to unexpected policy bypasses. Your cluster is NOT impacted by this vulnerability if: You don't have authorization policies. Your authorization policies don't define paths or notPaths fields. Your authorization policies use ALLOW action + paths field or DENY action + notPaths field patterns. These patterns could only cause unexpected rejection instead of policy bypasses. The upgrade is optional for these cases. Note The Red Hat OpenShift Service Mesh configuration location for path normalization is different from the Istio configuration. 3.1.5.8.2. Updating the path normalization configuration Istio authorization policies can be based on the URL paths in the HTTP request. Path normalization , also known as URI normalization, modifies and standardizes the incoming requests' paths so that the normalized paths can be processed in a standard way. Syntactically different paths may be equivalent after path normalization. Istio supports the following normalization schemes on the request paths before evaluating against the authorization policies and routing the requests: Table 3.1. Normalization schemes Option Description Example Notes NONE No normalization is done. Anything received by Envoy will be forwarded exactly as-is to any backend service. ../%2Fa../b is evaluated by the authorization policies and sent to your service. This setting is vulnerable to CVE-2021-31920. BASE This is currently the option used in the default installation of Istio. This applies the normalize_path option on Envoy proxies, which follows RFC 3986 with extra normalization to convert backslashes to forward slashes. /a/../b is normalized to /b . \da is normalized to /da . This setting is vulnerable to CVE-2021-31920. MERGE_SLASHES Slashes are merged after the BASE normalization. /a//b is normalized to /a/b . Update to this setting to mitigate CVE-2021-31920. DECODE_AND_MERGE_SLASHES The strictest setting when you allow all traffic by default. This setting is recommended, with the caveat that you must thoroughly test your authorization policies routes. Percent-encoded slash and backslash characters ( %2F , %2f , %5C and %5c ) are decoded to / or \ , before the MERGE_SLASHES normalization. /a%2fb is normalized to /a/b . Update to this setting to mitigate CVE-2021-31920. This setting is more secure, but also has the potential to break applications. Test your applications before deploying to production. The normalization algorithms are conducted in the following order: Percent-decode %2F , %2f , %5C and %5c . The RFC 3986 and other normalization implemented by the normalize_path option in Envoy. Merge slashes. Warning While these normalization options represent recommendations from HTTP standards and common industry practices, applications may interpret a URL in any way it chooses to. When using denial policies, ensure that you understand how your application behaves. 3.1.5.8.3. 
Path normalization configuration examples Ensuring Envoy normalizes request paths to match your backend services' expectations is critical to the security of your system. The following examples can be used as a reference for you to configure your system. The normalized URL paths, or the original URL paths if NONE is selected, will be: Used to check against the authorization policies. Forwarded to the backend application. Table 3.2. Configuration examples If your application... Choose... Relies on the proxy to do normalization BASE , MERGE_SLASHES or DECODE_AND_MERGE_SLASHES Normalizes request paths based on RFC 3986 and does not merge slashes. BASE Normalizes request paths based on RFC 3986 and merges slashes, but does not decode percent-encoded slashes. MERGE_SLASHES Normalizes request paths based on RFC 3986 , decodes percent-encoded slashes, and merges slashes. DECODE_AND_MERGE_SLASHES Processes request paths in a way that is incompatible with RFC 3986 . NONE 3.1.5.8.4. Configuring your SMCP for path normalization To configure path normalization for Red Hat OpenShift Service Mesh, specify the following in your ServiceMeshControlPlane . Use the configuration examples to help determine the settings for your system. SMCP v1 pathNormalization spec: global: pathNormalization: <option> 3.1.5.9. New features Red Hat OpenShift Service Mesh 1.1.13 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.10. New features Red Hat OpenShift Service Mesh 1.1.12 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.11. New features Red Hat OpenShift Service Mesh 1.1.11 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.12. New features Red Hat OpenShift Service Mesh 1.1.10 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.13. New features Red Hat OpenShift Service Mesh 1.1.9 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.14. New features Red Hat OpenShift Service Mesh 1.1.8 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.15. New features Red Hat OpenShift Service Mesh 1.1.7 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.16. New features Red Hat OpenShift Service Mesh 1.1.6 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.17. New features Red Hat OpenShift Service Mesh 1.1.5 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. This release also added support for configuring cipher suites. 3.1.5.18. New features Red Hat OpenShift Service Mesh 1.1.4 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. Note There are manual steps that must be completed to address CVE-2020-8663. 3.1.5.18.1. Manual updates required by CVE-2020-8663 The fix for CVE-2020-8663 : envoy: Resource exhaustion when accepting too many connections added a configurable limit on downstream connections. The configuration option for this limit must be configured to mitigate this vulnerability. 
Important These manual steps are required to mitigate this CVE whether you are using the 1.1 version or the 1.0 version of Red Hat OpenShift Service Mesh. This new configuration option is called overload.global_downstream_max_connections , and it is configurable as a proxy runtime setting. Perform the following steps to configure limits at the Ingress Gateway. Procedure Create a file named bootstrap-override.json with the following text to force the proxy to override the bootstrap template and load runtime configuration from disk: Create a secret from the bootstrap-override.json file, replacing <SMCPnamespace> with the namespace where you created the service mesh control plane (SMCP): USD oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json Update the SMCP configuration to activate the override. Updated SMCP configuration example #1 apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap To set the new configuration option, create a secret that has the desired value for the overload.global_downstream_max_connections setting. The following example uses a value of 10000 : USD oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000 Update the SMCP again to mount the secret in the location where Envoy is looking for runtime configuration: Updated SMCP configuration example #2 apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: template: default #Change the version to "v1.0" if you are on the 1.0 stream. version: v1.1 istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap # below is the new secret mount - mountPath: /var/lib/istio/envoy/runtime name: gateway-settings secretName: gateway-settings 3.1.5.18.2. Upgrading from Elasticsearch 5 to Elasticsearch 6 When updating from Elasticsearch 5 to Elasticsearch 6, you must delete your Jaeger instance, then recreate the Jaeger instance because of an issue with certificates. Re-creating the Jaeger instance triggers creating a new set of certificates. If you are using persistent storage the same volumes can be mounted for the new Jaeger instance as long as the Jaeger name and namespace for the new Jaeger instance are the same as the deleted Jaeger instance. Procedure if Jaeger is installed as part of Red Hat Service Mesh Determine the name of your Jaeger custom resource file: USD oc get jaeger -n istio-system You should see something like the following: NAME AGE jaeger 3d21h Copy the generated custom resource file into a temporary directory: USD oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml Delete the Jaeger instance: USD oc delete jaeger jaeger -n istio-system Recreate the Jaeger instance from your copy of the custom resource file: USD oc create -f /tmp/jaeger-cr.yaml -n istio-system Delete the copy of the generated custom resource file: USD rm /tmp/jaeger-cr.yaml Procedure if Jaeger not installed as part of Red Hat Service Mesh Before you begin, create a copy of your Jaeger custom resource file. 
Delete the Jaeger instance by deleting the custom resource file: USD oc delete -f <jaeger-cr-file> For example: USD oc delete -f jaeger-prod-elasticsearch.yaml Recreate your Jaeger instance from the backup copy of your custom resource file: USD oc create -f <jaeger-cr-file> Validate that your Pods have restarted: USD oc get pods -n jaeger-system -w 3.1.5.19. New features Red Hat OpenShift Service Mesh 1.1.3 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.20. New features Red Hat OpenShift Service Mesh 1.1.2 This release of Red Hat OpenShift Service Mesh addresses a security vulnerability. 3.1.5.21. New features Red Hat OpenShift Service Mesh 1.1.1 This release of Red Hat OpenShift Service Mesh adds support for a disconnected installation. 3.1.5.22. New features Red Hat OpenShift Service Mesh 1.1.0 This release of Red Hat OpenShift Service Mesh adds support for Istio 1.4.6 and Jaeger 1.17.1. 3.1.5.22.1. Manual updates from 1.0 to 1.1 If you are updating from Red Hat OpenShift Service Mesh 1.0 to 1.1, you must update the ServiceMeshControlPlane resource to update the control plane components to the new version. In the web console, click the Red Hat OpenShift Service Mesh Operator. Click the Project menu and choose the project where your ServiceMeshControlPlane is deployed from the list, for example istio-system . Click the name of your control plane, for example basic-install . Click YAML and add a version field to the spec: of your ServiceMeshControlPlane resource. For example, to update to Red Hat OpenShift Service Mesh 1.1.0, add version: v1.1 . The version field specifies the version of Service Mesh to install and defaults to the latest available version. Note Note that support for Red Hat OpenShift Service Mesh v1.0 ended in October, 2020. You must upgrade to either v1.1 or v2.0. 3.1.6. Deprecated features Some features available in releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. 3.1.6.1. Deprecated features Red Hat OpenShift Service Mesh 1.1.5 The following custom resources were deprecated in release 1.1.5 and were removed in release 1.1.12 Policy - The Policy resource is deprecated and will be replaced by the PeerAuthentication resource in a future release. MeshPolicy - The MeshPolicy resource is deprecated and will be replaced by the PeerAuthentication resource in a future release. v1alpha1 RBAC API -The v1alpha1 RBAC policy is deprecated by the v1beta1 AuthorizationPolicy . RBAC (Role Based Access Control) defines ServiceRole and ServiceRoleBinding objects. ServiceRole ServiceRoleBinding RbacConfig - RbacConfig implements the Custom Resource Definition for controlling Istio RBAC behavior. ClusterRbacConfig (versions prior to Red Hat OpenShift Service Mesh 1.0) ServiceMeshRbacConfig (Red Hat OpenShift Service Mesh version 1.0 and later) In Kiali, the login and LDAP strategies are deprecated. A future version will introduce authentication using OpenID providers. The following components are also deprecated in this release and will be replaced by the Istiod component in a future release. Mixer - access control and usage policies Pilot - service discovery and proxy configuration Citadel - certificate generation Galley - configuration validation and distribution 3.1.7. 
Known issues These limitations exist in Red Hat OpenShift Service Mesh: Red Hat OpenShift Service Mesh does not support IPv6, as it is not supported by the upstream Istio project, nor fully supported by OpenShift Container Platform. Graph layout - The layout for the Kiali graph can render differently, depending on your application architecture and the data to display (number of graph nodes and their interactions). Because it is difficult, if not impossible, to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts. To choose a different layout, you can choose a different Layout Schema from the Graph Settings menu. The first time you access related services, such as Jaeger and Grafana, from the Kiali console, you must accept the certificate and re-authenticate using your OpenShift Container Platform login credentials. This happens due to an issue with how the framework displays embedded pages in the console. 3.1.7.1. Service Mesh known issues These are the known issues in Red Hat OpenShift Service Mesh: Jaeger/Kiali Operator upgrade blocked with operator pending When upgrading the Jaeger or Kiali Operators with Service Mesh 1.0.x installed, the operator status shows as Pending. Workaround: See the linked Knowledge Base article for more information. Istio-14743 Due to limitations in the version of Istio that this release of Red Hat OpenShift Service Mesh is based on, there are several applications that are currently incompatible with Service Mesh. See the linked community issue for details. MAISTRA-858 The following Envoy log messages describing deprecated options and configurations associated with Istio 1.1.x are expected: [2019-06-03 07:03:28.943][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.listener.Filter.config'. This configuration will be removed from Envoy soon. [2019-08-12 22:12:59.001][13][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Listener.use_original_dst' from file lds.proto. This configuration will be removed from Envoy soon. MAISTRA-806 Evicted Istio Operator Pod causes mesh and CNI not to deploy. Workaround: If the istio-operator pod is evicted while deploying the control plane, delete the evicted istio-operator pod. MAISTRA-681 When the control plane has many namespaces, it can lead to performance issues. MAISTRA-465 The Maistra Operator fails to create a service for operator metrics. MAISTRA-453 If you create a new project and deploy pods immediately, sidecar injection does not occur. The operator fails to add the maistra.io/member-of label before the pods are created, therefore the pods must be deleted and recreated for sidecar injection to occur. MAISTRA-158 Applying multiple gateways referencing the same hostname will cause all gateways to stop functioning. 3.1.7.2. Kiali known issues Note New issues for Kiali should be created in the OpenShift Service Mesh project with the Component set to Kiali. These are the known issues in Kiali: KIALI-2206 When you are accessing the Kiali console for the first time, and there is no cached browser data for Kiali, the "View in Grafana" link on the Metrics tab of the Kiali Service Details page redirects to the wrong location. The only way you would encounter this issue is if you are accessing Kiali for the first time. KIALI-507 Kiali does not support Internet Explorer 11. This is because the underlying frameworks do not support Internet Explorer.
To access the Kiali console, use one of the two most recent versions of the Chrome, Edge, Firefox, or Safari browser. 3.1.8. Fixed issues The following issues have been resolved in the current release: 3.1.8.1. Service Mesh fixed issues MAISTRA-2371 Handle tombstones in listerInformer. The updated cache codebase was not handling tombstones when translating the events from the namespace caches to the aggregated cache, leading to a panic in the go routine. OSSM-542 Galley is not using the new certificate after rotation. OSSM-99 Workloads generated from a direct pod without labels may crash Kiali. OSSM-93 IstioConfigList can't filter by two or more names. OSSM-92 Cancelling unsaved changes on the VS/DR YAML edit page does not cancel the changes. OSSM-90 Traces not available on the service details page. MAISTRA-1649 Headless services conflict when in different namespaces. When deploying headless services within different namespaces, the endpoint configuration is merged and results in invalid Envoy configurations being pushed to the sidecars. MAISTRA-1541 Panic in kubernetesenv when the controller is not set on owner reference. If a pod has an ownerReference which does not specify the controller, this will cause a panic within the kubernetesenv cache.go code. MAISTRA-1352 Cert-manager Custom Resource Definitions (CRD) from the control plane installation have been removed for this release and future releases. If you have already installed Red Hat OpenShift Service Mesh, the CRDs must be removed manually if cert-manager is not being used. MAISTRA-1001 Closing HTTP/2 connections could lead to segmentation faults in istio-proxy. MAISTRA-932 Added the requires metadata to add a dependency relationship between the Jaeger Operator and the OpenShift Elasticsearch Operator. Ensures that when the Jaeger Operator is installed, it automatically deploys the OpenShift Elasticsearch Operator if it is not available. MAISTRA-862 Galley dropped watches and stopped providing configuration to other components after many namespace deletions and re-creations. MAISTRA-833 Pilot stopped delivering configuration after many namespace deletions and re-creations. MAISTRA-684 The default Jaeger version in the istio-operator is 1.12.0, which does not match Jaeger version 1.13.1 that shipped in Red Hat OpenShift Service Mesh 0.12.TechPreview. MAISTRA-622 In Maistra 0.12.0/TP12, permissive mode does not work. The user has the option to use Plain text mode or Mutual TLS mode, but not permissive. MAISTRA-572 Jaeger cannot be used with Kiali. In this release, Jaeger is configured to use the OAuth proxy, but is also only configured to work through a browser and does not allow service access. Kiali cannot properly communicate with the Jaeger endpoint and it considers Jaeger to be disabled. See also TRACING-591. MAISTRA-357 In OpenShift 4 Beta on AWS, it is not possible, by default, to access a TCP or HTTPS service through the ingress gateway on a port other than port 80. The AWS load balancer has a health check that verifies if port 80 on the service endpoint is active. Without a service running on port 80, the load balancer health check fails. MAISTRA-348 OpenShift 4 Beta on AWS does not support ingress gateway traffic on ports other than 80 or 443. If you configure your ingress gateway to handle TCP traffic with a port number other than 80 or 443, you have to use the service hostname provided by the AWS load balancer rather than the OpenShift router as a workaround.
MAISTRA-193 Unexpected console info messages are visible when health checking is enabled for citadel. Bug 1821432 Toggle controls in OpenShift Container Platform Control Resource details page do not update the CR correctly. UI Toggle controls in the Service Mesh Control Plane (SMCP) Overview page in the OpenShift Container Platform web console sometimes update the wrong field in the resource. To update a ServiceMeshControlPlane resource, edit the YAML content directly or update the resource from the command line instead of clicking the toggle controls. 3.1.8.2. Kiali fixed issues KIALI-3239 If a Kiali Operator pod has failed with a status of "Evicted" it blocks the Kiali operator from deploying. The workaround is to delete the Evicted pod and redeploy the Kiali operator. KIALI-3118 After changes to the ServiceMeshMemberRoll, for example adding or removing projects, the Kiali pod restarts and then displays errors on the Graph page while the Kiali pod is restarting. KIALI-3096 Runtime metrics fail in Service Mesh. There is an OAuth filter between the Service Mesh and Prometheus, requiring a bearer token to be passed to Prometheus before access is granted. Kiali has been updated to use this token when communicating to the Prometheus server, but the application metrics are currently failing with 403 errors. KIALI-3070 This bug only affects custom dashboards, not the default dashboards. When you select labels in metrics settings and refresh the page, your selections are retained in the menu but your selections are not displayed on the charts. KIALI-2686 When the control plane has many namespaces, it can lead to performance issues. 3.2. Understanding Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . Red Hat OpenShift Service Mesh provides a platform for behavioral insight and operational control over your networked microservices in a service mesh. With Red Hat OpenShift Service Mesh, you can connect, secure, and monitor microservices in your OpenShift Container Platform environment. 3.2.1. What is Red Hat OpenShift Service Mesh? A service mesh is the network of microservices that make up applications in a distributed microservice architecture and the interactions between those microservices. When a Service Mesh grows in size and complexity, it can become harder to understand and manage. Based on the open source Istio project, Red Hat OpenShift Service Mesh adds a transparent layer on existing distributed applications without requiring any changes to the service code. You add Red Hat OpenShift Service Mesh support to services by deploying a special sidecar proxy to relevant services in the mesh that intercepts all network communication between microservices. You configure and manage the Service Mesh using the Service Mesh control plane features. Red Hat OpenShift Service Mesh gives you an easy way to create a network of deployed services that provide: Discovery Load balancing Service-to-service authentication Failure recovery Metrics Monitoring Red Hat OpenShift Service Mesh also provides more complex operational functions including: A/B testing Canary releases Access control End-to-end authentication 3.2.2. 
Red Hat OpenShift Service Mesh Architecture Red Hat OpenShift Service Mesh is logically split into a data plane and a control plane: The data plane is a set of intelligent proxies deployed as sidecars. These proxies intercept and control all inbound and outbound network communication between microservices in the service mesh. Sidecar proxies also communicate with Mixer, the general-purpose policy and telemetry hub. Envoy proxy intercepts all inbound and outbound traffic for all services in the service mesh. Envoy is deployed as a sidecar to the relevant service in the same pod. The control plane manages and configures proxies to route traffic, and configures Mixers to enforce policies and collect telemetry. Mixer enforces access control and usage policies (such as authorization, rate limits, quotas, authentication, and request tracing) and collects telemetry data from the Envoy proxy and other services. Pilot configures the proxies at runtime. Pilot provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing (for example, A/B tests or canary deployments), and resiliency (timeouts, retries, and circuit breakers). Citadel issues and rotates certificates. Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management. You can use Citadel to upgrade unencrypted traffic in the service mesh. Operators can enforce policies based on service identity rather than on network controls using Citadel. Galley ingests the service mesh configuration, then validates, processes, and distributes the configuration. Galley protects the other service mesh components from obtaining user configuration details from OpenShift Container Platform. Red Hat OpenShift Service Mesh also uses the istio-operator to manage the installation of the control plane. An Operator is a piece of software that enables you to implement and automate common activities in your OpenShift Container Platform cluster. It acts as a controller, allowing you to set or change the desired state of objects in your cluster. 3.2.3. Understanding Kiali Kiali provides visibility into your service mesh by showing you the microservices in your service mesh, and how they are connected. 3.2.3.1. Kiali overview Kiali provides observability into the Service Mesh running on OpenShift Container Platform. Kiali helps you define, validate, and observe your Istio service mesh. It helps you to understand the structure of your service mesh by inferring the topology, and also provides information about the health of your service mesh. Kiali provides an interactive graph view of your namespace in real time that provides visibility into features like circuit breakers, request rates, latency, and even graphs of traffic flows. Kiali offers insights about components at different levels, from Applications to Services and Workloads, and can display the interactions with contextual information and charts on the selected graph node or edge. Kiali also provides the ability to validate your Istio configurations, such as gateways, destination rules, virtual services, mesh policies, and more. Kiali provides detailed metrics, and a basic Grafana integration is available for advanced queries. Distributed tracing is provided by integrating Jaeger into the Kiali console. Kiali is installed by default as part of the Red Hat OpenShift Service Mesh. 3.2.3.2. Kiali architecture Kiali is based on the open source Kiali project . 
Kiali is composed of two components: the Kiali application and the Kiali console. Kiali application (back end) - This component runs in the container application platform and communicates with the service mesh components, retrieves and processes data, and exposes this data to the console. The Kiali application does not need storage. When deploying the application to a cluster, configurations are set in ConfigMaps and secrets. Kiali console (front end) - The Kiali console is a web application. The Kiali application serves the Kiali console, which then queries the back end for data to present it to the user. In addition, Kiali depends on external services and components provided by the container application platform and Istio. Red Hat Service Mesh (Istio) - Istio is a Kiali requirement. Istio is the component that provides and controls the service mesh. Although Kiali and Istio can be installed separately, Kiali depends on Istio and will not work if it is not present. Kiali needs to retrieve Istio data and configurations, which are exposed through Prometheus and the cluster API. Prometheus - A dedicated Prometheus instance is included as part of the Red Hat OpenShift Service Mesh installation. When Istio telemetry is enabled, metrics data are stored in Prometheus. Kiali uses this Prometheus data to determine the mesh topology, display metrics, calculate health, show possible problems, and so on. Kiali communicates directly with Prometheus and assumes the data schema used by Istio Telemetry. Prometheus is an Istio dependency and a hard dependency for Kiali, and many of Kiali's features will not work without Prometheus. Cluster API - Kiali uses the API of the OpenShift Container Platform (cluster API) to fetch and resolve service mesh configurations. Kiali queries the cluster API to retrieve, for example, definitions for namespaces, services, deployments, pods, and other entities. Kiali also makes queries to resolve relationships between the different cluster entities. The cluster API is also queried to retrieve Istio configurations like virtual services, destination rules, route rules, gateways, quotas, and so on. Jaeger - Jaeger is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When you install the distributed tracing platform (Jaeger) as part of the default Red Hat OpenShift Service Mesh installation, the Kiali console includes a tab to display distributed tracing data. Note that tracing data will not be available if you disable Istio's distributed tracing feature. Also note that user must have access to the namespace where the Service Mesh control plane is installed to view tracing data. Grafana - Grafana is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When available, the metrics pages of Kiali display links to direct the user to the same metric in Grafana. Note that user must have access to the namespace where the Service Mesh control plane is installed to view links to the Grafana dashboard and view Grafana data. 3.2.3.3. Kiali features The Kiali console is integrated with Red Hat Service Mesh and provides the following capabilities: Health - Quickly identify issues with applications, services, or workloads. Topology - Visualize how your applications, services, or workloads communicate via the Kiali graph. Metrics - Predefined metrics dashboards let you chart service mesh and application performance for Go, Node.js. Quarkus, Spring Boot, Thorntail and Vert.x. 
You can also create your own custom dashboards. Tracing - Integration with Jaeger lets you follow the path of a request through various microservices that make up an application. Validations - Perform advanced validations on the most common Istio objects (Destination Rules, Service Entries, Virtual Services, and so on). Configuration - Optional ability to create, update and delete Istio routing configuration using wizards or directly in the YAML editor in the Kiali Console. 3.2.4. Understanding Jaeger Every time a user takes an action in an application, a request is executed by the architecture that may require dozens of different services to participate to produce a response. The path of this request is a distributed transaction. Jaeger lets you perform distributed tracing, which follows the path of a request through various microservices that make up an application. Distributed tracing is a technique that is used to tie the information about different units of work together-usually executed in different processes or hosts-to understand a whole chain of events in a distributed transaction. Distributed tracing lets developers visualize call flows in large service oriented architectures. It can be invaluable in understanding serialization, parallelism, and sources of latency. Jaeger records the execution of individual requests across the whole stack of microservices, and presents them as traces. A trace is a data/execution path through the system. An end-to-end trace is comprised of one or more spans. A span represents a logical unit of work in Jaeger that has an operation name, the start time of the operation, and the duration. Spans may be nested and ordered to model causal relationships. 3.2.4.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis 3.2.4.2. Distributed tracing architecture The distributed tracing platform (Jaeger) is based on the open source Jaeger project . The distributed tracing platform (Jaeger) is made up of several components that work together to collect, store, and display tracing data. Jaeger Client (Tracer, Reporter, instrumented application, client libraries)- Jaeger clients are language specific implementations of the OpenTracing API. They can be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing. Jaeger Agent (Server Queue, Processor Workers) - The Jaeger agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments like Kubernetes. Jaeger Collector (Queue, Workers) - Similar to the Agent, the Collector is able to receive spans and place them in an internal queue for processing. 
This allows the collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage. Storage (Data Store) - Collectors require a persistent storage backend. Jaeger has a pluggable mechanism for span storage. Note that for this release, the only supported storage is Elasticsearch. Query (Query Service) - Query is a service that retrieves traces from storage. Ingester (Ingester Service) - Jaeger can use Apache Kafka as a buffer between the collector and the actual backing storage (Elasticsearch). Ingester is a service that reads data from Kafka and writes to another storage backend (Elasticsearch). Jaeger Console - Jaeger provides a user interface that lets you visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace. 3.2.4.3. Red Hat OpenShift distributed tracing platform features Red Hat OpenShift distributed tracing platform provides the following capabilities: Integration with Kiali - When properly configured, you can view distributed tracing platform data from the Kiali console. High scalability - The distributed tracing platform back end is designed to have no single points of failure and to scale with the business needs. Distributed Context Propagation - Enables you to connect data from different components together to create a complete end-to-end trace. Backwards compatibility with Zipkin - Red Hat OpenShift distributed tracing platform has APIs that enable it to be used as a drop-in replacement for Zipkin, but Red Hat is not supporting Zipkin compatibility in this release. 3.2.5. Next steps Prepare to install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 3.3. Service Mesh and Istio differences Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . An installation of Red Hat OpenShift Service Mesh differs from upstream Istio community installations in multiple ways. The modifications to Red Hat OpenShift Service Mesh are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. The current release of Red Hat OpenShift Service Mesh differs from the current upstream Istio community release in the following ways: 3.3.1. Multitenant installations Whereas upstream Istio takes a single-tenant approach, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. Red Hat OpenShift Service Mesh uses a multitenant operator to manage the control plane lifecycle. Red Hat OpenShift Service Mesh installs a multitenant control plane by default. You specify the projects that can access the Service Mesh, and isolate the Service Mesh from other control plane instances. 3.3.1.1. Multitenancy versus cluster-wide installations The main difference between a multitenant installation and a cluster-wide installation is the scope of privileges used by istiod. The components no longer use the cluster-scoped Role Based Access Control (RBAC) resource ClusterRoleBinding.
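For reference, the member projects discussed here are enumerated in a ServiceMeshMemberRoll resource similar to the following sketch. The bookinfo and my-app project names are placeholders; the resource is created in the control plane project (istio-system in the documentation examples) and is typically named default.
Example ServiceMeshMemberRoll
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system   # the project that contains the control plane
spec:
  members:
    # placeholder member projects; each listed project becomes part of the mesh
    - bookinfo
    - my-app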
Every project in the ServiceMeshMemberRoll members list will have a RoleBinding for each service account associated with the control plane deployment, and each control plane deployment will only watch those member projects. Each member project has a maistra.io/member-of label added to it, where the member-of value is the project containing the control plane installation. Red Hat OpenShift Service Mesh configures each member project to ensure network access between itself, the control plane, and other member projects by creating a NetworkPolicy resource in each member project allowing ingress to all pods from the other members and the control plane. If you remove a member from Service Mesh, this NetworkPolicy resource is deleted from the project. Note This also restricts ingress to only member projects. If you require ingress from non-member projects, you need to create a NetworkPolicy to allow that traffic through. 3.3.1.2. Cluster scoped resources Upstream Istio has two cluster-scoped resources that it relies on: MeshPolicy and ClusterRbacConfig. These are not compatible with a multitenant cluster and have been replaced as described below. ServiceMeshPolicy replaces MeshPolicy for configuration of control-plane-wide authentication policies. This must be created in the same project as the control plane. ServiceMeshRbacConfig replaces ClusterRbacConfig for configuration of control-plane-wide role-based access control. This must be created in the same project as the control plane. 3.3.2. Differences between Istio and Red Hat OpenShift Service Mesh An installation of Red Hat OpenShift Service Mesh differs from an installation of Istio in multiple ways. The modifications to Red Hat OpenShift Service Mesh are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. 3.3.2.1. Command line tool The command line tool for Red Hat OpenShift Service Mesh is oc. Red Hat OpenShift Service Mesh does not support istioctl. 3.3.2.2. Automatic injection The upstream Istio community installation automatically injects the sidecar into pods within the projects you have labeled. Red Hat OpenShift Service Mesh does not automatically inject the sidecar into any pods, but requires you to opt in to injection using an annotation without labeling projects. This method requires fewer privileges and does not conflict with other OpenShift capabilities such as builder pods. To enable automatic injection, you specify the sidecar.istio.io/inject annotation as described in the Automatic sidecar injection section. 3.3.2.3. Istio Role Based Access Control features Istio Role Based Access Control (RBAC) provides a mechanism you can use to control access to a service. You can identify subjects by user name or by specifying a set of properties and apply access controls accordingly. The upstream Istio community installation includes options to perform exact header matches, match wildcards in headers, or check for a header containing a specific prefix or suffix. Red Hat OpenShift Service Mesh extends the ability to match request headers by using a regular expression. Specify a property key of request.regex.headers with a regular expression.
Upstream Istio community matching request headers example apiVersion: "rbac.istio.io/v1alpha1" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account" properties: request.headers[<header>]: "value" Red Hat OpenShift Service Mesh matching request headers by using regular expressions apiVersion: "rbac.istio.io/v1alpha1" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account" properties: request.regex.headers[<header>]: "<regular expression>" 3.3.2.4. OpenSSL Red Hat OpenShift Service Mesh replaces BoringSSL with OpenSSL. OpenSSL is a software library that contains an open source implementation of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. The Red Hat OpenShift Service Mesh Proxy binary dynamically links the OpenSSL libraries (libssl and libcrypto) from the underlying Red Hat Enterprise Linux operating system. 3.3.2.5. Component modifications A maistra-version label has been added to all resources. All Ingress resources have been converted to OpenShift Route resources. Grafana, Tracing (Jaeger), and Kiali are enabled by default and exposed through OpenShift routes. Godebug has been removed from all templates The istio-multi ServiceAccount and ClusterRoleBinding have been removed, as well as the istio-reader ClusterRole. 3.3.2.6. Envoy, Secret Discovery Service, and certificates Red Hat OpenShift Service Mesh does not support QUIC-based services. Deployment of TLS certificates using the Secret Discovery Service (SDS) functionality of Istio is not currently supported in Red Hat OpenShift Service Mesh. The Istio implementation depends on a nodeagent container that uses hostPath mounts. 3.3.2.7. Istio Container Network Interface (CNI) plugin Red Hat OpenShift Service Mesh includes CNI plugin, which provides you with an alternate way to configure application pod networking. The CNI plugin replaces the init-container network configuration eliminating the need to grant service accounts and projects access to Security Context Constraints (SCCs) with elevated privileges. 3.3.2.8. Routes for Istio Gateways OpenShift routes for Istio Gateways are automatically managed in Red Hat OpenShift Service Mesh. Every time an Istio Gateway is created, updated or deleted inside the service mesh, an OpenShift route is created, updated or deleted. A Red Hat OpenShift Service Mesh control plane component called Istio OpenShift Routing (IOR) synchronizes the gateway route. For more information, see Automatic route creation. 3.3.2.8.1. Catch-all domains Catch-all domains ("*") are not supported. If one is found in the Gateway definition, Red Hat OpenShift Service Mesh will create the route, but will rely on OpenShift to create a default hostname. This means that the newly created route will not be a catch all ("*") route, instead it will have a hostname in the form <route-name>[-<project>].<suffix> . See the OpenShift documentation for more information about how default hostnames work and how a cluster administrator can customize it. 3.3.2.8.2. Subdomains Subdomains (e.g.: "*.domain.com") are supported. However this ability does not come enabled by default in OpenShift Container Platform. 
This means that Red Hat OpenShift Service Mesh will create the route with the subdomain, but it will only be in effect if OpenShift Container Platform is configured to enable it. 3.3.2.8.3. Transport layer security Transport Layer Security (TLS) is supported. This means that, if the Gateway contains a tls section, the OpenShift Route will be configured to support TLS. Additional resources Automatic route creation 3.3.3. Kiali and service mesh Installing Kiali via the Service Mesh on OpenShift Container Platform differs from community Kiali installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. Kiali has been enabled by default. Ingress has been enabled by default. Updates have been made to the Kiali ConfigMap. Updates have been made to the ClusterRole settings for Kiali. Do not edit the ConfigMap, because your changes might be overwritten by the Service Mesh or Kiali Operators. Files that the Kiali Operator manages have a kiali.io/ label or annotation. Updating the Operator files should be restricted to those users with cluster-admin privileges. If you use Red Hat OpenShift Dedicated, updating the Operator files should be restricted to those users with dedicated-admin privileges. 3.3.4. Distributed tracing and service mesh Installing the distributed tracing platform (Jaeger) with the Service Mesh on OpenShift Container Platform differs from community Jaeger installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. Distributed tracing has been enabled by default for Service Mesh. Ingress has been enabled by default for Service Mesh. The name for the Zipkin port name has changed to jaeger-collector-zipkin (from http ) Jaeger uses Elasticsearch for storage by default when you select either the production or streaming deployment option. The community version of Istio provides a generic "tracing" route. Red Hat OpenShift Service Mesh uses a "jaeger" route that is installed by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator and is already protected by OAuth. Red Hat OpenShift Service Mesh uses a sidecar for the Envoy proxy, and Jaeger also uses a sidecar, for the Jaeger agent. These two sidecars are configured separately and should not be confused with each other. The proxy sidecar creates spans related to the pod's ingress and egress traffic. The agent sidecar receives the spans emitted by the application and sends them to the Jaeger Collector. 3.4. Preparing to install Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . Before you can install Red Hat OpenShift Service Mesh, review the installation activities, ensure that you meet the prerequisites: 3.4.1. Prerequisites Possess an active OpenShift Container Platform subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information. Review the OpenShift Container Platform 4.18 overview . 
Install OpenShift Container Platform 4.18. Install OpenShift Container Platform 4.18 on AWS Install OpenShift Container Platform 4.18 on AWS with user-provisioned infrastructure Install OpenShift Container Platform 4.18 on bare metal Install OpenShift Container Platform 4.18 on vSphere Note If you are installing Red Hat OpenShift Service Mesh on a restricted network , follow the instructions for your chosen OpenShift Container Platform infrastructure. Install the version of the OpenShift Container Platform command line utility (the oc client tool) that matches your OpenShift Container Platform version and add it to your path. If you are using OpenShift Container Platform 4.18, see About the OpenShift CLI . 3.4.2. Red Hat OpenShift Service Mesh supported configurations The following are the only supported configurations for the Red Hat OpenShift Service Mesh: OpenShift Container Platform version 4.6 or later. Note OpenShift Online and Red Hat OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh. The deployment must be contained within a single OpenShift Container Platform cluster that is not federated. This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64. This release only supports configurations where all Service Mesh components are contained in the OpenShift Container Platform cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario. This release only supports configurations that do not integrate external services such as virtual machines. For additional information about Red Hat OpenShift Service Mesh lifecycle and supported configurations, refer to the Support Policy . 3.4.2.1. Supported configurations for Kiali on Red Hat OpenShift Service Mesh The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers. 3.4.2.2. Supported Mixer adapters This release only supports the following Mixer adapter: 3scale Istio Adapter 3.4.3. Service Mesh Operators overview Red Hat OpenShift Service Mesh requires the use of the Red Hat OpenShift Service Mesh Operator which allows you to connect, secure, control, and observe the microservices that comprise your applications. You can also install other Operators to enhance your service mesh experience. Warning Do not install Community versions of the Operators. Community Operators are not supported. The following Operator is required: Red Hat OpenShift Service Mesh Operator Allows you to connect, secure, control, and observe the microservices that comprise your applications. It also defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components. It is based on the open source Istio project. The following Operators are optional: Kiali Operator provided by Red Hat Provides observability for your service mesh. You can view configurations, monitor traffic, and analyze traces in a single console. It is based on the open source Kiali project. Red Hat OpenShift distributed tracing platform (Tempo) Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Grafana Tempo project. 
The following optional Operators are deprecated: Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator are deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for these features during the current release lifecycle, but these features will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. Red Hat OpenShift distributed tracing platform (Jaeger) Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Jaeger project. OpenShift Elasticsearch Operator Provides database storage for tracing and logging with the distributed tracing platform (Jaeger). It is based on the open source Elasticsearch project. 3.4.4. Next steps Install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 3.5. Installing Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . Installing the Service Mesh involves installing the OpenShift Elasticsearch, Jaeger, Kiali, and Service Mesh Operators, creating and managing a ServiceMeshControlPlane resource to deploy the control plane, and creating a ServiceMeshMemberRoll resource to specify the namespaces associated with the Service Mesh. Note Mixer's policy enforcement is disabled by default. You must enable it to run policy tasks. See Update Mixer policy enforcement for instructions on enabling Mixer policy enforcement. Note Multi-tenant control plane installations are the default configuration. Note The Service Mesh documentation uses istio-system as the example project, but you can deploy the service mesh to any project. 3.5.1. Prerequisites Follow the Preparing to install Red Hat OpenShift Service Mesh process. An account with the cluster-admin role. The Service Mesh installation process uses the OperatorHub to install the ServiceMeshControlPlane custom resource definition within the openshift-operators project. The Red Hat OpenShift Service Mesh Operator defines and monitors the ServiceMeshControlPlane resources related to the deployment, update, and deletion of the control plane. Starting with Red Hat OpenShift Service Mesh 1.1.18.2, you must install the OpenShift Elasticsearch Operator, the Jaeger Operator, and the Kiali Operator before the Red Hat OpenShift Service Mesh Operator can install the control plane. 3.5.2. Installing the OpenShift Elasticsearch Operator The default Red Hat OpenShift distributed tracing platform (Jaeger) deployment uses in-memory storage because it is designed to be installed quickly for those evaluating Red Hat OpenShift distributed tracing platform, giving demonstrations, or using Red Hat OpenShift distributed tracing platform (Jaeger) in a test environment. If you plan to use Red Hat OpenShift distributed tracing platform (Jaeger) in production, you must install and configure a persistent storage option, in this case, Elasticsearch. Prerequisites You have access to the OpenShift Container Platform web console.
You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Warning Do not install Community versions of the Operators. Community Operators are not supported. Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Type Elasticsearch into the filter box to locate the OpenShift Elasticsearch Operator. Click the OpenShift Elasticsearch Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators-redhat project and makes the Operator available to all projects in the cluster. Note The Elasticsearch installation requires the openshift-operators-redhat namespace for the OpenShift Elasticsearch Operator. The other Red Hat OpenShift distributed tracing platform Operators are installed in the openshift-operators namespace. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . On the Installed Operators page, select the openshift-operators-redhat project. Wait for the InstallSucceeded status of the OpenShift Elasticsearch Operator before continuing. 3.5.3. Installing the Red Hat OpenShift distributed tracing platform Operator You can install the Red Hat OpenShift distributed tracing platform Operator through the OperatorHub . By default, the Operator is installed in the openshift-operators project. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. If you require persistent storage, you must install the OpenShift Elasticsearch Operator before installing the Red Hat OpenShift distributed tracing platform Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Search for the Red Hat OpenShift distributed tracing platform Operator by entering distributed tracing platform in the search field. 
Select the Red Hat OpenShift distributed tracing platform Operator, which is provided by Red Hat , to display information about the Operator. Click Install . For the Update channel on the Install Operator page, select stable to automatically update the Operator when new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Accept the default Automatic approval strategy. Note If you accept this default, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of this Operator when a new version of the Operator becomes available. If you select Manual updates, the OLM creates an update request when a new version of the Operator becomes available. To update the Operator to the new version, you must then manually approve the update request as a cluster administrator. The Manual approval strategy requires a cluster administrator to manually approve Operator installation and subscription. Click Install . Navigate to Operators Installed Operators . On the Installed Operators page, select the openshift-operators project. Wait for the Succeeded status of the Red Hat OpenShift distributed tracing platform Operator before continuing. 3.5.4. Installing the Kiali Operator You must install the Kiali Operator for the Red Hat OpenShift Service Mesh Operator to install the Service Mesh control plane. Warning Do not install Community versions of the Operators. Community Operators are not supported. Prerequisites Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Type Kiali into the filter box to find the Kiali Operator. Click the Kiali Operator provided by Red Hat to display information about the Operator. Click Install . On the Operator Installation page, select the stable Update Channel. Select All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Select the Automatic Approval Strategy. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . The Installed Operators page displays the Kiali Operator's installation progress. 3.5.5. Installing the Operators To install Red Hat OpenShift Service Mesh, you must install the Red Hat OpenShift Service Mesh Operator. Repeat the procedure for each additional Operator you want to install. Additional Operators include: Kiali Operator provided by Red Hat Tempo Operator Deprecated additional Operators include: Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator are deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for these features during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. 
Red Hat OpenShift distributed tracing platform (Jaeger) OpenShift Elasticsearch Operator Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. In the OpenShift Container Platform web console, click Operators OperatorHub . Type the name of the Operator into the filter box and select the Red Hat version of the Operator. Community versions of the Operators are not supported. Click Install . On the Install Operator page for each Operator, accept the default settings. Click Install . Wait until the Operator installs before repeating the steps for the next Operator you want to install. The Red Hat OpenShift Service Mesh Operator installs in the openshift-operators namespace and is available for all namespaces in the cluster. The Kiali Operator provided by Red Hat installs in the openshift-operators namespace and is available for all namespaces in the cluster. The Tempo Operator installs in the openshift-tempo-operator namespace and is available for all namespaces in the cluster. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator installs in the openshift-distributed-tracing namespace and is available for all namespaces in the cluster. Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) is deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. The OpenShift Elasticsearch Operator installs in the openshift-operators-redhat namespace and is available for all namespaces in the cluster. Important Starting with Red Hat OpenShift Service Mesh 2.5, OpenShift Elasticsearch Operator is deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. Verification After you have installed all four Operators, click Operators Installed Operators to verify that your Operators are installed. 3.5.6. Deploying the Red Hat OpenShift Service Mesh control plane The ServiceMeshControlPlane resource defines the configuration to be used during installation. You can deploy the default configuration provided by Red Hat or customize the ServiceMeshControlPlane file to fit your business needs. You can deploy the Service Mesh control plane by using the OpenShift Container Platform web console or from the command line using the oc client tool. 3.5.6.1. Deploying the control plane from the web console Follow this procedure to deploy the Red Hat OpenShift Service Mesh control plane by using the web console. In this example, istio-system is the name of the control plane project. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. Review the instructions for how to customize the Red Hat OpenShift Service Mesh installation. An account with the cluster-admin role. 
Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a project named istio-system . Navigate to Home Projects . Click Create Project . Enter istio-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select istio-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift Service Mesh Operator. Under Provided APIs , the Operator provides links to create two resource types: A ServiceMeshControlPlane resource A ServiceMeshMemberRoll resource Under Istio Service Mesh Control Plane click Create ServiceMeshControlPlane . On the Create Service Mesh Control Plane page, modify the YAML for the default ServiceMeshControlPlane template as needed. Note For additional information about customizing the control plane, see customizing the Red Hat OpenShift Service Mesh installation. For production, you must change the default Jaeger template. Click Create to create the control plane. The Operator creates pods, services, and Service Mesh control plane components based on your configuration parameters. Click the Istio Service Mesh Control Plane tab. Click the name of the new control plane. Click the Resources tab to see the Red Hat OpenShift Service Mesh control plane resources the Operator created and configured. 3.5.6.2. Deploying the control plane from the CLI Follow this procedure to deploy the Red Hat OpenShift Service Mesh control plane the command line. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. Review the instructions for how to customize the Red Hat OpenShift Service Mesh installation. An account with the cluster-admin role. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 Create a project named istio-system . USD oc new-project istio-system Create a ServiceMeshControlPlane file named istio-installation.yaml using the example found in "Customize the Red Hat OpenShift Service Mesh installation". You can customize the values as needed to match your use case. For production deployments you must change the default Jaeger template. Run the following command to deploy the control plane: USD oc create -n istio-system -f istio-installation.yaml Execute the following command to see the status of the control plane installation. USD oc get smcp -n istio-system The installation has finished successfully when the STATUS column is ComponentsReady . Run the following command to watch the progress of the Pods during the installation process: You should see output similar to the following: Example output NAME READY STATUS RESTARTS AGE grafana-7bf5764d9d-2b2f6 2/2 Running 0 28h istio-citadel-576b9c5bbd-z84z4 1/1 Running 0 28h istio-egressgateway-5476bc4656-r4zdv 1/1 Running 0 28h istio-galley-7d57b47bb7-lqdxv 1/1 Running 0 28h istio-ingressgateway-dbb8f7f46-ct6n5 1/1 Running 0 28h istio-pilot-546bf69578-ccg5x 2/2 Running 0 28h istio-policy-77fd498655-7pvjw 2/2 Running 0 28h istio-sidecar-injector-df45bd899-ctxdt 1/1 Running 0 28h istio-telemetry-66f697d6d5-cj28l 2/2 Running 0 28h jaeger-896945cbc-7lqrr 2/2 Running 0 11h kiali-78d9c5b87c-snjzh 1/1 Running 0 22h prometheus-6dff867c97-gr2n5 2/2 Running 0 28h For a multitenant installation, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. 
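The CLI procedure above refers to a ServiceMeshControlPlane file named istio-installation.yaml but does not show one. The following is a minimal sketch of such a resource for a small, non-production deployment; the resource name, the disabled autoscaling, and the all-in-one Jaeger template are illustrative values only, and you should replace them with the settings described in "Customize the Red Hat OpenShift Service Mesh installation".
Example istio-installation.yaml (illustrative sketch)
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
spec:
  istio:
    gateways:
      istio-egressgateway:
        autoscaleEnabled: false
      istio-ingressgateway:
        autoscaleEnabled: false
    mixer:
      policy:
        autoscaleEnabled: false
      telemetry:
        autoscaleEnabled: false
    pilot:
      autoscaleEnabled: false
      traceSampling: 100
    kiali:
      enabled: true
    grafana:
      enabled: true
    tracing:
      enabled: true
      jaeger:
        template: all-in-one    # default Jaeger template; change this for production deployments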
You can create reusable configurations with ServiceMeshControlPlane templates. For more information, see Creating control plane templates . 3.5.7. Creating the Red Hat OpenShift Service Mesh member roll The ServiceMeshMemberRoll lists the projects that belong to the Service Mesh control plane. Only projects listed in the ServiceMeshMemberRoll are affected by the control plane. A project does not belong to a service mesh until you add it to the member roll for a particular control plane deployment. You must create a ServiceMeshMemberRoll resource named default in the same project as the ServiceMeshControlPlane , for example istio-system . 3.5.7.1. Creating the member roll from the web console You can add one or more projects to the Service Mesh member roll from the web console. In this example, istio-system is the name of the Service Mesh control plane project. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. List of existing projects to add to the service mesh. Procedure Log in to the OpenShift Container Platform web console. If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane. Navigate to Home Projects . Enter a name in the Name field. Click Create . Navigate to Operators Installed Operators . Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. Click Create ServiceMeshMemberRoll Click Members , then enter the name of your project in the Value field. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Click Create . 3.5.7.2. Creating the member roll from the CLI You can add a project to the ServiceMeshMemberRoll from the command line. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. List of projects to add to the service mesh. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane. USD oc new-project <your-project> To add your projects as members, modify the following example YAML. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. In this example, istio-system is the name of the Service Mesh control plane project. Example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name Run the following command to upload and create the ServiceMeshMemberRoll resource in the istio-system namespace. USD oc create -n istio-system -f servicemeshmemberroll-default.yaml Run the following command to verify the ServiceMeshMemberRoll was created successfully. USD oc get smmr -n istio-system default The installation has finished successfully when the STATUS column is Configured . 3.5.8. 
Adding or removing projects from the service mesh You can add or remove projects from an existing Service Mesh ServiceMeshMemberRoll resource using the web console. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. The ServiceMeshMemberRoll resource is deleted when its corresponding ServiceMeshControlPlane resource is deleted. 3.5.8.1. Adding or removing projects from the member roll using the web console Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. An existing ServiceMeshMemberRoll resource. Name of the project with the ServiceMeshMemberRoll resource. Names of the projects you want to add or remove from the mesh. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. Click the default link. Click the YAML tab. Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Click Save . Click Reload . 3.5.8.2. Adding or removing projects from the member roll using the CLI You can modify an existing Service Mesh member roll using the command line. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. An existing ServiceMeshMemberRoll resource. Name of the project with the ServiceMeshMemberRoll resource. Names of the projects you want to add or remove from the mesh. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI. Edit the ServiceMeshMemberRoll resource. USD oc edit smmr -n <controlplane-namespace> Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name 3.5.9. Manual updates If you choose to update manually, the Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. OLM runs by default in OpenShift Container Platform. OLM uses CatalogSources, which use the Operator Registry API, to query for available Operators as well as upgrades for installed Operators. For more information about how OpenShift Container Platform handled upgrades, refer to the Operator Lifecycle Manager documentation. 3.5.9.1. Updating sidecar proxies In order to update the configuration for sidecar proxies the application administrator must restart the application pods. If your deployment uses automatic sidecar injection, you can update the pod template in the deployment by adding or modifying an annotation. Run the following command to redeploy the pods: USD oc patch deployment/<deployment> -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt": "'`date -Iseconds`'"}}}}}' If your deployment does not use automatic sidecar injection, you must manually update the sidecars by modifying the sidecar container image specified in the deployment or pod, and then restart the pods. 3.5.10. 
steps Prepare to deploy applications on Red Hat OpenShift Service Mesh. 3.6. Customizing security in a Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . If your service mesh application is constructed with a complex array of microservices, you can use Red Hat OpenShift Service Mesh to customize the security of the communication between those services. The infrastructure of OpenShift Container Platform along with the traffic management features of Service Mesh can help you manage the complexity of your applications and provide service and identity security for microservices. 3.6.1. Enabling mutual Transport Layer Security (mTLS) Mutual Transport Layer Security (mTLS) is a protocol where two parties authenticate each other. It is the default mode of authentication in some protocols (IKE, SSH) and optional in others (TLS). mTLS can be used without changes to the application or service code. The TLS is handled entirely by the service mesh infrastructure and between the two sidecar proxies. By default, Red Hat OpenShift Service Mesh is set to permissive mode, where the sidecars in Service Mesh accept both plain-text traffic and connections that are encrypted using mTLS. If a service in your mesh is communicating with a service outside the mesh, strict mTLS could break communication between those services. Use permissive mode while you migrate your workloads to Service Mesh. 3.6.1.1. Enabling strict mTLS across the mesh If your workloads do not communicate with services outside your mesh and communication will not be interrupted by only accepting encrypted connections, you can enable mTLS across your mesh quickly. Set spec.istio.global.mtls.enabled to true in your ServiceMeshControlPlane resource. The operator creates the required resources. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true 3.6.1.1.1. Configuring sidecars for incoming connections for specific services You can also configure mTLS for individual services or namespaces by creating a policy. apiVersion: "authentication.istio.io/v1alpha1" kind: "Policy" metadata: name: default namespace: <NAMESPACE> spec: peers: - mtls: {} 3.6.1.2. Configuring sidecars for outgoing connections Create a destination rule to configure Service Mesh to use mTLS when sending requests to other services in the mesh. apiVersion: "networking.istio.io/v1alpha3" kind: "DestinationRule" metadata: name: "default" namespace: <CONTROL_PLANE_NAMESPACE>> spec: host: "*.local" trafficPolicy: tls: mode: ISTIO_MUTUAL 3.6.1.3. Setting the minimum and maximum protocol versions If your environment has specific requirements for encrypted traffic in your service mesh, you can control the cryptographic functions that are allowed by setting the spec.security.controlPlane.tls.minProtocolVersion or spec.security.controlPlane.tls.maxProtocolVersion in your ServiceMeshControlPlane resource. Those values, configured in your control plane resource, define the minimum and maximum TLS version used by mesh components when communicating securely over TLS. 
apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: tls: minProtocolVersion: TLSv1_2 maxProtocolVersion: TLSv1_3 The default is TLS_AUTO and does not specify a version of TLS. Table 3.3. Valid values Value Description TLS_AUTO default TLSv1_0 TLS version 1.0 TLSv1_1 TLS version 1.1 TLSv1_2 TLS version 1.2 TLSv1_3 TLS version 1.3 3.6.2. Configuring cipher suites and ECDH curves Cipher suites and Elliptic-curve Diffie-Hellman (ECDH curves) can help you secure your service mesh. You can define a comma separated list of cipher suites using spec.istio.global.tls.cipherSuites and ECDH curves using spec.istio.global.tls.ecdhCurves in your ServiceMeshControlPlane resource. If either of these attributes are empty, then the default values are used. The cipherSuites setting is effective if your service mesh uses TLS 1.2 or earlier. It has no effect when negotiating with TLS 1.3. Set your cipher suites in the comma separated list in order of priority. For example, ecdhCurves: CurveP256, CurveP384 sets CurveP256 as a higher priority than CurveP384 . Note You must include either TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 or TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 when you configure the cipher suite. HTTP/2 support requires at least one of these cipher suites. The supported cipher suites are: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA256 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA The supported ECDH Curves are: CurveP256 CurveP384 CurveP521 X25519 3.6.3. Adding an external certificate authority key and certificate By default, Red Hat OpenShift Service Mesh generates self-signed root certificate and key, and uses them to sign the workload certificates. You can also use the user-defined certificate and key to sign workload certificates, with user-defined root certificate. This task demonstrates an example to plug certificates and key into Service Mesh. Prerequisites You must have installed Red Hat OpenShift Service Mesh with mutual TLS enabled to configure certificates. This example uses the certificates from the Maistra repository . For production, use your own certificates from your certificate authority. You must deploy the Bookinfo sample application to verify the results with these instructions. 3.6.3.1. Adding an existing certificate and key To use an existing signing (CA) certificate and key, you must create a chain of trust file that includes the CA certificate, key, and root certificate. You must use the following exact file names for each of the corresponding certificates. The CA certificate is called ca-cert.pem , the key is ca-key.pem , and the root certificate, which signs ca-cert.pem , is called root-cert.pem . If your workload uses intermediate certificates, you must specify them in a cert-chain.pem file. Add the certificates to Service Mesh by following these steps. Save the example certificates from the Maistra repo locally and replace <path> with the path to your certificates. 
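The location of the certificate files is up to you. As one possible way to fetch the sample files, you could clone the Maistra repository and point <path> at its certificate samples. The branch and the samples/certs directory shown below are assumptions based on the repository locations referenced elsewhere in this guide, so verify them before relying on this sketch.
$ git clone --depth 1 --branch maistra-2.6 https://github.com/Maistra/istio.git
$ export CERT_PATH=istio/samples/certs    # assumed location of ca-cert.pem, ca-key.pem, root-cert.pem, and cert-chain.pem
$ ls "$CERT_PATH"
You can then substitute this directory for <path> in the steps that follow.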
Create a secret cacert that includes the input files ca-cert.pem , ca-key.pem , root-cert.pem and cert-chain.pem . USD oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem \ --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem \ --from-file=<path>/cert-chain.pem In the ServiceMeshControlPlane resource set global.mtls.enabled to true and security.selfSigned set to false . Service Mesh reads the certificates and key from the secret-mount files. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: false To make sure the workloads add the new certificates promptly, delete the secrets generated by Service Mesh, named istio.* . In this example, istio.default . Service Mesh issues new certificates for the workloads. USD oc delete secret istio.default 3.6.3.2. Verifying your certificates Use the Bookinfo sample application to verify your certificates are mounted correctly. First, retrieve the mounted certificates. Then, verify the certificates mounted on the pod. Store the pod name in the variable RATINGSPOD . USD RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'` Run the following commands to retrieve the certificates mounted on the proxy. USD oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem The file /tmp/pod-root-cert.pem contains the root certificate propagated to the pod. USD oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem The file /tmp/pod-cert-chain.pem contains the workload certificate and the CA certificate propagated to the pod. Verify the root certificate is the same as the one specified by the Operator. Replace <path> with the path to your certificates. USD openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt USD openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt USD diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt Expect the output to be empty. Verify the CA certificate is the same as the one specified by Operator. Replace <path> with the path to your certificates. USD sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem USD openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt USD openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt USD diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt Expect the output to be empty. Verify the certificate chain from the root certificate to the workload certificate. Replace <path> with the path to your certificates. USD head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem USD openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem Example output /tmp/pod-cert-chain-workload.pem: OK 3.6.3.3. Removing the certificates To remove the certificates you added, follow these steps. Remove the secret cacerts . USD oc delete secret cacerts -n istio-system Redeploy Service Mesh with a self-signed root certificate in the ServiceMeshControlPlane resource. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: true 3.7. Traffic management Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. 
For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . You can control the flow of traffic and API calls between services in Red Hat OpenShift Service Mesh. For example, some services in your service mesh may need to communicate within the mesh and others may need to be hidden. Manage the traffic to hide specific backend services, expose services, create testing or versioning deployments, or add a security layer on a set of services. 3.7.1. Using gateways You can use a gateway to manage inbound and outbound traffic for your mesh to specify which traffic you want to enter or leave the mesh. Gateway configurations are applied to standalone Envoy proxies that are running at the edge of the mesh, rather than sidecar Envoy proxies running alongside your service workloads. Unlike other mechanisms for controlling traffic entering your systems, such as the Kubernetes Ingress APIs, Red Hat OpenShift Service Mesh gateways use the full power and flexibility of traffic routing. The Red Hat OpenShift Service Mesh gateway resource can use layer 4-6 load balancing properties, such as ports, to expose and configure Red Hat OpenShift Service Mesh TLS settings. Instead of adding application-layer traffic routing (L7) to the same API resource, you can bind a regular Red Hat OpenShift Service Mesh virtual service to the gateway and manage gateway traffic like any other data plane traffic in a service mesh. Gateways are primarily used to manage ingress traffic, but you can also configure egress gateways. An egress gateway lets you configure a dedicated exit node for the traffic leaving the mesh. This enables you to limit which services have access to external networks, which adds security control to your service mesh. You can also use a gateway to configure a purely internal proxy. Gateway example A gateway resource describes a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections. The specification describes a set of ports that should be exposed, the type of protocol to use, SNI configuration for the load balancer, and so on. The following example shows a sample gateway configuration for external HTTPS ingress traffic: apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key This gateway configuration lets HTTPS traffic from ext-host.example.com into the mesh on port 443, but doesn't specify any routing for the traffic. To specify routing and for the gateway to work as intended, you must also bind the gateway to a virtual service. You do this using the virtual service's gateways field, as shown in the following example: apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy You can then configure the virtual service with routing rules for the external traffic. 3.7.2. Configuring an ingress gateway An ingress gateway is a load balancer operating at the edge of the mesh that receives incoming HTTP/TCP connections. It configures exposed ports and protocols but does not include any traffic routing configuration. 
Traffic routing for ingress traffic is instead configured with routing rules, the same way as for internal service requests. The following steps show how to create a gateway and configure a VirtualService to expose a service in the Bookinfo sample application to outside traffic for paths /productpage and /login . Procedure Create a gateway to accept traffic. Create a YAML file, and copy the following YAML into it. Gateway example gateway.yaml apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - "*" Apply the YAML file. USD oc apply -f gateway.yaml Create a VirtualService object to rewrite the host header. Create a YAML file, and copy the following YAML into it. Virtual service example apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - "*" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080 Apply the YAML file. USD oc apply -f vs.yaml Test that the gateway and VirtualService have been set correctly. Set the Gateway URL. export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') Set the port number. In this example, istio-system is the name of the Service Mesh control plane project. export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}') Test a page that has been explicitly exposed. curl -s -I "USDGATEWAY_URL/productpage" The expected result is 200 . 3.7.3. Managing ingress traffic In Red Hat OpenShift Service Mesh, the Ingress Gateway enables features such as monitoring, security, and route rules to apply to traffic that enters the cluster. Use a Service Mesh gateway to expose a service outside of the service mesh. 3.7.3.1. Determining the ingress IP and ports Ingress configuration differs depending on if your environment supports an external load balancer. An external load balancer is set in the ingress IP and ports for the cluster. To determine if your cluster's IP and ports are configured for external load balancers, run the following command. In this example, istio-system is the name of the Service Mesh control plane project. USD oc get svc istio-ingressgateway -n istio-system That command returns the NAME , TYPE , CLUSTER-IP , EXTERNAL-IP , PORT(S) , and AGE of each item in your namespace. If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> , or perpetually <pending> , your environment does not provide an external load balancer for the ingress gateway. 3.7.3.1.1. Determining ingress ports with a load balancer Follow these instructions if your environment has an external load balancer. Procedure Run the following command to set the ingress IP and ports. This command sets a variable in your terminal. USD export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}') Run the following command to set the ingress port. USD export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}') Run the following command to set the secure ingress port. 
USD export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}') Run the following command to set the TCP ingress port. USD export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}') Note In some environments, the load balancer may be exposed using a hostname instead of an IP address. For that case, the ingress gateway's EXTERNAL-IP value is not an IP address. Instead, it is a hostname, and the command fails to set the INGRESS_HOST environment variable. In that case, use the following command to correct the INGRESS_HOST value: USD export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') 3.7.3.1.2. Determining ingress ports without a load balancer If your environment does not have an external load balancer, determine the ingress ports and use a node port instead. Procedure Set the ingress ports. USD export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}') Run the following command to set the secure ingress port. USD export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}') Run the following command to set the TCP ingress port. USD export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].nodePort}') 3.7.4. Automatic route creation OpenShift routes for Istio Gateways are automatically managed in Red Hat OpenShift Service Mesh. Every time an Istio Gateway is created, updated or deleted inside the service mesh, an OpenShift route is created, updated or deleted. 3.7.4.1. Enabling Automatic Route Creation A Red Hat OpenShift Service Mesh control plane component called Istio OpenShift Routing (IOR) synchronizes the gateway route. Enable IOR as part of the control plane deployment. If the Gateway contains a TLS section, the OpenShift Route will be configured to support TLS. In the ServiceMeshControlPlane resource, add the ior_enabled parameter and set it to true . For example, see the following resource snippet: spec: istio: gateways: istio-egressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 istio-ingressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 ior_enabled: true 3.7.4.2. Subdomains Red Hat OpenShift Service Mesh creates the route with the subdomain, but OpenShift Container Platform must be configured to enable it. Subdomains, for example *.domain.com , are supported but not by default. Configure an OpenShift Container Platform wildcard policy before configuring a wildcard host Gateway. For more information, see the "Links" section. If the following gateway is created: apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com Then, the following OpenShift Routes are created automatically. You can check that the routes are created with the following command. 
USD oc -n <control_plane_namespace> get routes Expected output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None If the gateway is deleted, Red Hat OpenShift Service Mesh deletes the routes. However, routes created manually are never modified by Red Hat OpenShift Service Mesh. 3.7.5. Understanding service entries A service entry adds an entry to the service registry that Red Hat OpenShift Service Mesh maintains internally. After you add the service entry, the Envoy proxies send traffic to the service as if it is a service in your mesh. Service entries allow you to do the following: Manage traffic for services that run outside of the service mesh. Redirect and forward traffic for external destinations (such as, APIs consumed from the web) or traffic to services in legacy infrastructure. Define retry, timeout, and fault injection policies for external destinations. Run a mesh service in a Virtual Machine (VM) by adding VMs to your mesh. Note Add services from a different cluster to the mesh to configure a multicluster Red Hat OpenShift Service Mesh mesh on Kubernetes. Service entry examples The following example is a mesh-external service entry that adds the ext-resource external dependency to the Red Hat OpenShift Service Mesh service registry: apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS Specify the external resource using the hosts field. You can qualify it fully or use a wildcard prefixed domain name. You can configure virtual services and destination rules to control traffic to a service entry in the same way you configure traffic for any other service in the mesh. For example, the following destination rule configures the traffic route to use mutual TLS to secure the connection to the ext-svc.example.com external service that is configured using the service entry: apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem 3.7.6. Using VirtualServices You can route requests dynamically to multiple versions of a microservice through Red Hat OpenShift Service Mesh with a virtual service. With virtual services, you can: Address multiple application services through a single virtual service. If your mesh uses Kubernetes, for example, you can configure a virtual service to handle all services in a specific namespace. A virtual service enables you to turn a monolithic application into a service consisting of distinct microservices with a seamless consumer experience. Configure traffic rules in combination with gateways to control ingress and egress traffic. 3.7.6.1. Configuring VirtualServices Requests are routed to services within a service mesh with virtual services. Each virtual service consists of a set of routing rules that are evaluated in order. Red Hat OpenShift Service Mesh matches each given request to the virtual service to a specific real destination within the mesh. Without virtual services, Red Hat OpenShift Service Mesh distributes traffic using least requests load balancing between all service instances. 
With a virtual service, you can specify traffic behavior for one or more hostnames. Routing rules in the virtual service tell Red Hat OpenShift Service Mesh how to send the traffic for the virtual service to appropriate destinations. Route destinations can be versions of the same service or entirely different services. Procedure Create a YAML file using the following example to route requests to different versions of the Bookinfo sample application service depending on which user connects to the application. Example VirtualService.yaml apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3 Run the following command to apply VirtualService.yaml , where VirtualService.yaml is the path to the file. USD oc apply -f <VirtualService.yaml> 3.7.6.2. VirtualService configuration reference Parameter Description The hosts field lists the virtual service's destination address to which the routing rules apply. This is the address(es) that are used to send requests to the service. The virtual service hostname can be an IP address, a DNS name, or a short name that resolves to a fully qualified domain name. The http section contains the virtual service's routing rules which describe match conditions and actions for routing HTTP/1.1, HTTP2, and gRPC traffic sent to the destination as specified in the hosts field. A routing rule consists of the destination where you want the traffic to go and any specified match conditions. The first routing rule in the example has a condition that begins with the match field. In this example, this routing applies to all requests from the user jason . Add the headers , end-user , and exact fields to select the appropriate requests. The destination field in the route section specifies the actual destination for traffic that matches this condition. Unlike the virtual service's host, the destination's host must be a real destination that exists in the Red Hat OpenShift Service Mesh service registry. This can be a mesh service with proxies or a non-mesh service added using a service entry. In this example, the hostname is a Kubernetes service name: 3.7.7. Understanding destination rules Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic's real destination. Virtual services route traffic to a destination. Destination rules configure what happens to traffic at that destination. By default, Red Hat OpenShift Service Mesh uses a least requests load balancing policy, where the service instance in the pool with the least number of active connections receives the request. Red Hat OpenShift Service Mesh also supports the following models, which you can specify in destination rules for requests to a particular service or service subset. Random: Requests are forwarded at random to instances in the pool. Weighted: Requests are forwarded to instances in the pool according to a specific percentage. Least requests: Requests are forwarded to instances with the least number of requests. 
Destination rule example The following example destination rule configures three different subsets for the my-svc destination service, with different load balancing policies: apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3 This guide references the Bookinfo sample application to provide examples of routing in an example application. Install the Bookinfo application to learn how these routing examples work. 3.7.8. Bookinfo routing tutorial The Service Mesh Bookinfo sample application consists of four separate microservices, each with multiple versions. After installing the Bookinfo sample application, three different versions of the reviews microservice run concurrently. When you access the Bookinfo app /product page in a browser and refresh several times, sometimes the book review output contains star ratings and other times it does not. Without an explicit default service version to route to, Service Mesh routes requests to all available versions one after the other. This tutorial helps you apply rules that route all traffic to v1 (version 1) of the microservices. Later, you can apply a rule to route traffic based on the value of an HTTP request header. Prerequisites Deploy the Bookinfo sample application to work with the following examples. 3.7.8.1. Applying a virtual service In the following procedure, the virtual service routes all traffic to v1 of each micro-service by applying virtual services that set the default version for the micro-services. Procedure Apply the virtual services. USD oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-all-v1.yaml To verify that you applied the virtual services, display the defined routes with the following command: USD oc get virtualservices -o yaml That command returns a resource of kind: VirtualService in YAML format. You have configured Service Mesh to route to the v1 version of the Bookinfo microservices including the reviews service version 1. 3.7.8.2. Testing the new route configuration Test the new configuration by refreshing the /productpage of the Bookinfo application. Procedure Set the value for the GATEWAY_URL parameter. You can use this variable to find the URL for your Bookinfo product page later. In this example, istio-system is the name of the control plane project. export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') Run the following command to retrieve the URL for the product page. echo "http://USDGATEWAY_URL/productpage" Open the Bookinfo site in your browser. The reviews part of the page displays with no rating stars, no matter how many times you refresh. This is because you configured Service Mesh to route all traffic for the reviews service to the version reviews:v1 and this version of the service does not access the star ratings service. Your service mesh now routes traffic to one version of a service. 3.7.8.3. Route based on user identity Change the route configuration so that all traffic from a specific user is routed to a specific service version. In this case, all traffic from a user named jason will be routed to the service reviews:v2 . Service Mesh does not have any special, built-in understanding of user identity. 
This example is enabled by the fact that the productpage service adds a custom end-user header to all outbound HTTP requests to the reviews service. Procedure Run the following command to enable user-based routing in the Bookinfo sample application. USD oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml Run the following command to confirm the rule is created. This command returns all resources of kind: VirtualService in YAML format. USD oc get virtualservice reviews -o yaml On the /productpage of the Bookinfo app, log in as user jason with no password. Refresh the browser. The star ratings appear to each review. Log in as another user (pick any name you want). Refresh the browser. Now the stars are gone. Traffic is now routed to reviews:v1 for all users except Jason. You have successfully configured the Bookinfo sample application to route traffic based on user identity. 3.7.9. Additional resources For more information about configuring an OpenShift Container Platform wildcard policy, see "Using wildcard routes" in Ingress Operator in OpenShift Container Platform . 3.8. Deploying applications on Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . When you deploy an application into the Service Mesh, there are several differences between the behavior of applications in the upstream community version of Istio and the behavior of applications within a Red Hat OpenShift Service Mesh installation. 3.8.1. Prerequisites Review Comparing Red Hat OpenShift Service Mesh and upstream Istio community installations Review Installing Red Hat OpenShift Service Mesh 3.8.2. Creating control plane templates You can create reusable configurations with ServiceMeshControlPlane templates. Individual users can extend the templates they create with their own configurations. Templates can also inherit configuration information from other templates. For example, you can create an accounting control plane for the accounting team and a marketing control plane for the marketing team. If you create a development template and a production template, members of the marketing team and the accounting team can extend the development and production templates with team specific customization. When you configure control plane templates, which follow the same syntax as the ServiceMeshControlPlane , users inherit settings in a hierarchical fashion. The Operator is delivered with a default template with default settings for Red Hat OpenShift Service Mesh. To add custom templates you must create a ConfigMap named smcp-templates in the openshift-operators project and mount the ConfigMap in the Operator container at /usr/local/share/istio-operator/templates . 3.8.2.1. Creating the ConfigMap Follow this procedure to create the ConfigMap. Prerequisites An installed, verified Service Mesh Operator. An account with the cluster-admin role. Location of the Operator deployment. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI as a cluster administrator. 
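Before you run the next command, make sure that the directory you will pass as <templates-directory> contains at least one template file. Because templates follow the same syntax as the ServiceMeshControlPlane , a file such as <templates-directory>/production might look like the following minimal sketch; the template name and the single setting it carries are purely illustrative, and you should check the exact format expected by your Operator version.
Example <templates-directory>/production (illustrative sketch)
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: production
spec:
  istio:
    global:
      mtls:
        enabled: true    # example of a setting that extending control planes would inherit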
From the CLI, run this command to create the ConfigMap named smcp-templates in the openshift-operators project and replace <templates-directory> with the location of the ServiceMeshControlPlane files on your local disk: USD oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators Locate the Operator ClusterServiceVersion name. USD oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh' Example output maistra.v1.0.0 Red Hat OpenShift Service Mesh 1.0.0 Succeeded Edit the Operator cluster service version to instruct the Operator to use the smcp-templates ConfigMap. USD oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0 Add a volume mount and volume to the Operator deployment. deployments: - name: istio-operator spec: template: spec: containers: volumeMounts: - name: discovery-cache mountPath: /home/istio-operator/.kube/cache/discovery - name: smcp-templates mountPath: /usr/local/share/istio-operator/templates/ volumes: - name: discovery-cache emptyDir: medium: Memory - name: smcp-templates configMap: name: smcp-templates ... Save your changes and exit the editor. You can now use the template parameter in the ServiceMeshControlPlane to specify a template. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: minimal-install spec: template: default 3.8.3. Enabling automatic sidecar injection When deploying an application, you must opt-in to injection by configuring the label sidecar.istio.io/inject in spec.template.metadata.labels to true in the deployment object. Opting in ensures that the sidecar injection does not interfere with other OpenShift Container Platform features such as builder pods used by numerous frameworks within the OpenShift Container Platform ecosystem. Prerequisites Identify the namespaces that are part of your service mesh and the deployments that need automatic sidecar injection. Procedure To find your deployments use the oc get command. USD oc get deployment -n <namespace> For example, to view the Deployment YAML file for the 'ratings-v1' microservice in the bookinfo namespace, use the following command to see the resource in YAML format. oc get deployment -n bookinfo ratings-v1 -o yaml Open the application's Deployment YAML file in an editor. Add spec.template.metadata.labels.sidecar.istio/inject to your Deployment YAML file and set sidecar.istio.io/inject to true as shown in the following example. Example snippet from bookinfo deployment-ratings-v1.yaml apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'true' Note Using the annotations parameter when enabling automatic sidecar injection is deprecated and is replaced by using the labels parameter. Save the Deployment YAML file. Add the file back to the project that contains your app. USD oc apply -n <namespace> -f deployment.yaml In this example, bookinfo is the name of the project that contains the ratings-v1 app and deployment-ratings-v1.yaml is the file you edited. USD oc apply -n bookinfo -f deployment-ratings-v1.yaml To verify that the resource uploaded successfully, run the following command. USD oc get deployment -n <namespace> <deploymentName> -o yaml For example, USD oc get deployment -n bookinfo ratings-v1 -o yaml 3.8.4. Setting proxy environment variables through annotations Configuration for the Envoy sidecar proxies is managed by the ServiceMeshControlPlane . 
You can set environment variables for the sidecar proxy for applications by adding pod annotations to the deployment in the injection-template.yaml file. The environment variables are injected to the sidecar. Example injection-template.yaml apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: "{ \"maistra_test_env\": \"env_value\", \"maistra_test_env_2\": \"env_value_2\" }" Warning You should never include maistra.io/ labels and annotations when creating your own custom resources. These labels and annotations indicate that the resources are generated and managed by the Operator. If you are copying content from an Operator-generated resource when creating your own resources, do not include labels or annotations that start with maistra.io/ . Resources that include these labels or annotations will be overwritten or deleted by the Operator during the reconciliation. 3.8.5. Updating Mixer policy enforcement In versions of Red Hat OpenShift Service Mesh, Mixer's policy enforcement was enabled by default. Mixer policy enforcement is now disabled by default. You must enable it before running policy tasks. Prerequisites Access to the OpenShift CLI ( oc ). Note The examples use istio-system as the control plane namespace. Replace this value with the namespace where you deployed the Service Mesh Control Plane (SMCP). Procedure Log in to the OpenShift Container Platform CLI. Run this command to check the current Mixer policy enforcement status: USD oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks If disablePolicyChecks: true , edit the Service Mesh ConfigMap: USD oc edit cm -n istio-system istio Locate disablePolicyChecks: true within the ConfigMap and change the value to false . Save the configuration and exit the editor. Re-check the Mixer policy enforcement status to ensure it is set to false . 3.8.5.1. Setting the correct network policy Service Mesh creates network policies in the Service Mesh control plane and member namespaces to allow traffic between them. Before you deploy, consider the following conditions to ensure the services in your service mesh that were previously exposed through an OpenShift Container Platform route. Traffic into the service mesh must always go through the ingress-gateway for Istio to work properly. Deploy services external to the service mesh in separate namespaces that are not in any service mesh. Non-mesh services that need to be deployed within a service mesh enlisted namespace should label their deployments maistra.io/expose-route: "true" , which ensures OpenShift Container Platform routes to these services still work. 3.8.6. Bookinfo example application The Bookinfo example application allows you to test your Red Hat OpenShift Service Mesh 2.6.6 installation on OpenShift Container Platform. The Bookinfo application displays information about a book, similar to a single catalog entry of an online book store. The application displays a page that describes the book, book details (ISBN, number of pages, and other information), and book reviews. The Bookinfo application consists of these microservices: The productpage microservice calls the details and reviews microservices to populate the page. The details microservice contains book information. The reviews microservice contains book reviews. It also calls the ratings microservice. 
The ratings microservice contains book ranking information that accompanies a book review. There are three versions of the reviews microservice: Version v1 does not call the ratings Service. Version v2 calls the ratings Service and displays each rating as one to five black stars. Version v3 calls the ratings Service and displays each rating as one to five red stars. 3.8.6.1. Installing the Bookinfo application This tutorial walks you through how to create a sample application by creating a project, deploying the Bookinfo application to that project, and viewing the running application in Service Mesh. Prerequisites OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.6.6 installed. Access to the OpenShift CLI ( oc ). You are logged in to OpenShift Container Platform as`cluster-admin`. Note The Bookinfo sample application cannot be installed on IBM Z(R) and IBM Power(R). Note The commands in this section assume the Service Mesh control plane project is istio-system . If you installed the control plane in another namespace, edit each command before you run it. Procedure Click Home Projects . Click Create Project . Enter bookinfo as the Project Name , enter a Display Name , and enter a Description , then click Create . Alternatively, you can run this command from the CLI to create the bookinfo project. USD oc new-project bookinfo Click Operators Installed Operators . Click the Project menu and use the Service Mesh control plane namespace. In this example, use istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. If you have already created a Istio Service Mesh Member Roll, click the name, then click the YAML tab to open the YAML editor. If you have not created a ServiceMeshMemberRoll , click Create ServiceMeshMemberRoll . Click Members , then enter the name of your project in the Value field. Click Create to save the updated Service Mesh Member Roll. Or, save the following example to a YAML file. Bookinfo ServiceMeshMemberRoll example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo Run the following command to upload that file and create the ServiceMeshMemberRoll resource in the istio-system namespace. In this example, istio-system is the name of the Service Mesh control plane project. USD oc create -n istio-system -f servicemeshmemberroll-default.yaml Run the following command to verify the ServiceMeshMemberRoll was created successfully. USD oc get smmr -n istio-system -o wide The installation has finished successfully when the STATUS column is Configured . 
NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s ["bookinfo"] From the CLI, deploy the Bookinfo application in the `bookinfo` project by applying the bookinfo.yaml file: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/platform/kube/bookinfo.yaml You should see output similar to the following: service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created Create the ingress gateway by applying the bookinfo-gateway.yaml file: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/bookinfo-gateway.yaml You should see output similar to the following: gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created Set the value for the GATEWAY_URL parameter: USD export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') 3.8.6.2. Adding default destination rules Before you can use the Bookinfo application, you must first add default destination rules. There are two preconfigured YAML files, depending on whether or not you enabled mutual transport layer security (TLS) authentication. Procedure To add destination rules, run one of the following commands: If you did not enable mutual TLS: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all.yaml If you enabled mutual TLS: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all-mtls.yaml You should see output similar to the following: destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created 3.8.6.3. Verifying the Bookinfo installation To confirm that the sample Bookinfo application was successfully deployed, perform the following steps. Prerequisites Red Hat OpenShift Service Mesh installed. Complete the steps for installing the Bookinfo sample app. You are logged in to OpenShift Container Platform as`cluster-admin`. Procedure from CLI Verify that all pods are ready with this command: USD oc get pods -n bookinfo All pods should have a status of Running . You should see output similar to the following: NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m Run the following command to retrieve the URL for the product page: echo "http://USDGATEWAY_URL/productpage" Copy and paste the output in a web browser to verify the Bookinfo product page is deployed. Procedure from Kiali web console Obtain the address for the Kiali web console. Log in to the OpenShift Container Platform web console. Navigate to Networking Routes . 
On the Routes page, select the Service Mesh control plane project, for example istio-system , from the Namespace menu. The Location column displays the linked address for each route. Click the link in the Location column for Kiali. Click Log In With OpenShift . The Kiali Overview screen presents tiles for each project namespace. In Kiali, click Graph . Select bookinfo from the Namespace list, and App graph from the Graph Type list. Click Display idle nodes from the Display menu. This displays nodes that are defined but have not received or sent requests. It can confirm that an application is properly defined, but that no request traffic has been reported. Use the Duration menu to increase the time period to help ensure older traffic is captured. Use the Refresh Rate menu to refresh traffic more or less often, or not at all. Click Services , Workloads , or Istio Config to see list views of bookinfo components, and confirm that they are healthy. 3.8.6.4. Removing the Bookinfo application Follow these steps to remove the Bookinfo application. Prerequisites OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.6.6 installed. Access to the OpenShift CLI ( oc ). 3.8.6.4.1. Delete the Bookinfo project Procedure Log in to the OpenShift Container Platform web console. Click Home Projects . Click the bookinfo menu , and then click Delete Project . Type bookinfo in the confirmation dialog box, and then click Delete . Alternatively, you can run this command using the CLI to delete the bookinfo project. USD oc delete project bookinfo 3.8.6.4.2. Remove the Bookinfo project from the Service Mesh member roll Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators . Click the Project menu and choose istio-system from the list. Click the Istio Service Mesh Member Roll link under Provided APIs for the Red Hat OpenShift Service Mesh Operator. Click the ServiceMeshMemberRoll menu and select Edit Service Mesh Member Roll . Edit the default Service Mesh Member Roll YAML and remove bookinfo from the members list. Alternatively, you can run this command using the CLI to remove the bookinfo project from the ServiceMeshMemberRoll . In this example, istio-system is the name of the Service Mesh control plane project. USD oc -n istio-system patch --type='json' smmr default -p '[{"op": "remove", "path": "/spec/members", "value":["'"bookinfo"'"]}]' Click Save to update Service Mesh Member Roll. 3.8.7. Generating example traces and analyzing trace data Jaeger is an open source distributed tracing system. With Jaeger, you can perform a trace that follows the path of a request through various microservices which make up an application. Jaeger is installed by default as part of the Service Mesh. This tutorial uses Service Mesh and the Bookinfo sample application to demonstrate how you can use Jaeger to perform distributed tracing. Prerequisites OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.6.6 installed. Jaeger enabled during the installation. Bookinfo example application installed. Procedure After installing the Bookinfo sample application, send traffic to the mesh. Enter the following command several times. USD curl "http://USDGATEWAY_URL/productpage" This command simulates a user visiting the productpage microservice of the application. In the OpenShift Container Platform console, navigate to Networking Routes and search for the Jaeger route, which is the URL listed under Location . 
Alternatively, use the CLI to query for details of the route. In this example, istio-system is the Service Mesh control plane namespace: USD export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}') Enter the following command to reveal the URL for the Jaeger console. Paste the result in a browser and navigate to that URL. echo USDJAEGER_URL Log in using the same user name and password as you use to access the OpenShift Container Platform console. In the left pane of the Jaeger dashboard, from the Service menu, select productpage.bookinfo and click Find Traces at the bottom of the pane. A list of traces is displayed. Click one of the traces in the list to open a detailed view of that trace. If you click the first one in the list, which is the most recent trace, you see the details that correspond to the latest refresh of the /productpage . 3.9. Data visualization and observability Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . You can view your application's topology, health and metrics in the Kiali console. If your service is having issues, the Kiali console offers ways to visualize the data flow through your service. You can view insights about the mesh components at different levels, including abstract applications, services, and workloads. It also provides an interactive graph view of your namespace in real time. Before you begin You can observe the data flow through your application if you have an application installed. If you don't have your own application installed, you can see how observability works in Red Hat OpenShift Service Mesh by installing the Bookinfo sample application . 3.9.1. Viewing service mesh data The Kiali operator works with the telemetry data gathered in Red Hat OpenShift Service Mesh to provide graphs and real-time network diagrams of the applications, services, and workloads in your namespace. To access the Kiali console you must have Red Hat OpenShift Service Mesh installed and projects configured for the service mesh. Procedure Use the perspective switcher to switch to the Administrator perspective. Click Home Projects . Click the name of your project. For example, click bookinfo . In the Launcher section, click Kiali . Log in to the Kiali console with the same user name and password that you use to access the OpenShift Container Platform console. When you first log in to the Kiali Console, you see the Overview page which displays all the namespaces in your service mesh that you have permission to view. If you are validating the console installation, there might not be any data to display. 3.9.2. Viewing service mesh data in the Kiali console The Kiali Graph offers a powerful visualization of your mesh traffic. The topology combines real-time request traffic with your Istio configuration information to present immediate insight into the behavior of your service mesh, letting you quickly pinpoint issues. Multiple Graph Types let you visualize traffic as a high-level service topology, a low-level workload topology, or as an application-level topology. 
There are several graphs to choose from: The App graph shows an aggregate workload for all applications that are labeled the same. The Service graph shows a node for each service in your mesh but excludes all applications and workloads from the graph. It provides a high-level view and aggregates all traffic for defined services. The Versioned App graph shows a node for each version of an application. All versions of an application are grouped together. The Workload graph shows a node for each workload in your service mesh. This graph does not require you to use the application and version labels. If your application does not use version labels, use this graph. Graph nodes are decorated with a variety of information, pointing out various routing options like virtual services and service entries, as well as special configuration like fault-injection and circuit breakers. It can identify mTLS issues, latency issues, error traffic and more. The Graph is highly configurable, can show traffic animation, and has powerful Find and Hide abilities. Click the Legend button to view information about the shapes, colors, arrows, and badges displayed in the graph. To view a summary of metrics, select any node or edge in the graph to display its metric details in the summary details panel. 3.9.2.1. Changing graph layouts in Kiali The layout for the Kiali graph can render differently depending on your application architecture and the data to display. For example, the number of graph nodes and their interactions can determine how the Kiali graph is rendered. Because it is not possible to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts. Prerequisites If you do not have your own application installed, install the Bookinfo sample application. Then generate traffic for the Bookinfo application by entering the following command several times. USD curl "http://USDGATEWAY_URL/productpage" This command simulates a user visiting the productpage microservice of the application. Procedure Launch the Kiali console. Click Log In With OpenShift . In the Kiali console, click Graph to view a namespace graph. From the Namespace menu, select your application namespace, for example, bookinfo . To choose a different graph layout, do either or both of the following: Select different graph data groupings from the menu at the top of the graph. App graph Service graph Versioned App graph (default) Workload graph Select a different graph layout from the Legend at the bottom of the graph. Layout default dagre Layout 1 cose-bilkent Layout 2 cola 3.10. Custom resources Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . You can customize your Red Hat OpenShift Service Mesh by modifying the default Service Mesh custom resource or by creating a new custom resource. 3.10.1. Prerequisites An account with the cluster-admin role. Completed the Preparing to install Red Hat OpenShift Service Mesh process. Have installed the operators. 3.10.2. 
Red Hat OpenShift Service Mesh custom resources Note The istio-system project is used as an example throughout the Service Mesh documentation, but you can use other projects as necessary. A custom resource allows you to extend the API in an Red Hat OpenShift Service Mesh project or cluster. When you deploy Service Mesh it creates a default ServiceMeshControlPlane that you can modify to change the project parameters. The Service Mesh operator extends the API by adding the ServiceMeshControlPlane resource type, which enables you to create ServiceMeshControlPlane objects within projects. By creating a ServiceMeshControlPlane object, you instruct the Operator to install a Service Mesh control plane into the project, configured with the parameters you set in the ServiceMeshControlPlane object. This example ServiceMeshControlPlane definition contains all of the supported parameters and deploys Red Hat OpenShift Service Mesh 1.1.18.2 images based on Red Hat Enterprise Linux (RHEL). Important The 3scale Istio Adapter is deployed and configured in the custom resource file. It also requires a working 3scale account ( SaaS or On-Premises ). Example istio-installation.yaml apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: basic-install spec: istio: global: proxy: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi gateways: istio-egressgateway: autoscaleEnabled: false istio-ingressgateway: autoscaleEnabled: false ior_enabled: false mixer: policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 100m memory: 1G limits: cpu: 500m memory: 4G pilot: autoscaleEnabled: false traceSampling: 100 kiali: enabled: true grafana: enabled: true tracing: enabled: true jaeger: template: all-in-one 3.10.3. ServiceMeshControlPlane parameters The following examples illustrate use of the ServiceMeshControlPlane parameters and the tables provide additional information about supported parameters. Important The resources you configure for Red Hat OpenShift Service Mesh with these parameters, including CPUs, memory, and the number of pods, are based on the configuration of your OpenShift Container Platform cluster. Configure these parameters based on the available resources in your current cluster configuration. 3.10.3.1. Istio global example Here is an example that illustrates the Istio global parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Note In order for the 3scale Istio Adapter to work, disablePolicyChecks must be false . Example global parameters istio: global: tag: 1.1.0 hub: registry.redhat.io/openshift-service-mesh/ proxy: resources: requests: cpu: 10m memory: 128Mi limits: mtls: enabled: false disablePolicyChecks: true policyCheckFailOpen: false imagePullSecrets: - MyPullSecret Table 3.4. Global parameters Parameter Description Values Default value disablePolicyChecks This parameter enables/disables policy checks. true / false true policyCheckFailOpen This parameter indicates whether traffic is allowed to pass through to the Envoy sidecar when the Mixer policy service cannot be reached. true / false false tag The tag that the Operator uses to pull the Istio images. A valid container image tag. 1.1.0 hub The hub that the Operator uses to pull Istio images. A valid image repository. maistra/ or registry.redhat.io/openshift-service-mesh/ mtls This parameter controls whether to enable/disable Mutual Transport Layer Security (mTLS) between services by default. 
true / false false imagePullSecrets If access to the registry providing the Istio images is secure, list an imagePullSecret here. redhat-registry-pullsecret OR quay-pullsecret None These parameters are specific to the proxy subset of global parameters. Table 3.5. Proxy parameters Type Parameter Description Values Default value requests cpu The amount of CPU resources requested for Envoy proxy. CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment's configuration. 10m memory The amount of memory requested for Envoy proxy Available memory in bytes(for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 128Mi limits cpu The maximum amount of CPU resources requested for Envoy proxy. CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment's configuration. 2000m memory The maximum amount of memory Envoy proxy is permitted to use. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 1024Mi 3.10.3.2. Istio gateway configuration Here is an example that illustrates the Istio gateway parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Example gateway parameters gateways: egress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 enabled: true ingress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 Table 3.6. Istio Gateway parameters Parameter Description Values Default value gateways.egress.runtime.deployment.autoScaling.enabled This parameter enables/disables autoscaling. true / false true gateways.egress.runtime.deployment.autoScaling.minReplicas The minimum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 1 gateways.egress.runtime.deployment.autoScaling.maxReplicas The maximum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 5 gateways.ingress.runtime.deployment.autoScaling.enabled This parameter enables/disables autoscaling. true / false true gateways.ingress.runtime.deployment.autoScaling.minReplicas The minimum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 1 gateways.ingress.runtime.deployment.autoScaling.maxReplicas The maximum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 5 Cluster administrators can refer to "Using wildcard routes" in Ingress Operator in OpenShift Container Platform for instructions on how to enable subdomains. 3.10.3.3. Istio Mixer configuration Here is an example that illustrates the Mixer parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Example mixer parameters mixer: enabled: true policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 10m memory: 128Mi limits: Table 3.7. Istio Mixer policy parameters Parameter Description Values Default value enabled This parameter enables/disables Mixer. true / false true autoscaleEnabled This parameter enables/disables autoscaling. Disable this for small environments. 
true / false true autoscaleMin The minimum number of pods to deploy based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 1 autoscaleMax The maximum number of pods to deploy based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 5 Table 3.8. Istio Mixer telemetry parameters Type Parameter Description Values Default requests cpu The percentage of CPU resources requested for Mixer telemetry. CPU resources in millicores based on your environment's configuration. 10m memory The amount of memory requested for Mixer telemetry. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 128Mi limits cpu The maximum percentage of CPU resources Mixer telemetry is permitted to use. CPU resources in millicores based on your environment's configuration. 4800m memory The maximum amount of memory Mixer telemetry is permitted to use. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 4G 3.10.3.4. Istio Pilot configuration You can configure Pilot to schedule or set limits on resource allocation. The following example illustrates the Pilot parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Example pilot parameters spec: runtime: components: pilot: deployment: autoScaling: enabled: true minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 85 pod: tolerations: - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 60 affinity: podAntiAffinity: requiredDuringScheduling: - key: istio topologyKey: kubernetes.io/hostname operator: In values: - pilot container: resources: limits: cpu: 100m memory: 128M Table 3.9. Istio Pilot parameters Parameter Description Values Default value cpu The percentage of CPU resources requested for Pilot. CPU resources in millicores based on your environment's configuration. 10m memory The amount of memory requested for Pilot. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 128Mi autoscaleEnabled This parameter enables/disables autoscaling. Disable this for small environments. true / false true traceSampling This value controls how often random sampling occurs. Note: Increase for development or testing. A valid percentage. 1.0 3.10.4. Configuring Kiali When the Service Mesh Operator creates the ServiceMeshControlPlane it also processes the Kiali resource. The Kiali Operator then uses this object when creating Kiali instances. The default Kiali parameters specified in the ServiceMeshControlPlane are as follows: Example Kiali parameters apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: kiali: enabled: true dashboard: viewOnlyMode: false ingress: enabled: true Table 3.10. Kiali parameters Parameter Description Values Default value This parameter enables/disables Kiali. Kiali is enabled by default. true / false true This parameter enables/disables view-only mode for the Kiali console. When view-only mode is enabled, users cannot use the console to make changes to the Service Mesh. true / false false This parameter enables/disables ingress for Kiali. true / false true 3.10.4.1. 
Configuring Kiali for Grafana When you install Kiali and Grafana as part of Red Hat OpenShift Service Mesh the Operator configures the following by default: Grafana is enabled as an external service for Kiali Grafana authorization for the Kiali console Grafana URL for the Kiali console Kiali can automatically detect the Grafana URL. However if you have a custom Grafana installation that is not easily auto-detectable by Kiali, you must update the URL value in the ServiceMeshControlPlane resource. Additional Grafana parameters spec: kiali: enabled: true dashboard: viewOnlyMode: false grafanaURL: "https://grafana-istio-system.127.0.0.1.nip.io" ingress: enabled: true 3.10.4.2. Configuring Kiali for Jaeger When you install Kiali and Jaeger as part of Red Hat OpenShift Service Mesh the Operator configures the following by default: Jaeger is enabled as an external service for Kiali Jaeger authorization for the Kiali console Jaeger URL for the Kiali console Kiali can automatically detect the Jaeger URL. However if you have a custom Jaeger installation that is not easily auto-detectable by Kiali, you must update the URL value in the ServiceMeshControlPlane resource. Additional Jaeger parameters spec: kiali: enabled: true dashboard: viewOnlyMode: false jaegerURL: "http://jaeger-query-istio-system.127.0.0.1.nip.io" ingress: enabled: true 3.10.5. Configuring Jaeger When the Service Mesh Operator creates the ServiceMeshControlPlane resource it can also create the resources for distributed tracing. Service Mesh uses Jaeger for distributed tracing. You can specify your Jaeger configuration in either of two ways: Configure Jaeger in the ServiceMeshControlPlane resource. There are some limitations with this approach. Configure Jaeger in a custom Jaeger resource and then reference that Jaeger instance in the ServiceMeshControlPlane resource. If a Jaeger resource matching the value of name exists, the control plane will use the existing installation. This approach lets you fully customize your Jaeger configuration. The default Jaeger parameters specified in the ServiceMeshControlPlane are as follows: Default all-in-one Jaeger parameters apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: version: v1.1 istio: tracing: enabled: true jaeger: template: all-in-one Table 3.11. Jaeger parameters Parameter Description Values Default value This parameter enables/disables installing and deploying tracing by the Service Mesh Operator. Installing Jaeger is enabled by default. To use an existing Jaeger deployment, set this value to false . true / false true This parameter specifies which Jaeger deployment strategy to use. all-in-one - For development, testing, demonstrations, and proof of concept. production-elasticsearch - For production use. all-in-one Note The default template in the ServiceMeshControlPlane resource is the all-in-one deployment strategy which uses in-memory storage. For production, the only supported storage option is Elasticsearch, therefore you must configure the ServiceMeshControlPlane to request the production-elasticsearch template when you deploy Service Mesh within a production environment. 3.10.5.1. Configuring Elasticsearch The default Jaeger deployment strategy uses the all-in-one template so that the installation can be completed using minimal resources. However, because the all-in-one template uses in-memory storage, it is only recommended for development, demo, or testing purposes and should NOT be used for production environments. 
If you are deploying Service Mesh and Jaeger in a production environment you must change the template to the production-elasticsearch template, which uses Elasticsearch for Jaeger's storage needs. Elasticsearch is a memory intensive application. The initial set of nodes specified in the default OpenShift Container Platform installation may not be large enough to support the Elasticsearch cluster. You should modify the default Elasticsearch configuration to match your use case and the resources you have requested for your OpenShift Container Platform installation. You can adjust both the CPU and memory limits for each component by modifying the resources block with valid CPU and memory values. Additional nodes must be added to the cluster if you want to run with the recommended amount (or more) of memory. Ensure that you do not exceed the resources requested for your OpenShift Container Platform installation. Default "production" Jaeger parameters with Elasticsearch apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: "1" memory: "16Gi" limits: cpu: "1" memory: "16Gi" Table 3.12. Elasticsearch parameters Parameter Description Values Default Value Examples This parameter enables/disables tracing in Service Mesh. Jaeger is installed by default. true / false true This parameter enables/disables ingress for Jaeger. true / false true This parameter specifies which Jaeger deployment strategy to use. all-in-one / production-elasticsearch all-in-one Number of Elasticsearch nodes to create. Integer value. 1 Proof of concept = 1, Minimum deployment =3 Number of central processing units for requests, based on your environment's configuration. Specified in cores or millicores (for example, 200m, 0.5, 1). 1Gi Proof of concept = 500m, Minimum deployment =1 Available memory for requests, based on your environment's configuration. Specified in bytes (for example, 200Ki, 50Mi, 5Gi). 500m Proof of concept = 1Gi, Minimum deployment = 16Gi* Limit on number of central processing units, based on your environment's configuration. Specified in cores or millicores (for example, 200m, 0.5, 1). Proof of concept = 500m, Minimum deployment =1 Available memory limit based on your environment's configuration. Specified in bytes (for example, 200Ki, 50Mi, 5Gi). Proof of concept = 1Gi, Minimum deployment = 16Gi* * Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than 16Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64Gi per pod. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Operators Installed Operators . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Control Plane tab. Click the name of your control plane file, for example, basic-install . Click the YAML tab. Edit the Jaeger parameters, replacing the default all-in-one template with parameters for the production-elasticsearch template, modified for your use case. Ensure that the indentation is correct. Click Save . Click Reload . OpenShift Container Platform redeploys Jaeger and creates the Elasticsearch resources based on the specified parameters. 3.10.5.2. 
Connecting to an existing Jaeger instance In order for the SMCP to connect to an existing Jaeger instance, the following must be true: The Jaeger instance is deployed in the same namespace as the control plane, for example, into the istio-system namespace. To enable secure communication between services, you should enable the oauth-proxy, which secures communication to your Jaeger instance, and make sure the secret is mounted into your Jaeger instance so Kiali can communicate with it. To use a custom or already existing Jaeger instance, set spec.istio.tracing.enabled to "false" to disable the deployment of a Jaeger instance. Supply the correct jaeger-collector endpoint to Mixer by setting spec.istio.global.tracer.zipkin.address to the hostname and port of your jaeger-collector service. The hostname of the service is usually <jaeger-instance-name>-collector.<namespace>.svc.cluster.local . Supply the correct jaeger-query endpoint to Kiali for gathering traces by setting spec.istio.kiali.jaegerInClusterURL to the hostname of your jaeger-query service - the port is normally not required, as it uses 443 by default. The hostname of the service is usually <jaeger-instance-name>-query.<namespace>.svc.cluster.local . Supply the dashboard URL of your Jaeger instance to Kiali to enable accessing Jaeger through the Kiali console. You can retrieve the URL from the OpenShift route that is created by the Jaeger Operator. If your Jaeger resource is called external-jaeger and resides in the istio-system project, you can retrieve the route using the following command: USD oc get route -n istio-system external-jaeger Example output NAME HOST/PORT PATH SERVICES [...] external-jaeger external-jaeger-istio-system.apps.test external-jaeger-query [...] The value under HOST/PORT is the externally accessible URL of the Jaeger dashboard. Example Jaeger resource apiVersion: jaegertracing.io/v1 kind: "Jaeger" metadata: name: "external-jaeger" # Deploy to the Control Plane Namespace namespace: istio-system spec: # Set Up Authentication ingress: enabled: true security: oauth-proxy openshift: # This limits user access to the Jaeger instance to users who have access # to the control plane namespace. Make sure to set the correct namespace here sar: '{"namespace": "istio-system", "resource": "pods", "verb": "get"}' htpasswdFile: /etc/proxy/htpasswd/auth volumeMounts: - name: secret-htpasswd mountPath: /etc/proxy/htpasswd volumes: - name: secret-htpasswd secret: secretName: htpasswd The following ServiceMeshControlPlane example assumes that you have deployed Jaeger using the Jaeger Operator and the example Jaeger resource. Example ServiceMeshControlPlane with external Jaeger apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: external-jaeger namespace: istio-system spec: version: v1.1 istio: tracing: # Disable Jaeger deployment by service mesh operator enabled: false global: tracer: zipkin: # Set Endpoint for Trace Collection address: external-jaeger-collector.istio-system.svc.cluster.local:9411 kiali: # Set Jaeger dashboard URL dashboard: jaegerURL: https://external-jaeger-istio-system.apps.test # Set Endpoint for Trace Querying jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local 3.10.5.3. Configuring Elasticsearch The default Jaeger deployment strategy uses the all-in-one template so that the installation can be completed using minimal resources. 
However, because the all-in-one template uses in-memory storage, it is only recommended for development, demo, or testing purposes and should NOT be used for production environments. If you are deploying Service Mesh and Jaeger in a production environment you must change the template to the production-elasticsearch template, which uses Elasticsearch for Jaeger's storage needs. Elasticsearch is a memory intensive application. The initial set of nodes specified in the default OpenShift Container Platform installation may not be large enough to support the Elasticsearch cluster. You should modify the default Elasticsearch configuration to match your use case and the resources you have requested for your OpenShift Container Platform installation. You can adjust both the CPU and memory limits for each component by modifying the resources block with valid CPU and memory values. Additional nodes must be added to the cluster if you want to run with the recommended amount (or more) of memory. Ensure that you do not exceed the resources requested for your OpenShift Container Platform installation. Default "production" Jaeger parameters with Elasticsearch apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: "1" memory: "16Gi" limits: cpu: "1" memory: "16Gi" Table 3.13. Elasticsearch parameters Parameter Description Values Default Value Examples This parameter enables/disables tracing in Service Mesh. Jaeger is installed by default. true / false true This parameter enables/disables ingress for Jaeger. true / false true This parameter specifies which Jaeger deployment strategy to use. all-in-one / production-elasticsearch all-in-one Number of Elasticsearch nodes to create. Integer value. 1 Proof of concept = 1, Minimum deployment =3 Number of central processing units for requests, based on your environment's configuration. Specified in cores or millicores (for example, 200m, 0.5, 1). 1Gi Proof of concept = 500m, Minimum deployment =1 Available memory for requests, based on your environment's configuration. Specified in bytes (for example, 200Ki, 50Mi, 5Gi). 500m Proof of concept = 1Gi, Minimum deployment = 16Gi* Limit on number of central processing units, based on your environment's configuration. Specified in cores or millicores (for example, 200m, 0.5, 1). Proof of concept = 500m, Minimum deployment =1 Available memory limit based on your environment's configuration. Specified in bytes (for example, 200Ki, 50Mi, 5Gi). Proof of concept = 1Gi, Minimum deployment = 16Gi* * Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than 16Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64Gi per pod. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Operators Installed Operators . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Control Plane tab. Click the name of your control plane file, for example, basic-install . Click the YAML tab. Edit the Jaeger parameters, replacing the default all-in-one template with parameters for the production-elasticsearch template, modified for your use case. Ensure that the indentation is correct. Click Save . Click Reload . 
OpenShift Container Platform redeploys Jaeger and creates the Elasticsearch resources based on the specified parameters. 3.10.5.4. Configuring the Elasticsearch index cleaner job When the Service Mesh Operator creates the ServiceMeshControlPlane it also creates the custom resource (CR) for Jaeger. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator then uses this CR when creating Jaeger instances. When using Elasticsearch storage, by default a job is created to clean old traces from it. To configure the options for this job, you edit the Jaeger custom resource (CR), to customize it for your use case. The relevant options are listed below. apiVersion: jaegertracing.io/v1 kind: Jaeger spec: strategy: production storage: type: elasticsearch esIndexCleaner: enabled: false numberOfDays: 7 schedule: "55 23 * * *" Table 3.14. Elasticsearch index cleaner parameters Parameter Values Description enabled: true/ false Enable or disable the index cleaner job. numberOfDays: integer value Number of days to wait before deleting an index. schedule: "55 23 * * *" Cron expression for the job to run 3.10.6. 3scale configuration The following table explains the parameters for the 3scale Istio Adapter in the ServiceMeshControlPlane resource. Example 3scale parameters apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true # ... Table 3.15. 3scale parameters Parameter Description Values Default value enabled Whether to use the 3scale adapter true / false false PARAM_THREESCALE_LISTEN_ADDR Sets the listen address for the gRPC server Valid port number 3333 PARAM_THREESCALE_LOG_LEVEL Sets the minimum log output level. debug , info , warn , error , or none info PARAM_THREESCALE_LOG_JSON Controls whether the log is formatted as JSON true / false true PARAM_THREESCALE_LOG_GRPC Controls whether the log contains gRPC info true / false true PARAM_THREESCALE_REPORT_METRICS Controls whether 3scale system and backend metrics are collected and reported to Prometheus true / false true PARAM_THREESCALE_METRICS_PORT Sets the port that the 3scale /metrics endpoint can be scrapped from Valid port number 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS Time period, in seconds, to wait before purging expired items from the cache Time period in seconds 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS Time period before expiry when cache elements are attempted to be refreshed Time period in seconds 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX Max number of items that can be stored in the cache at any time. Set to 0 to disable caching Valid number 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES The number of times unreachable hosts are retried during a cache update loop Valid number 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN Allow to skip certificate verification when calling 3scale APIs. Enabling this is not recommended. 
true / false false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS Sets the number of seconds to wait before terminating requests to 3scale System and Backend Time period in seconds 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS Sets the maximum amount of seconds (+/-10% jitter) a connection may exist before it is closed Time period in seconds 60 PARAM_USE_CACHE_BACKEND If true, attempt to create an in-memory apisonator cache for authorization requests true / false false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS If the backend cache is enabled, this sets the interval in seconds for flushing the cache against 3scale Time period in seconds 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED Whenever the backend cache cannot retrieve authorization data, whether to deny (closed) or allow (open) requests true / false true 3.11. Using the 3scale Istio adapter Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . The 3scale Istio Adapter is an optional adapter that allows you to label a service running within the Red Hat OpenShift Service Mesh and integrate that service with the 3scale API Management solution. It is not required for Red Hat OpenShift Service Mesh. 3.11.1. Integrate the 3scale adapter with Red Hat OpenShift Service Mesh You can use these examples to configure requests to your services using the 3scale Istio Adapter. Prerequisites Red Hat OpenShift Service Mesh version 1.x A working 3scale account ( SaaS or 3scale 2.5 On-Premises ) Enabling backend cache requires 3scale 2.9 or greater Red Hat OpenShift Service Mesh prerequisites Note To configure the 3scale Istio Adapter, refer to Red Hat OpenShift Service Mesh custom resources for instructions on adding adapter parameters to the custom resource file. Note Pay particular attention to the kind: handler resource. You must update this with your 3scale account credentials. You can optionally add a service_id to a handler, but this is kept for backwards compatibility only, since it would render the handler only useful for one service in your 3scale account. If you add service_id to a handler, enabling 3scale for other services requires you to create more handlers with different service_ids . Use a single handler per 3scale account by following the steps below: Procedure Create a handler for your 3scale account and specify your account credentials. Omit any service identifier. apiVersion: "config.istio.io/v1alpha2" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: "https://<organization>-admin.3scale.net/" access_token: "<ACCESS_TOKEN>" connection: address: "threescale-istio-adapter:3333" Optionally, you can provide a backend_url field within the params section to override the URL provided by the 3scale configuration. This may be useful if the adapter runs on the same cluster as the 3scale on-premise instance, and you wish to leverage the internal cluster DNS. Edit or patch the Deployment resource of any services belonging to your 3scale account as follows: Add the "service-mesh.3scale.net/service-id" label with a value corresponding to a valid service_id . Add the "service-mesh.3scale.net/credentials" label with its value being the name of the handler resource from step 1. 
Whenever you intend to add more services, repeat step 2 to link them to your 3scale account credentials and to their service identifiers. Modify the rule configuration with your 3scale configuration to dispatch the rule to the threescale handler. Rule configuration example apiVersion: "config.istio.io/v1alpha2" kind: rule metadata: name: threescale spec: match: destination.labels["service-mesh.3scale.net"] == "true" actions: - handler: threescale.handler instances: - threescale-authorization.instance 3.11.1.1. Generating 3scale custom resources The adapter includes a tool that allows you to generate the handler , instance , and rule custom resources. Table 3.16. Usage Option Description Required Default value -h, --help Produces help output for available options No --name Unique name for this URL, token pair Yes -n, --namespace Namespace to generate templates No istio-system -t, --token 3scale access token Yes -u, --url 3scale Admin Portal URL Yes --backend-url 3scale backend URL. If set, it overrides the value that is read from system configuration No -s, --service 3scale API/Service ID No --auth 3scale authentication pattern to specify (1=API Key, 2=App Id/App Key, 3=OIDC) No Hybrid -o, --output File to save produced manifests to No Standard output --version Outputs the CLI version and exits immediately No 3.11.1.1.1. Generate templates from URL examples Note Run the following commands via oc exec from the 3scale adapter container image in Generating manifests from a deployed adapter . Use the 3scale-config-gen command to help avoid YAML syntax and indentation errors. You can omit the --service if you use the annotations. This command must be invoked from within the container image via oc exec . Procedure Use the 3scale-config-gen command to autogenerate template files allowing the token, URL pair to be shared by multiple services as a single handler: The following example generates the templates with the service ID embedded in the handler: Additional resources Tokens . 3.11.1.2. Generating manifests from a deployed adapter Note NAME is an identifier you use to identify the service you are managing with 3scale. The CREDENTIALS_NAME reference is an identifier that corresponds to the match section in the rule configuration. This is automatically set to the NAME identifier if you are using the CLI tool. Its value does not need to be anything specific: the label value should just match the contents of the rule. See Routing service traffic through the adapter for more information. Run this command to generate manifests from a deployed adapter in the istio-system namespace: This will produce sample output to the terminal. Edit these samples if required and create the objects using the oc create command. When the request reaches the adapter, the adapter needs to know how the service maps to an API on 3scale. You can provide this information in two ways: Label the workload (recommended) Hard code the handler as service_id Update the workload with the required annotations: Note You only need to update the service ID provided in this example if it is not already embedded in the handler. The setting in the handler takes precedence . 3.11.1.3. Routing service traffic through the adapter Follow these steps to drive traffic for your service through the 3scale adapter. Prerequisites Credentials and service ID from your 3scale administrator. 
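The procedure that follows adds two labels to the pod template (PodTemplateSpec) of the target workload's Deployment. The following minimal sketch is provided for illustration only and is not part of the official procedure; the workload name and image are placeholders, replace-me stands for your 3scale service ID, and threescale refers to the name of the handler created earlier.
Example labeled Deployment (illustrative sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-api # placeholder workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: product-api
  template:
    metadata:
      labels:
        app: product-api
        service-mesh.3scale.net/credentials: threescale # name of the handler resource that stores the 3scale access token
        service-mesh.3scale.net/service-id: replace-me # your 3scale service ID
    spec:
      containers:
      - name: product-api
        image: quay.io/example/product-api:latest # placeholder image
Depending on the match expression in your rule custom resource, the workload might also need the service-mesh.3scale.net: "true" label shown in the rule configuration example above.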
Procedure Match the rule destination.labels["service-mesh.3scale.net/credentials"] == "threescale" that you previously created in the configuration, in the kind: rule resource. Add the above label to PodTemplateSpec on the Deployment of the target workload to integrate a service. the value, threescale , refers to the name of the generated handler. This handler stores the access token required to call 3scale. Add the destination.labels["service-mesh.3scale.net/service-id"] == "replace-me" label to the workload to pass the service ID to the adapter via the instance at request time. 3.11.2. Configure the integration settings in 3scale Follow this procedure to configure the 3scale integration settings. Note For 3scale SaaS customers, Red Hat OpenShift Service Mesh is enabled as part of the Early Access program. Procedure Navigate to [your_API_name] Integration Click Settings . Select the Istio option under Deployment . The API Key (user_key) option under Authentication is selected by default. Click Update Product to save your selection. Click Configuration . Click Update Configuration . 3.11.3. Caching behavior Responses from 3scale System APIs are cached by default within the adapter. Entries will be purged from the cache when they become older than the cacheTTLSeconds value. Also by default, automatic refreshing of cached entries will be attempted seconds before they expire, based on the cacheRefreshSeconds value. You can disable automatic refreshing by setting this value higher than the cacheTTLSeconds value. Caching can be disabled entirely by setting cacheEntriesMax to a non-positive value. By using the refreshing process, cached values whose hosts become unreachable will be retried before eventually being purged when past their expiry. 3.11.4. Authenticating requests This release supports the following authentication methods: Standard API Keys : single randomized strings or hashes acting as an identifier and a secret token. Application identifier and key pairs : immutable identifier and mutable secret key strings. OpenID authentication method : client ID string parsed from the JSON Web Token. 3.11.4.1. Applying authentication patterns Modify the instance custom resource, as illustrated in the following authentication method examples, to configure authentication behavior. You can accept the authentication credentials from: Request headers Request parameters Both request headers and query parameters Note When specifying values from headers, they must be lower case. For example, if you want to send a header as User-Key , this must be referenced in the configuration as request.headers["user-key"] . 3.11.4.1.1. API key authentication method Service Mesh looks for the API key in query parameters and request headers as specified in the user option in the subject custom resource parameter. It checks the values in the order given in the custom resource file. You can restrict the search for the API key to either query parameters or request headers by omitting the unwanted option. In this example, Service Mesh looks for the API key in the user_key query parameter. If the API key is not in the query parameter, Service Mesh then checks the user-key header. 
API key authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params["user_key"] | request.headers["user-key"] | "" action: path: request.url_path method: request.method | "get" If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the API key in a query parameter named "key", change request.query_params["user_key"] to request.query_params["key"] . 3.11.4.1.2. Application ID and application key pair authentication method Service Mesh looks for the application ID and application key in query parameters and request headers, as specified in the properties option in the subject custom resource parameter. The application key is optional. It checks the values in the order given in the custom resource file. You can restrict the search for the credentials to either query parameters or request headers by not including the unwanted option. In this example, Service Mesh looks for the application ID and application key in the query parameters first, moving on to the request headers if needed. Application ID and application key pair authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params["app_id"] | request.headers["app-id"] | "" app_key: request.query_params["app_key"] | request.headers["app-key"] | "" action: path: request.url_path method: request.method | "get" If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the application ID in a query parameter named identification , change request.query_params["app_id"] to request.query_params["identification"] . 3.11.4.1.3. OpenID authentication method To use the OpenID Connect (OIDC) authentication method , use the properties value on the subject field to set client_id , and optionally app_key . You can manipulate this object using the methods described previously. In the example configuration shown below, the client identifier (application ID) is parsed from the JSON Web Token (JWT) under the label azp . You can modify this as needed. OpenID authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params["app_key"] | request.headers["app-key"] | "" client_id: request.auth.claims["azp"] | "" action: path: request.url_path method: request.method | "get" service: destination.labels["service-mesh.3scale.net/service-id"] | "" For this integration to work correctly, OIDC must still be done in 3scale for the client to be created in the identity provider (IdP). You should create a Request authorization for the service you want to protect in the same namespace as that service. The JWT is passed in the Authorization header of the request. In the sample RequestAuthentication defined below, replace issuer , jwksUri , and selector as appropriate. 
OpenID Policy example apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs 3.11.4.1.4. Hybrid authentication method You can choose to not enforce a particular authentication method and accept any valid credentials for either method. If both an API key and an application ID/application key pair are provided, Service Mesh uses the API key. In this example, Service Mesh checks for an API key in the query parameters, then the request headers. If there is no API key, it then checks for an application ID and key in the query parameters, then the request headers. Hybrid authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params["user_key"] | request.headers["user-key"] | properties: app_id: request.query_params["app_id"] | request.headers["app-id"] | "" app_key: request.query_params["app_key"] | request.headers["app-key"] | "" client_id: request.auth.claims["azp"] | "" action: path: request.url_path method: request.method | "get" service: destination.labels["service-mesh.3scale.net/service-id"] | "" 3.11.5. 3scale Adapter metrics The adapter, by default, reports various Prometheus metrics that are exposed on port 8080 at the /metrics endpoint. These metrics provide insight into how the interactions between the adapter and 3scale are performing. The service is labeled to be automatically discovered and scraped by Prometheus. 3.11.6. 3scale Istio adapter verification You might want to check whether the 3scale Istio adapter is working as expected. If your adapter is not working, use the following steps to help troubleshoot the problem. Procedure Ensure the 3scale-adapter pod is running in the Service Mesh control plane namespace: USD oc get pods -n istio-system Check that the 3scale-adapter pod has printed out information about itself booting up, such as its version: USD oc logs -n istio-system <3scale_adapter_pod_name> When performing requests to the services protected by the 3scale adapter integration, always try requests that lack the right credentials and ensure they fail. Check the 3scale adapter logs to gather additional information. Additional resources Inspecting pod and container logs . 3.11.7. 3scale Istio adapter troubleshooting checklist As the administrator installing the 3scale Istio adapter, there are a number of scenarios that might be causing your integration to not function properly. Use the following list to troubleshoot your installation: Incorrect YAML indentation. Missing YAML sections. Forgot to apply the changes in the YAML to the cluster. Forgot to label the service workloads with the service-mesh.3scale.net/credentials key. Forgot to label the service workloads with service-mesh.3scale.net/service-id when using handlers that do not contain a service_id so they are reusable per account. The Rule custom resource points to the wrong handler or instance custom resources, or the references lack the corresponding namespace suffix. The Rule custom resource match section cannot possibly match the service you are configuring, or it points to a destination workload that is not currently running or does not exist. 
Wrong access token or URL for the 3scale Admin Portal in the handler. The Instance custom resource's params/subject/properties section fails to list the right parameters for app_id , app_key , or client_id , either because they specify the wrong location such as the query parameters, headers, and authorization claims, or the parameter names do not match the requests used for testing. Failing to use the configuration generator without realizing that it actually lives in the adapter container image and needs oc exec to invoke it. 3.12. Removing Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . To remove Red Hat OpenShift Service Mesh from an existing OpenShift Container Platform instance, remove the control plane before removing the operators. 3.12.1. Removing the Red Hat OpenShift Service Mesh control plane To uninstall Service Mesh from an existing OpenShift Container Platform instance, first you delete the Service Mesh control plane and the Operators. Then, you run commands to remove residual resources. 3.12.1.1. Removing the Service Mesh control plane using the web console You can remove the Red Hat OpenShift Service Mesh control plane by using the web console. Procedure Log in to the OpenShift Container Platform web console. Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Navigate to Operators Installed Operators . Click Service Mesh Control Plane under Provided APIs . Click the ServiceMeshControlPlane menu . Click Delete Service Mesh Control Plane . Click Delete on the confirmation dialog window to remove the ServiceMeshControlPlane . 3.12.1.2. Removing the Service Mesh control plane using the CLI You can remove the Red Hat OpenShift Service Mesh control plane by using the CLI. In this example, istio-system is the name of the control plane project. Procedure Log in to the OpenShift Container Platform CLI. Run the following command to delete the ServiceMeshMemberRoll resource. USD oc delete smmr -n istio-system default Run this command to retrieve the name of the installed ServiceMeshControlPlane : USD oc get smcp -n istio-system Replace <name_of_custom_resource> with the output from the command, and run this command to remove the custom resource: USD oc delete smcp -n istio-system <name_of_custom_resource> 3.12.2. Removing the installed Operators You must remove the Operators to successfully remove Red Hat OpenShift Service Mesh. After you remove the Red Hat OpenShift Service Mesh Operator, you must remove the Kiali Operator, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator, and the OpenShift Elasticsearch Operator. 3.12.2.1. Removing the Operators Follow this procedure to remove the Operators that make up Red Hat OpenShift Service Mesh. Repeat the steps for each of the following Operators. Red Hat OpenShift Service Mesh Kiali Red Hat OpenShift distributed tracing platform (Jaeger) OpenShift Elasticsearch Procedure Log in to the OpenShift Container Platform web console. From the Operators Installed Operators page, scroll or type a keyword into the Filter by name to find each Operator. Then, click the Operator name. 
On the Operator Details page, select Uninstall Operator from the Actions menu. Follow the prompts to uninstall each Operator. 3.12.2.2. Clean up Operator resources Follow this procedure to manually remove resources left behind after removing the Red Hat OpenShift Service Mesh Operator using the OpenShift Container Platform web console. Prerequisites An account with cluster administration access. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI as a cluster administrator. Run the following commands to clean up resources after uninstalling the Operators. If you intend to keep using Jaeger as a standalone service without service mesh, do not delete the Jaeger resources. Note The Operators are installed in the openshift-operators namespace by default. If you installed the Operators in another namespace, replace openshift-operators with the name of the project where the Red Hat OpenShift Service Mesh Operator was installed. USD oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io USD oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io USD oc delete -n openshift-operators daemonset/istio-node USD oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni USD oc delete clusterrole istio-view istio-edit USD oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view USD oc get crds -o name | grep '.*\.istio\.io' | xargs -r -n 1 oc delete USD oc get crds -o name | grep '.*\.maistra\.io' | xargs -r -n 1 oc delete USD oc get crds -o name | grep '.*\.kiali\.io' | xargs -r -n 1 oc delete USD oc delete crds jaegers.jaegertracing.io USD oc delete svc admission-controller -n <operator-project> USD oc delete project <istio-system-project>
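If you remove Service Mesh from more than one cluster, it can be convenient to collect the cleanup commands above into a small helper script. The following is a minimal sketch rather than part of the documented procedure: the OPERATOR_NS and CONTROL_PLANE_NS variable names and the script itself are illustrative, and the commands are exactly the ones listed above.

#!/bin/bash
# Illustrative cleanup helper based on the commands in this procedure.
# OPERATOR_NS is the project where the Red Hat OpenShift Service Mesh Operator
# was installed (openshift-operators by default); CONTROL_PLANE_NS is the
# Service Mesh control plane project, for example istio-system.
OPERATOR_NS="${1:-openshift-operators}"
CONTROL_PLANE_NS="${2:-istio-system}"

oc delete validatingwebhookconfiguration/"${OPERATOR_NS}".servicemesh-resources.maistra.io
oc delete mutatingwebhookconfiguration/"${OPERATOR_NS}".servicemesh-resources.maistra.io
oc delete -n "${OPERATOR_NS}" daemonset/istio-node
oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni
oc delete clusterrole istio-view istio-edit
# Omit this deletion if you keep using Jaeger as a standalone service.
oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view
oc get crds -o name | grep '.*\.istio\.io' | xargs -r -n 1 oc delete
oc get crds -o name | grep '.*\.maistra\.io' | xargs -r -n 1 oc delete
oc get crds -o name | grep '.*\.kiali\.io' | xargs -r -n 1 oc delete
# Also omit this deletion if you keep using Jaeger as a standalone service.
oc delete crds jaegers.jaegertracing.io
oc delete svc admission-controller -n "${CONTROL_PLANE_NS}"
oc delete project "${CONTROL_PLANE_NS}"

For example, after uninstalling the Operators you might run the script with the namespaces used in this document: openshift-operators for the Operator project and istio-system for the control plane project.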
[ "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.18.0", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6", "oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 gather <namespace>", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]", "apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]", "spec: global: pathNormalization: <option>", "{ \"runtime\": { \"symlink_root\": \"/var/lib/istio/envoy/runtime\" } }", "oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap", "oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: template: default #Change the version to \"v1.0\" if you are on the 1.0 stream. 
version: v1.1 istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap # below is the new secret mount - mountPath: /var/lib/istio/envoy/runtime name: gateway-settings secretName: gateway-settings", "oc get jaeger -n istio-system", "NAME AGE jaeger 3d21h", "oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml", "oc delete jaeger jaeger -n istio-system", "oc create -f /tmp/jaeger-cr.yaml -n istio-system", "rm /tmp/jaeger-cr.yaml", "oc delete -f <jaeger-cr-file>", "oc delete -f jaeger-prod-elasticsearch.yaml", "oc create -f <jaeger-cr-file>", "oc get pods -n jaeger-system -w", "spec: version: v1.1", "apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.headers[<header>]: \"value\"", "apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.regex.headers[<header>]: \"<regular expression>\"", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project istio-system", "oc create -n istio-system -f istio-installation.yaml", "oc get smcp -n istio-system", "NAME READY STATUS PROFILES VERSION AGE basic-install 11/11 ComponentsReady [\"default\"] v1.1.18 4m25s", "oc get pods -n istio-system -w", "NAME READY STATUS RESTARTS AGE grafana-7bf5764d9d-2b2f6 2/2 Running 0 28h istio-citadel-576b9c5bbd-z84z4 1/1 Running 0 28h istio-egressgateway-5476bc4656-r4zdv 1/1 Running 0 28h istio-galley-7d57b47bb7-lqdxv 1/1 Running 0 28h istio-ingressgateway-dbb8f7f46-ct6n5 1/1 Running 0 28h istio-pilot-546bf69578-ccg5x 2/2 Running 0 28h istio-policy-77fd498655-7pvjw 2/2 Running 0 28h istio-sidecar-injector-df45bd899-ctxdt 1/1 Running 0 28h istio-telemetry-66f697d6d5-cj28l 2/2 Running 0 28h jaeger-896945cbc-7lqrr 2/2 Running 0 11h kiali-78d9c5b87c-snjzh 1/1 Running 0 22h prometheus-6dff867c97-gr2n5 2/2 Running 0 28h", "oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443", "oc new-project <your-project>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system default", "oc edit smmr -n <controlplane-namespace>", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name", "oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true", "apiVersion: \"authentication.istio.io/v1alpha1\" kind: \"Policy\" metadata: name: default namespace: <NAMESPACE> spec: peers: - mtls: {}", "apiVersion: \"networking.istio.io/v1alpha3\" kind: \"DestinationRule\" metadata: name: \"default\" namespace: <CONTROL_PLANE_NAMESPACE>> spec: 
host: \"*.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: tls: minProtocolVersion: TLSv1_2 maxProtocolVersion: TLSv1_3", "oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: false", "oc delete secret istio.default", "RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'`", "oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem", "oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem", "openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt", "openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt", "diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt", "sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem", "openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt", "openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt", "diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt", "head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem", "openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem", "/tmp/pod-cert-chain-workload.pem: OK", "oc delete secret cacerts -n istio-system", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: true", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"", "oc apply -f gateway.yaml", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080", "oc apply -f vs.yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')", "curl -s -I \"USDGATEWAY_URL/productpage\"", "oc get svc istio-ingressgateway -n istio-system", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o 
jsonpath='{.spec.ports[?(@.name==\"https\")].port}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')", "export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')", "export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')", "export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')", "export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')", "spec: istio: gateways: istio-egressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 istio-ingressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 ior_enabled: true", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com", "oc -n <control_plane_namespace> get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None", "apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem", "apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3", "oc apply -f <VirtualService.yaml>", "spec: hosts:", "spec: http: - match:", "spec: http: - match: - destination:", "apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-all-v1.yaml", "oc get virtualservices -o yaml", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "echo \"http://USDGATEWAY_URL/productpage\"", "oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml", "oc get virtualservice reviews -o yaml", "oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators", "oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh'", "maistra.v1.0.0 Red Hat OpenShift Service Mesh 1.0.0 Succeeded", "oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0", "deployments: - name: istio-operator spec: template: spec: containers: volumeMounts: - name: discovery-cache mountPath: 
/home/istio-operator/.kube/cache/discovery - name: smcp-templates mountPath: /usr/local/share/istio-operator/templates/ volumes: - name: discovery-cache emptyDir: medium: Memory - name: smcp-templates configMap: name: smcp-templates", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: minimal-install spec: template: default", "oc get deployment -n <namespace>", "get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'true'", "oc apply -n <namespace> -f deployment.yaml", "oc apply -n bookinfo -f deployment-ratings-v1.yaml", "oc get deployment -n <namespace> <deploymentName> -o yaml", "oc get deployment -n bookinfo ratings-v1 -o yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"", "oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks", "oc edit cm -n istio-system istio", "oc new-project bookinfo", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo", "oc create -n istio-system -f servicemeshmemberroll-default.yaml", "oc get smmr -n istio-system -o wide", "NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/platform/kube/bookinfo.yaml", "service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/bookinfo-gateway.yaml", "gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created", "export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all.yaml", "oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all-mtls.yaml", "destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created", "oc get pods -n bookinfo", "NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m", "echo \"http://USDGATEWAY_URL/productpage\"", "oc delete project bookinfo", "oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", 
\"value\":[\"'\"bookinfo\"'\"]}]'", "curl \"http://USDGATEWAY_URL/productpage\"", "export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')", "echo USDJAEGER_URL", "curl \"http://USDGATEWAY_URL/productpage\"", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: basic-install spec: istio: global: proxy: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi gateways: istio-egressgateway: autoscaleEnabled: false istio-ingressgateway: autoscaleEnabled: false ior_enabled: false mixer: policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 100m memory: 1G limits: cpu: 500m memory: 4G pilot: autoscaleEnabled: false traceSampling: 100 kiali: enabled: true grafana: enabled: true tracing: enabled: true jaeger: template: all-in-one", "istio: global: tag: 1.1.0 hub: registry.redhat.io/openshift-service-mesh/ proxy: resources: requests: cpu: 10m memory: 128Mi limits: mtls: enabled: false disablePolicyChecks: true policyCheckFailOpen: false imagePullSecrets: - MyPullSecret", "gateways: egress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 enabled: true ingress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1", "mixer: enabled: true policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 10m memory: 128Mi limits:", "spec: runtime: components: pilot: deployment: autoScaling: enabled: true minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 85 pod: tolerations: - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 60 affinity: podAntiAffinity: requiredDuringScheduling: - key: istio topologyKey: kubernetes.io/hostname operator: In values: - pilot container: resources: limits: cpu: 100m memory: 128M", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: kiali: enabled: true dashboard: viewOnlyMode: false ingress: enabled: true", "enabled", "dashboard viewOnlyMode", "ingress enabled", "spec: kiali: enabled: true dashboard: viewOnlyMode: false grafanaURL: \"https://grafana-istio-system.127.0.0.1.nip.io\" ingress: enabled: true", "spec: kiali: enabled: true dashboard: viewOnlyMode: false jaegerURL: \"http://jaeger-query-istio-system.127.0.0.1.nip.io\" ingress: enabled: true", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: version: v1.1 istio: tracing: enabled: true jaeger: template: all-in-one", "tracing: enabled:", "jaeger: template:", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"", "tracing: enabled:", "ingress: enabled:", "jaeger: template:", "elasticsearch: nodeCount:", "requests: cpu:", "requests: memory:", "limits: cpu:", "limits: memory:", "oc get route -n istio-system external-jaeger", "NAME HOST/PORT PATH SERVICES [...] external-jaeger external-jaeger-istio-system.apps.test external-jaeger-query [...]", "apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"external-jaeger\" # Deploy to the Control Plane Namespace namespace: istio-system spec: # Set Up Authentication ingress: enabled: true security: oauth-proxy openshift: # This limits user access to the Jaeger instance to users who have access # to the control plane namespace. 
Make sure to set the correct namespace here sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' htpasswdFile: /etc/proxy/htpasswd/auth volumeMounts: - name: secret-htpasswd mountPath: /etc/proxy/htpasswd volumes: - name: secret-htpasswd secret: secretName: htpasswd", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: external-jaeger namespace: istio-system spec: version: v1.1 istio: tracing: # Disable Jaeger deployment by service mesh operator enabled: false global: tracer: zipkin: # Set Endpoint for Trace Collection address: external-jaeger-collector.istio-system.svc.cluster.local:9411 kiali: # Set Jaeger dashboard URL dashboard: jaegerURL: https://external-jaeger-istio-system.apps.test # Set Endpoint for Trace Querying jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local", "apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"", "tracing: enabled:", "ingress: enabled:", "jaeger: template:", "elasticsearch: nodeCount:", "requests: cpu:", "requests: memory:", "limits: cpu:", "limits: memory:", "apiVersion: jaegertracing.io/v1 kind: Jaeger spec: strategy: production storage: type: elasticsearch esIndexCleaner: enabled: false numberOfDays: 7 schedule: \"55 23 * * *\"", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true", "apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance", "3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"", "3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"", "export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}", "export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" 
--template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs", "apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"", "oc get pods -n istio-system", "oc logs istio-system", "oc delete smmr -n istio-system default", "oc get smcp -n istio-system", "oc delete smcp -n istio-system <name_of_custom_resource>", "oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io", "oc delete -n openshift-operators daemonset/istio-node", "oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni", "oc delete clusterrole istio-view istio-edit", "oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view", "oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete", "oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete", "oc delete crds jaegers.jaegertracing.io", "oc delete svc admission-controller -n <operator-project>", "oc delete project 
<istio-system-project>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/service_mesh/service-mesh-1-x
4.4. Configuring the Samba Cluster Resources
4.4. Configuring the Samba Cluster Resources This section provides the procedure for configuring the Samba cluster resources for this use case. The following procedure creates a snapshot of the cluster's cib file named samba.cib and adds the resources to that test file rather than configuring them directly on the running cluster. After the resources and constraints are configured, the procedure pushes the contents of samba.cib to the running cluster configuration file. On one node of the cluster, run the following procedure. Create a snapshot of the cib file, which is the cluster configuration file. Create the CTDB resource to be used by Samba. Create this resource as a cloned resource so that it will run on both cluster nodes. Create the cloned Samba server. Create the colocation and order constraints for the cluster resources. The startup order is Filesystem resource, CTDB resource, then Samba resource. Push the content of the cib snapshot to the cluster. Check the status of the cluster to verify that the resource is running. Note that in Red Hat Enterprise Linux 7.4 it can take a couple of minutes for CTDB to start Samba, export the shares, and stabilize. If you check the cluster status before this process has completed, you may see a message that the CTDB status call failed. Once this process has completed, you can clear this message from the display by running the pcs resource cleanup ctdb-clone command. Note If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration. This starts the service outside of the cluster's control and knowledge. If the configured resources are running again, run pcs resource cleanup resource to make the cluster aware of the updates. For information on the pcs resource debug-start command, see the Enabling, Disabling, and Banning Cluster Resources section in the High Availability Add-On Reference manual.
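For example, if the cloned CTDB resource in this configuration fails to start, the debug and cleanup sequence described in the note above might look like the following sketch; the resource names match the ones created earlier in this procedure.

# Start the CTDB resource outside of cluster control to test its configuration
pcs resource debug-start ctdb
# After correcting the problem, make the cluster aware of the updated state
pcs resource cleanup ctdb-clone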
[ "pcs cluster cib samba.cib", "pcs -f samba.cib resource create ctdb ocf:heartbeat:CTDB ctdb_recovery_lock=\"/mnt/gfs2share/ctdb/ctdb.lock\" ctdb_dbdir=/var/ctdb ctdb_socket=/tmp/ctdb.socket ctdb_logfile=/var/log/ctdb.log op monitor interval=10 timeout=30 op start timeout=90 op stop timeout=100 --clone", "pcs -f samba.cib resource create samba systemd:smb --clone", "pcs -f samba.cib constraint order fs-clone then ctdb-clone Adding fs-clone ctdb-clone (kind: Mandatory) (Options: first-action=start then-action=start) pcs -f samba.cib constraint order ctdb-clone then samba-clone Adding ctdb-clone samba-clone (kind: Mandatory) (Options: first-action=start then-action=start) pcs -f samba.cib constraint colocation add ctdb-clone with fs-clone pcs -f samba.cib constraint colocation add samba-clone with ctdb-clone", "pcs cluster cib-push samba.cib CIB updated", "pcs status Cluster name: my_cluster Stack: corosync Current DC: z1.example.com (version 1.1.16-12.el7_4.2-94ff4df) - partition with quorum Last updated: Thu Oct 19 18:17:07 2017 Last change: Thu Oct 19 18:16:50 2017 by hacluster via crmd on z1.example.com 2 nodes configured 11 resources configured Online: [ z1.example.com z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Clone Set: dlm-clone [dlm] Started: [ z1.example.com z2.example.com ] Clone Set: clvmd-clone [clvmd] Started: [ z1.example.com z2.example.com ] Clone Set: fs-clone [fs] Started: [ z1.example.com z2.example.com ] Clone Set: ctdb-clone [ctdb] Started: [ z1.example.com z2.example.com ] Clone Set: samba-clone [samba] Started: [ z1.example.com z2.example.com ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-resourcegroupcreatesamba-haaa
Chapter 9. Scalability and performance optimization
Chapter 9. Scalability and performance optimization 9.1. Optimizing storage Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources are working in an efficient manner. 9.1.1. Available persistent storage options Understand your persistent storage options so that you can optimize your OpenShift Container Platform environment.
Table 9.1. Available storage options
Block
  Description: Presented to the operating system (OS) as a block device. Suitable for applications that need full control of storage and operate at a low level on files bypassing the file system. Also referred to as a Storage Area Network (SAN). Non-shareable, which means that only one client at a time can mount an endpoint of this type.
  Examples: AWS EBS and VMware vSphere support dynamic persistent volume (PV) provisioning natively in the OpenShift Container Platform.
File
  Description: Presented to the OS as a file system export to be mounted. Also referred to as Network Attached Storage (NAS). Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales.
  Examples: RHEL NFS, NetApp NFS [1], and Vendor NFS
Object
  Description: Accessible through a REST API endpoint. Configurable for use in the OpenShift image registry. Applications must build their drivers into the application and/or container.
  Examples: AWS S3
[1] NetApp NFS supports dynamic PV provisioning when using the Trident plugin.
9.1.2. Recommended configurable storage technology The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application.
Table 9.2. Recommended and configurable storage technology
Storage type            Block              File                Object
ROX [1]                 Yes [4]            Yes [4]             Yes
RWX [2]                 No                 Yes                 Yes
Registry                Configurable       Configurable        Recommended
Scaled registry         Not configurable   Configurable        Recommended
Metrics [3]             Recommended        Configurable [5]    Not configurable
Elasticsearch Logging   Recommended        Configurable [6]    Not supported [6]
Loki Logging            Not configurable   Not configurable    Recommended
Apps                    Recommended        Recommended         Not configurable [7]
[1] ReadOnlyMany
[2] ReadWriteMany
[3] Prometheus is the underlying technology used for metrics.
[4] This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk.
[5] For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics.
[6] For logging, review the recommended storage solution in Configuring persistent storage for the log store section. Using NFS storage as a persistent volume or through NAS, such as Gluster, can corrupt the data. Hence, NFS is not supported for Elasticsearch storage and LokiStack log store in OpenShift Container Platform Logging. You must use one persistent volume type per log store.
[7] Object storage is not consumed through OpenShift Container Platform's PVs or PVCs. Apps must integrate with the object storage REST API.
Note A scaled registry is an OpenShift image registry where two or more pod replicas are running. 9.1.2.1. Specific application storage recommendations Important Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage.
Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 9.1.2.1.1. Registry In a non-scaled/high-availability (HA) OpenShift image registry cluster deployment: The storage technology does not have to support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage followed by block storage. File storage is not recommended for OpenShift image registry cluster deployment with production workloads. 9.1.2.1.2. Scaled registry In a scaled/HA OpenShift image registry cluster deployment: The storage technology must support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage. Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported. Object storage should be S3 or Swift compliant. For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage. Block storage is not configurable. The use of Network File System (NFS) storage with OpenShift Container Platform is supported. However, the use of NFS storage with a scaled registry can cause known issues. For more information, see the Red Hat Knowledgebase solution, Is NFS supported for OpenShift cluster internal components in Production? . 9.1.2.1.3. Metrics In an OpenShift Container Platform hosted metrics cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. Important It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads. 9.1.2.1.4. Logging In an OpenShift Container Platform hosted logging cluster deployment: Loki Operator: The preferred storage technology is S3 compatible Object storage. Block storage is not configurable. OpenShift Elasticsearch Operator: The preferred storage technology is block storage. Object storage is not supported. Note As of logging version 5.4.3 the OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. 9.1.2.1.5. Applications Application use cases vary from application to application, as described in the following examples: Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster. Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer. 9.1.2.2. Other specific application storage recommendations Important It is not recommended to use RAID configurations on Write intensive workloads, such as etcd . 
If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads. Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases. Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage. The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices . 9.1.3. Data storage management The following table summarizes the main directories that OpenShift Container Platform components write data to. Table 9.3. Main directories for storing OpenShift Container Platform data Directory Notes Sizing Expected growth /var/log Log files for all components. 10 to 30 GB. Log files can grow quickly; size can be managed by growing disks or by using log rotate. /var/lib/etcd Used for etcd storage when storing the database. Less than 20 GB. Database can grow up to 8 GB. Will grow slowly with the environment. Only storing metadata. Additional 20-25 GB for every additional 8 GB of memory. /var/lib/containers This is the mount point for the CRI-O runtime. Storage used for active container runtimes, including pods, and storage of local images. Not used for registry storage. 50 GB for a node with 16 GB memory. Note that this sizing should not be used to determine minimum cluster requirements. Additional 20-25 GB for every additional 8 GB of memory. Growth is limited by capacity for running containers. /var/lib/kubelet Ephemeral volume storage for pods. This includes anything external that is mounted into a container at runtime. Includes environment variables, kube secrets, and data volumes not backed by persistent volumes. Varies Minimal if pods requiring storage are using persistent volumes. If using ephemeral storage, this can grow quickly. 9.1.4. Optimizing storage performance for Microsoft Azure OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. For production Azure clusters and clusters with intensive workloads, the virtual machine operating system disk for control plane machines should be able to sustain a tested and recommended minimum throughput of 5000 IOPS / 200MBps. This throughput can be provided by having a minimum of 1 TiB Premium SSD (P30). In Azure and Azure Stack Hub, disk performance is directly dependent on SSD disk sizes. To achieve the throughput supported by a Standard_D8s_v3 virtual machine, or other similar machine types, and the target of 5000 IOPS, at least a P30 disk is required. Host caching must be set to ReadOnly for low latency and high IOPS and throughput when reading data. Reading data from the cache, which is present either in the VM memory or in the local SSD disk, is much faster than reading from the disk, which is in the blob storage. 9.1.5. Additional resources Configuring the Elasticsearch log store 9.2. Optimizing routing The OpenShift Container Platform HAProxy router can be scaled or configured to optimize performance. 9.2.1. Baseline Ingress Controller (router) performance The OpenShift Container Platform Ingress Controller, or router, is the ingress point for ingress traffic for applications and services that are configured using routes and ingresses. 
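In practice, the "scaled" part of router optimization usually means adjusting the replica count of the default Ingress Controller. The following is a minimal sketch rather than a tuning recommendation; the replica count of 3 is an arbitrary example value.

# Check the current number of router replicas
oc -n openshift-ingress-operator get ingresscontroller/default -o jsonpath='{.spec.replicas}'
# Scale the default Ingress Controller to three replicas
oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{"spec":{"replicas":3}}'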
When evaluating a single HAProxy router performance in terms of HTTP requests handled per second, the performance varies depending on many factors. In particular:
HTTP keep-alive/close mode
Route type
TLS session resumption client support
Number of concurrent connections per target route
Number of target routes
Back end server page size
Underlying infrastructure (network/SDN solution, CPU, and so on)
While performance in your specific environment will vary, Red Hat performed lab tests on a public cloud instance of size 4 vCPU/16GB RAM. A single HAProxy router handling 100 routes terminated by backends serving 1kB static pages is able to handle the following number of transactions per second.
In HTTP keep-alive mode scenarios:
Encryption     LoadBalancerService   HostNetwork
none           21515                 29622
edge           16743                 22913
passthrough    36786                 53295
re-encrypt     21583                 25198
In HTTP close (no keep-alive) scenarios:
Encryption     LoadBalancerService   HostNetwork
none           5719                  8273
edge           2729                  4069
passthrough    4121                  5344
re-encrypt     2320                  2941
The default Ingress Controller configuration was used with the spec.tuningOptions.threadCount field set to 4. Two different endpoint publishing strategies were tested: Load Balancer Service and Host Network. TLS session resumption was used for encrypted routes. With HTTP keep-alive, a single HAProxy router is capable of saturating a 1 Gbit NIC at page sizes as small as 8 kB. When running on bare metal with modern processors, you can expect roughly twice the performance of the public cloud instance above. This overhead is introduced by the virtualization layer in place on public clouds and holds mostly true for private cloud-based virtualization as well. The following table is a guide to how many applications to use behind the router:
Number of applications   Application type
5-10                     static file/web server or caching proxy
100-1000                 applications generating dynamic content
In general, HAProxy can support routes for up to 1000 applications, depending on the technology in use. Ingress Controller performance might be limited by the capabilities and performance of the applications behind it, such as language or static versus dynamic content. Ingress, or router, sharding should be used to serve more routes towards applications and help horizontally scale the routing tier. For more information on Ingress sharding, see Configuring Ingress Controller sharding by using route labels and Configuring Ingress Controller sharding by using namespace labels. You can modify the Ingress Controller deployment by using the information provided in Setting Ingress Controller thread count for threads and Ingress Controller configuration parameters for timeouts, and other tuning configurations in the Ingress Controller specification. 9.2.2. Configuring Ingress Controller liveness, readiness, and startup probes Cluster administrators can configure the timeout values for the kubelet's liveness, readiness, and startup probes for router deployments that are managed by the OpenShift Container Platform Ingress Controller (router). The liveness and readiness probes of the router use the default timeout value of 1 second, which is too brief when networking or runtime performance is severely degraded. Probe timeouts can cause unwanted router restarts that interrupt application connections. The ability to set larger timeout values can reduce the risk of unnecessary and unwanted restarts. You can update the timeoutSeconds value on the livenessProbe, readinessProbe, and startupProbe parameters of the router container.
Parameter Description livenessProbe The livenessProbe reports to the kubelet whether a pod is dead and needs to be restarted. readinessProbe The readinessProbe reports whether a pod is healthy or unhealthy. When the readiness probe reports an unhealthy pod, then the kubelet marks the pod as not ready to accept traffic. Subsequently, the endpoints for that pod are marked as not ready, and this status propagates to the kube-proxy. On cloud platforms with a configured load balancer, the kube-proxy communicates to the cloud load-balancer not to send traffic to the node with that pod. startupProbe The startupProbe gives the router pod up to 2 minutes to initialize before the kubelet begins sending the router liveness and readiness probes. This initialization time can prevent routers with many routes or endpoints from prematurely restarting. Important The timeout configuration option is an advanced tuning technique that can be used to work around issues. However, these issues should eventually be diagnosed and possibly a support case or Jira issue opened for any issues that causes probes to time out. The following example demonstrates how you can directly patch the default router deployment to set a 5-second timeout for the liveness and readiness probes: USD oc -n openshift-ingress patch deploy/router-default --type=strategic --patch='{"spec":{"template":{"spec":{"containers":[{"name":"router","livenessProbe":{"timeoutSeconds":5},"readinessProbe":{"timeoutSeconds":5}}]}}}}' Verification USD oc -n openshift-ingress describe deploy/router-default | grep -e Liveness: -e Readiness: Liveness: http-get http://:1936/healthz delay=0s timeout=5s period=10s #success=1 #failure=3 Readiness: http-get http://:1936/healthz/ready delay=0s timeout=5s period=10s #success=1 #failure=3 9.2.3. Configuring HAProxy reload interval When you update a route or an endpoint associated with a route, the OpenShift Container Platform router updates the configuration for HAProxy. Then, HAProxy reloads the updated configuration for those changes to take effect. When HAProxy reloads, it generates a new process that handles new connections using the updated configuration. HAProxy keeps the old process running to handle existing connections until those connections are all closed. When old processes have long-lived connections, these processes can accumulate and consume resources. The default minimum HAProxy reload interval is five seconds. You can configure an Ingress Controller using its spec.tuningOptions.reloadInterval field to set a longer minimum reload interval. Warning Setting a large value for the minimum HAProxy reload interval can cause latency in observing updates to routes and their endpoints. To lessen the risk, avoid setting a value larger than the tolerable latency for updates. Procedure Change the minimum HAProxy reload interval of the default Ingress Controller to 15 seconds by running the following command: USD oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"tuningOptions":{"reloadInterval":"15s"}}}' 9.3. Optimizing networking The OpenShift SDN uses OpenvSwitch, virtual extensible LAN (VXLAN) tunnels, OpenFlow rules, and iptables. This network can be tuned by using jumbo frames, multi-queue, and ethtool settings. OVN-Kubernetes uses Generic Network Virtualization Encapsulation (Geneve) instead of VXLAN as the tunnel protocol. This network can be tuned by using network interface controller (NIC) offloads. 
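To see whether a node's NIC exposes the tunnel-related offload features mentioned above, you can inspect it from a node debug shell. This is only an illustrative sketch: the interface name ens3 is an assumption and varies by environment.

# Open a debug shell on the node and switch to the host file system
oc debug node/<node_name>
chroot /host
# List offload features related to UDP tunnel (VXLAN/Geneve) segmentation;
# replace ens3 with the interface that carries cluster traffic
ethtool -k ens3 | grep -i udp_tnl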
VXLAN provides benefits over VLANs, such as an increase in networks from 4096 to over 16 million, and layer 2 connectivity across physical networks. This allows for all pods behind a service to communicate with each other, even if they are running on different systems. VXLAN encapsulates all tunneled traffic in user datagram protocol (UDP) packets. However, this leads to increased CPU utilization. Both these outer- and inner-packets are subject to normal checksumming rules to guarantee data is not corrupted during transit. Depending on CPU performance, this additional processing overhead can cause a reduction in throughput and increased latency when compared to traditional, non-overlay networks. Cloud, VM, and bare metal CPU performance can be capable of handling much more than one Gbps network throughput. When using higher bandwidth links such as 10 or 40 Gbps, reduced performance can occur. This is a known issue in VXLAN-based environments and is not specific to containers or OpenShift Container Platform. Any network that relies on VXLAN tunnels will perform similarly because of the VXLAN implementation. If you are looking to push beyond one Gbps, you can: Evaluate network plugins that implement different routing techniques, such as border gateway protocol (BGP). Use VXLAN-offload capable network adapters. VXLAN-offload moves the packet checksum calculation and associated CPU overhead off of the system CPU and onto dedicated hardware on the network adapter. This frees up CPU cycles for use by pods and applications, and allows users to utilize the full bandwidth of their network infrastructure. VXLAN-offload does not reduce latency. However, CPU utilization is reduced even in latency tests. 9.3.1. Optimizing the MTU for your network There are two important maximum transmission units (MTUs): the network interface controller (NIC) MTU and the cluster network MTU. The NIC MTU is configured at the time of OpenShift Container Platform installation, and you can also change the cluster's MTU as a Day 2 operation. See "Changing cluster network MTU" for more information. The MTU must be less than or equal to the maximum supported value of the NIC of your network. If you are optimizing for throughput, choose the largest possible value. If you are optimizing for lowest latency, choose a lower value. The OpenShift SDN network plugin overlay MTU must be less than the NIC MTU by 50 bytes at a minimum. This accounts for the SDN overlay header. So, on a normal ethernet network, this should be set to 1450 . On a jumbo frame ethernet network, this should be set to 8950 . These values should be set automatically by the Cluster Network Operator based on the NIC's configured MTU. Therefore, cluster administrators do not typically update these values. Amazon Web Services (AWS) and bare-metal environments support jumbo frame ethernet networks. This setting will help throughput, especially with transmission control protocol (TCP). For OVN and Geneve, the MTU must be less than the NIC MTU by 100 bytes at a minimum. Note This 50 byte overlay header is relevant to the OpenShift SDN network plugin. Other SDN solutions might require the value to be more or less. Additional resources Changing cluster network MTU 9.3.2. 
Recommended practices for installing large scale clusters When installing large clusters or scaling the cluster to larger node counts, set the cluster network cidr accordingly in your install-config.yaml file before you install the cluster: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 The default cluster network cidr 10.128.0.0/14 cannot be used if the cluster size is more than 500 nodes. It must be set to 10.128.0.0/12 or 10.128.0.0/10 to get to larger node counts beyond 500 nodes. 9.3.3. Impact of IPsec Because encrypting and decrypting node hosts uses CPU power, performance is affected both in throughput and CPU usage on the nodes when encryption is enabled, regardless of the IP security system being used. IPSec encrypts traffic at the IP payload level, before it hits the NIC, protecting fields that would otherwise be used for NIC offloading. This means that some NIC acceleration features might not be usable when IPSec is enabled and will lead to decreased throughput and increased CPU usage. 9.3.4. Additional resources Modifying advanced network configuration parameters Configuration parameters for the OVN-Kubernetes network plugin Configuration parameters for the OpenShift SDN network plugin Improving cluster stability in high latency environments using worker latency profiles 9.4. Optimizing CPU usage with mount namespace encapsulation You can optimize CPU usage in OpenShift Container Platform clusters by using mount namespace encapsulation to provide a private namespace for kubelet and CRI-O processes. This reduces the cluster CPU resources used by systemd with no difference in functionality. Important Mount namespace encapsulation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 9.4.1. Encapsulating mount namespaces Mount namespaces are used to isolate mount points so that processes in different namespaces cannot view each others' files. Encapsulation is the process of moving Kubernetes mount namespaces to an alternative location where they will not be constantly scanned by the host operating system. The host operating system uses systemd to constantly scan all mount namespaces: both the standard Linux mounts and the numerous mounts that Kubernetes uses to operate. The current implementation of kubelet and CRI-O both use the top-level namespace for all container runtime and kubelet mount points. However, encapsulating these container-specific mount points in a private namespace reduces systemd overhead with no difference in functionality. Using a separate mount namespace for both CRI-O and kubelet can encapsulate container-specific mounts from any systemd or other host operating system interaction. This ability to potentially achieve major CPU optimization is now available to all OpenShift Container Platform administrators. Encapsulation can also improve security by storing Kubernetes-specific mount points in a location safe from inspection by unprivileged users. 
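The isolation itself is a standard Linux mechanism. The following sketch is generic Linux rather than anything OpenShift-specific: it shows how a mount created inside a private mount namespace stays invisible to the host namespace.

# Run as root: create a private mount namespace, mount a tmpfs inside it,
# and confirm that the mount exists there
sudo unshare --mount sh -c 'mount -t tmpfs tmpfs /mnt && findmnt /mnt'
# Back in the host mount namespace the same mount point shows nothing,
# because the tmpfs mount never propagated out of the private namespace
findmnt /mnt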
The following diagrams illustrate a Kubernetes installation before and after encapsulation. Both scenarios show example containers which have mount propagation settings of bidirectional, host-to-container, and none. Here we see systemd, host operating system processes, kubelet, and the container runtime sharing a single mount namespace. systemd, host operating system processes, kubelet, and the container runtime each have access to and visibility of all mount points. Container 1, configured with bidirectional mount propagation, can access systemd and host mounts, kubelet and CRI-O mounts. A mount originating in Container 1, such as /run/a is visible to systemd, host operating system processes, kubelet, container runtime, and other containers with host-to-container or bidirectional mount propagation configured (as in Container 2). Container 2, configured with host-to-container mount propagation, can access systemd and host mounts, kubelet and CRI-O mounts. A mount originating in Container 2, such as /run/b , is not visible to any other context. Container 3, configured with no mount propagation, has no visibility of external mount points. A mount originating in Container 3, such as /run/c , is not visible to any other context. The following diagram illustrates the system state after encapsulation. The main systemd process is no longer devoted to unnecessary scanning of Kubernetes-specific mount points. It only monitors systemd-specific and host mount points. The host operating system processes can access only the systemd and host mount points. Using a separate mount namespace for both CRI-O and kubelet completely separates all container-specific mounts away from any systemd or other host operating system interaction whatsoever. The behavior of Container 1 is unchanged, except a mount it creates such as /run/a is no longer visible to systemd or host operating system processes. It is still visible to kubelet, CRI-O, and other containers with host-to-container or bidirectional mount propagation configured (like Container 2). The behavior of Container 2 and Container 3 is unchanged. 9.4.2. Configuring mount namespace encapsulation You can configure mount namespace encapsulation so that a cluster runs with less resource overhead. Note Mount namespace encapsulation is a Technology Preview feature and it is disabled by default. To use it, you must enable the feature manually. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Create a file called mount_namespace_config.yaml with the following YAML: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-kubens-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service --- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-kubens-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service Apply the mount namespace MachineConfig CR by running the following command: USD oc apply -f mount_namespace_config.yaml Example output machineconfig.machineconfiguration.openshift.io/99-kubens-master created machineconfig.machineconfiguration.openshift.io/99-kubens-worker created The MachineConfig CR can take up to 30 minutes to finish being applied in the cluster. 
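While the rollout is in progress, and in addition to the machine config pool checks shown next, you can confirm on an individual node that the kubens.service unit referenced by the MachineConfig is present and enabled. This is a sketch only; <node_name> is a placeholder, and the reported state can vary until the node has finished applying the new configuration.

oc debug node/<node_name> -- chroot /host systemctl is-enabled kubens.service
oc debug node/<node_name> -- chroot /host systemctl is-active kubens.service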
You can check the status of the MachineConfig CR by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-03d4bc4befb0f4ed3566a2c8f7636751 False True False 3 0 0 0 45m worker rendered-worker-10577f6ab0117ed1825f8af2ac687ddf False True False 3 1 1 Wait for the MachineConfig CR to be applied successfully across all control plane and worker nodes after running the following command: USD oc wait --for=condition=Updated mcp --all --timeout=30m Example output machineconfigpool.machineconfiguration.openshift.io/master condition met machineconfigpool.machineconfiguration.openshift.io/worker condition met Verification To verify encapsulation for a cluster host, run the following commands: Open a debug shell to the cluster host: USD oc debug node/<node_name> Open a chroot session: sh-4.4# chroot /host Check the systemd mount namespace: sh-4.4# readlink /proc/1/ns/mnt Example output mnt:[4026531953] Check kubelet mount namespace: sh-4.4# readlink /proc/USD(pgrep kubelet)/ns/mnt Example output mnt:[4026531840] Check the CRI-O mount namespace: sh-4.4# readlink /proc/USD(pgrep crio)/ns/mnt Example output mnt:[4026531840] These commands return the mount namespaces associated with systemd, kubelet, and the container runtime. In OpenShift Container Platform, the container runtime is CRI-O. Encapsulation is in effect if systemd is in a different mount namespace to kubelet and CRI-O as in the above example. Encapsulation is not in effect if all three processes are in the same mount namespace. 9.4.3. Inspecting encapsulated namespaces You can inspect Kubernetes-specific mount points in the cluster host operating system for debugging or auditing purposes by using the kubensenter script that is available in Red Hat Enterprise Linux CoreOS (RHCOS). SSH shell sessions to the cluster host are in the default namespace. To inspect Kubernetes-specific mount points in an SSH shell prompt, you need to run the kubensenter script as root. The kubensenter script is aware of the state of the mount encapsulation, and is safe to run even if encapsulation is not enabled. Note oc debug remote shell sessions start inside the Kubernetes namespace by default. You do not need to run kubensenter to inspect mount points when you use oc debug . If the encapsulation feature is not enabled, the kubensenter findmnt and findmnt commands return the same output, regardless of whether they are run in an oc debug session or in an SSH shell prompt. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have configured SSH access to the cluster host. Procedure Open a remote SSH shell to the cluster host. For example: USD ssh core@<node_name> Run commands using the provided kubensenter script as the root user. To run a single command inside the Kubernetes namespace, provide the command and any arguments to the kubensenter script. For example, to run the findmnt command inside the Kubernetes namespace, run the following command: [core@control-plane-1 ~]USD sudo kubensenter findmnt Example output kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt TARGET SOURCE FSTYPE OPTIONS / /dev/sda4[/ostree/deploy/rhcos/deploy/32074f0e8e5ec453e56f5a8a7bc9347eaa4172349ceab9c22b709d9d71a3f4b0.0] | xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota shm tmpfs ... 
To start a new interactive shell inside the Kubernetes namespace, run the kubensenter script without any arguments: [core@control-plane-1 ~]USD sudo kubensenter Example output kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt 9.4.4. Running additional services in the encapsulated namespace Any monitoring tool that runs in the host operating system and needs visibility of the mount points created by kubelet, CRI-O, or the containers themselves must enter the container mount namespace to see these mount points. The kubensenter script that is provided with OpenShift Container Platform executes another command inside the Kubernetes mount namespace and can be used to adapt any existing tools. The kubensenter script is aware of the state of the mount encapsulation feature, and is safe to run even if encapsulation is not enabled. In that case, the script executes the provided command in the default mount namespace. For example, if a systemd service needs to run inside the new Kubernetes mount namespace, edit the service file and use the ExecStart= command line with kubensenter . [Unit] Description=Example service [Service] ExecStart=/usr/bin/kubensenter /path/to/original/command arg1 arg2 9.4.5. Additional resources What are namespaces Manage containers in namespaces by using nsenter MachineConfig
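Returning to the systemd example in section 9.4.4, if you prefer not to edit the original unit file, the same result can usually be achieved with a drop-in. The following sketch assumes a hypothetical my-monitoring-agent.service that is already installed on the node; the service name and binary path are placeholders, not part of OpenShift Container Platform, and on OpenShift nodes persistent file changes like this are normally delivered through a MachineConfig rather than edited in place.

# Create a drop-in that clears the original ExecStart and re-runs the command under kubensenter
sudo mkdir -p /etc/systemd/system/my-monitoring-agent.service.d
sudo tee /etc/systemd/system/my-monitoring-agent.service.d/10-kubensenter.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/kubensenter /usr/bin/my-monitoring-agent --existing-flags
EOF
sudo systemctl daemon-reload
sudo systemctl restart my-monitoring-agent.service

The empty ExecStart= line clears the original command; without it, systemd rejects a non-oneshot service that ends up with more than one ExecStart entry.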
[ "oc -n openshift-ingress patch deploy/router-default --type=strategic --patch='{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"router\",\"livenessProbe\":{\"timeoutSeconds\":5},\"readinessProbe\":{\"timeoutSeconds\":5}}]}}}}'", "oc -n openshift-ingress describe deploy/router-default | grep -e Liveness: -e Readiness: Liveness: http-get http://:1936/healthz delay=0s timeout=5s period=10s #success=1 #failure=3 Readiness: http-get http://:1936/healthz/ready delay=0s timeout=5s period=10s #success=1 #failure=3", "oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"tuningOptions\":{\"reloadInterval\":\"15s\"}}}'", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-kubens-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service --- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-kubens-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service", "oc apply -f mount_namespace_config.yaml", "machineconfig.machineconfiguration.openshift.io/99-kubens-master created machineconfig.machineconfiguration.openshift.io/99-kubens-worker created", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-03d4bc4befb0f4ed3566a2c8f7636751 False True False 3 0 0 0 45m worker rendered-worker-10577f6ab0117ed1825f8af2ac687ddf False True False 3 1 1", "oc wait --for=condition=Updated mcp --all --timeout=30m", "machineconfigpool.machineconfiguration.openshift.io/master condition met machineconfigpool.machineconfiguration.openshift.io/worker condition met", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# readlink /proc/1/ns/mnt", "mnt:[4026531953]", "sh-4.4# readlink /proc/USD(pgrep kubelet)/ns/mnt", "mnt:[4026531840]", "sh-4.4# readlink /proc/USD(pgrep crio)/ns/mnt", "mnt:[4026531840]", "ssh core@<node_name>", "[core@control-plane-1 ~]USD sudo kubensenter findmnt", "kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt TARGET SOURCE FSTYPE OPTIONS / /dev/sda4[/ostree/deploy/rhcos/deploy/32074f0e8e5ec453e56f5a8a7bc9347eaa4172349ceab9c22b709d9d71a3f4b0.0] | xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota shm tmpfs", "[core@control-plane-1 ~]USD sudo kubensenter", "kubensenter: Autodetect: kubens.service namespace found at /run/kubens/mnt", "[Unit] Description=Example service [Service] ExecStart=/usr/bin/kubensenter /path/to/original/command arg1 arg2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/scalability_and_performance/scalability-and-performance-optimization
16.2.4. DHCP Relay Agent
16.2.4. DHCP Relay Agent The DHCP Relay Agent ( dhcrelay ) allows for the relay of DHCP and BOOTP requests from a subnet with no DHCP server on it to one or more DHCP servers on other subnets. When a DHCP client requests information, the DHCP Relay Agent forwards the request to the list of DHCP servers specified when the DHCP Relay Agent is started. When a DHCP server returns a reply, the reply is broadcast or unicast on the network that sent the original request. The DHCP Relay Agent listens for DHCP requests on all interfaces unless the interfaces are specified in /etc/sysconfig/dhcrelay with the INTERFACES directive. To start the DHCP Relay Agent, use the command service dhcrelay start .
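As a sketch of a typical setup on Red Hat Enterprise Linux 6, the relay can be restricted to specific interfaces and pointed at the DHCP servers it should forward requests to. The INTERFACES directive comes from the text above; the DHCPSERVERS directive and the example addresses are assumptions about the dhcrelay init script on your system, so check the comments in /etc/sysconfig/dhcrelay before relying on them.

# Write the relay configuration
cat > /etc/sysconfig/dhcrelay <<'EOF'
# Listen for client broadcasts only on eth1
INTERFACES="eth1"
# Forward requests to these DHCP servers on other subnets
DHCPSERVERS="192.0.2.10 192.0.2.11"
EOF

# Start the relay now and at boot
service dhcrelay start
chkconfig dhcrelay on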
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/dhcp-relay-agent
5.2.2. Displaying Physical Volumes
5.2.2. Displaying Physical Volumes There are three commands you can use to display properties of LVM physical volumes: pvs , pvdisplay , and pvscan . The pvs command provides physical volume information in a configurable form, displaying one line per physical volume. The pvs command provides a great deal of format control, and is useful for scripting. For information on using the pvs command to customize your output, see Section 5.8, "Customized Reporting for LVM" . The pvdisplay command provides a verbose multi-line output for each physical volume. It displays physical properties (size, extents, volume group, and so on) in a fixed format. The following example shows the output of the pvdisplay command for a single physical volume. The pvscan command scans all supported LVM block devices in the system for physical volumes. The following command shows all physical devices found: You can define a filter in the /etc/lvm/lvm.conf file so that this command will avoid scanning specific physical volumes. For information on using filters to control which devices are scanned, see Section 5.5, "Controlling LVM Device Scans with Filters" .
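For quick scripting, the pvs command can be limited to exactly the columns you need, and a filter can keep specific devices out of scans. The column names below (pv_name, vg_name, pv_size, pv_free) are standard pvs fields; the filter line is only an example of the kind of entry you might add to the devices section of /etc/lvm/lvm.conf, using an example device name.

# One line per physical volume, selected columns only, no headings
pvs --noheadings -o pv_name,vg_name,pv_size,pv_free

# Example lvm.conf filter: reject /dev/sdc1, accept everything else
# filter = [ "r|/dev/sdc1|", "a|.*|" ]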
[ "pvdisplay --- Physical volume --- PV Name /dev/sdc1 VG Name new_vg PV Size 17.14 GB / not usable 3.40 MB Allocatable yes PE Size (KByte) 4096 Total PE 4388 Free PE 4375 Allocated PE 13 PV UUID Joqlch-yWSj-kuEn-IdwM-01S9-XO8M-mcpsVe", "pvscan PV /dev/sdb2 VG vg0 lvm2 [964.00 MB / 0 free] PV /dev/sdc1 VG vg0 lvm2 [964.00 MB / 428.00 MB free] PV /dev/sdc2 lvm2 [964.84 MB] Total: 3 [2.83 GB] / in use: 2 [1.88 GB] / in no VG: 1 [964.84 MB]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/physvol_display
13.4. Access the Running Application
13.4. Access the Running Application The Hello World quickstart application runs on the following URLs: First Server Instance: http://localhost:8080/jboss-helloworld-jdg Second Server Instance: http://localhost:8180/jboss-helloworld-jdg
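A quick way to confirm that both server instances are serving the quickstart is to request each URL from the command line. The commands below assume the default ports shown above and only check that an HTTP response comes back.

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/jboss-helloworld-jdg
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8180/jboss-helloworld-jdg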
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/access_the_running_application
About
About OpenShift Container Platform 4.12 Introduction to OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/about/index
Chapter 14. The Certificate System Configuration Files
Chapter 14. The Certificate System Configuration Files The primary configuration file for every subsystem is its CS.cfg file. This chapter covers basic information about and rules for editing the CS.cfg file. This chapter also describes some other useful configuration files used by the subsystems, such as password and web services files. 14.1. File and Directory Locations for Certificate System Subsystems Certificate System servers consist of an Apache Tomcat instance, which contains one or more subsystems. Each subsystem consists of a web application, which handles requests for a specific type of PKI function. The available subsystems are: CA, KRA, OCSP, TKS, and TPS. Each instance can contain only one of each type of a PKI subsystem. A subsystem can be installed within a particular instance using the pkispawn command. 14.1.1. Instance-specific Information For instance information for the default instance (pki-tomcat), see Table 2.2, "Tomcat Instance Information" Table 14.1. Certificate Server Port Assignments (Default) Port Type Port Number Notes Secure port 8443 Main port used to access PKI services by end-users, agents, and admins over HTTPS. Insecure port 8080 Used to access the server insecurely for some end-entity functions over HTTP. Used for instance to provide CRLs, which are already signed and therefore need not be encrypted. AJP port 8009 Used to access the server from a front end Apache proxy server through an AJP connection. Redirects to the HTTPS port. Tomcat port 8005 Used by the web server. 14.1.2. CA Subsystem Information This section contains details about the CA subsystem, which is one of the possible subsystems that can be installed as a web application in a Certificate Server instance. Table 14.2. CA Subsystem Information for the Default Instance (pki-tomcat) Setting Value Main directory /var/lib/pki/pki-tomcat/ca/ Configuration directory /var/lib/pki/pki-tomcat/ca/conf/ [a] Configuration file /var/lib/pki/pki-tomcat/ca/conf/CS.cfg Subsystem certificates CA signing certificate OCSP signing certificate (for the CA's internal OCSP service) TLS server certificate Audit log signing certificate Subsystem certificate [b] Security databases /var/lib/pki/pki-tomcat/alias/ [c] Log files /var/log/pki/pki-tomcat/ca/logs/ [d] Install log /var/log/pki/pki-ca-spawn. date .log Uninstall log /var/log/pki/pki-ca-destroy. date .log Audit logs /var/log/pki/pki-tomcat/ca/signedAudit/ Profile files /var/lib/pki/pki-tomcat/ca/profiles/ca/ Email notification templates /var/lib/pki/pki-tomcat/ca/emails/ Web services files Agent services: /var/lib/pki/pki-tomcat/ca/webapps/ca/agent/ Admin services: /var/lib/pki/pki-tomcat/ca/webapps/ca/admin/ End user services: /var/lib/pki/pki-tomcat/ca/webapps/ca/ee/ [a] Aliased to /etc/pki/pki-tomcat/ca/ [b] The subsystem certificate is always issued by the security domain so that domain-level operations that require client authentication are based on this subsystem certificate. [c] Note that all subsystem certificates are stored in the instance security database [d] Aliased to /var/lib/pki/pki-tomcat/ca 14.1.3. KRA Subsystem Information This section contains details about the KRA subsystem, which is one of the possible subsystems that can be installed as a web application in a Certificate Server instance. Table 14.3. 
KRA Subsystem Information for the Default Instance (pki-tomcat) Setting Value Main directory /var/lib/pki/pki-tomcat/kra/ Configuration directory /var/lib/pki/pki-tomcat/kra/conf/ [a] Configuration file /var/lib/pki/pki-tomcat/kra/conf/CS.cfg Subsystem certificates Transport certificate Storage certificate TLS server certificate Audit log signing certificate Subsystem certificate [b] Security databases /var/lib/pki/pki-tomcat/alias/ [c] Log files /var/lib/pki/pki-tomcat/kra/logs/ Install log /var/log/pki/pki-kra-spawn- date .log Uninstall log /var/log/pki/pki-kra-destroy- date .log Audit logs /var/log/pki/pki-tomcat/kra/signedAudit/ Web services files Agent services: /var/lib/pki/pki-tomcat/kra/webapps/kra/agent/ Admin services: /var/lib/pki/pki-tomcat/kra/webapps/kra/admin/ [a] Linked to /etc/pki/pki-tomcat/kra/ [b] The subsystem certificate is always issued by the security domain so that domain-level operations that require client authentication are based on this subsystem certificate. [c] Note that all subsystem certificates are stored in the instance security database 14.1.4. OCSP Subsystem Information This section contains details about the OCSP subsystem, which is one of the possible subsystems that can be installed as a web application in a Certificate Server instance. Table 14.4. OCSP Subsystem Information for the Default Instance (pki-tomcat) Setting Value Main directory /var/lib/pki/pki-tomcat/ocsp/ Configuration directory /var/lib/pki/pki-tomcat/ocsp/conf/ [a] Configuration file /var/lib/pki/pki-tomcat/ocsp/conf/CS.cfg Subsystem certificates Transport certificate Storage certificate TLS server certificate Audit log signing certificate Subsystem certificate [b] Security databases /var/lib/pki/pki-tomcat/alias/ [c] Log files /var/lib/pki/pki-tomcat/ocsp/logs/ Install log /var/log/pki/pki-ocsp-spawn- date .log Uninstall log /var/log/pki/pki-ocsp-destroy- date .log Audit logs /var/log/pki/pki-tomcat/ocsp/signedAudit/ Web services files Agent services: /var/lib/pki/pki-tomcat/ocsp/webapps/ocsp/agent/ Admin services: /var/lib/pki/pki-tomcat/ocsp/webapps/ocsp/admin/ [a] Linked to /etc/pki/pki-tomcat/ocsp/ [b] The subsystem certificate is always issued by the security domain so that domain-level operations that require client authentication are based on this subsystem certificate. [c] Note that all subsystem certificates are stored in the instance security database 14.1.5. TKS Subsystem Information This section contains details about the TKS subsystem, which is one of the possible subsystems that can be installed as a web application in a Certificate Server instance. Table 14.5. 
Every time a subsystem is created either through the initial installation or creating additional instances with (pki-tomcat) Setting Value Main directory /var/lib/pki/pki-tomcat/tks/ Configuration directory /var/lib/pki/pki-tomcat/tks/conf/ [a] Configuration file /var/lib/pki/pki-tomcat/tks/conf/CS.cfg Subsystem certificates Transport certificate Storage certificate TLS server certificate Audit log signing certificate Subsystem certificate [b] Security databases /var/lib/pki/pki-tomcat/alias/ [c] Log files /var/lib/pki/pki-tomcat/tks/logs/ Install log /var/log/pki/pki-tks-spawn- date .log Uninstall log /var/log/pki/pki-tks-destroy- date .log Audit logs /var/log/pki/pki-tomcat/tks/signedAudit/ Web services files Agent services: /var/lib/pki/pki-tomcat/tks/webapps/tks/agent/ Admin services: /var/lib/pki/pki-tomcat/tks/webapps/tks/admin/ [a] Linked to /etc/pki/pki-tomcat/tks/ [b] The subsystem certificate is always issued by the security domain so that domain-level operations that require client authentication are based on this subsystem certificate. [c] Note that all subsystem certificates are stored in the instance security database 14.1.6. TPS Subsystem Information This section contains details about the TPS subsystem, which is one of the possible subsystems that can be installed as a web application in a Certificate Server instance. Table 14.6. TPS Subsystem Information for the Default Instance (pki-tomcat) Setting Value Main directory /var/lib/pki/pki-tomcat/tps Configuration directory /var/lib/pki/pki-tomcat/tps/conf/ [a] Configuration file /var/lib/pki/pki-tomcat/tps/conf/CS.cfg Subsystem certificates Transport certificate Storage certificate TLS server certificate Audit log signing certificate Subsystem certificate [b] Security databases /var/lib/pki/pki-tomcat/alias/ [c] Log files /var/lib/pki/pki-tomcat/tps/logs/ Install log /var/log/pki/pki-tps-spawn- date .log Uninstall log /var/log/pki/pki-tps-destroy- date .log Audit logs /var/log/pki/pki-tomcat/tps/signedAudit/ Web services files Agent services: /var/lib/pki/pki-tomcat/tps/webapps/tps/agent/ Admin services: /var/lib/pki/pki-tomcat/tps/webapps/tps/admin/ [a] Linked to /etc/pki/pki-tomcat/tps/ [b] The subsystem certificate is always issued by the security domain so that domain-level operations that require client authentication are based on this subsystem certificate. [c] Note that all subsystem certificates are stored in the instance security database 14.1.7. Shared Certificate System Subsystem File Locations There are some directories used by or common to all Certificate System subsystem instances for general server operations, listed in Table 2.8, "Subsystem File Locations" . Table 14.7. Subsystem File Locations Directory Location Contents /var/lib/ instance_name Contains the main instance directory, which is the location for user-specific directory locations and customized configuration files, profiles, certificate databases, web files, and other files for the subsystem instance. /usr/share/java/pki Contains Java archive files shared by the Certificate System subsystems. Along with shared files for all subsystems, there are subsystem-specific files in subfolders: pki/ca/ (CA) pki/kra/ (KRA) pki/ocsp/ (OCSP) pki/tks/ (TKS) Not used by the TPS subsystem. /usr/share/pki Contains common files and templates used to create Certificate System instances. 
Along with shared files for all subsystems, there are subsystem-specific files in subfolders: pki/ca/ (CA) pki/kra/ (KRA) pki/ocsp/ (OCSP) pki/tks/ (TKS) pki/tps (TPS) /usr/bin Contains the pkispawn and pkidestroy instance configuration scripts and tools (Java, native, and security) shared by the Certificate System subsystems. /var/lib/tomcat5/common/lib Contains links to Java archive files shared by local Tomcat web applications and shared by the Certificate System subsystems. Not used by the TPS subsystem. /var/lib/tomcat5/server/lib Contains links to Java archive files used by the local Tomcat web server and shared by the Certificate System subsystems. Not used by the TPS subsystem. /usr/shared/pki Contains the Java archive files used by the Tomcat server and applications used by the Certificate System instances. Not used by the TPS subsystem. /usr/lib/httpd/modules /usr/lib64/httpd/modules Contains Apache modules used by the TPS subsystem. Not used by the CA, KRA, OCSP, or TKS subsystems. /usr/lib/mozldap /usr/lib64/mozldap Mozilla LDAP SDK tools used by the TPS subsystem. Not used by the CA, KRA, OCSP, or TKS subsystems.
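Because each subsystem keeps its configuration, logs, and web applications under the paths listed above, a quick way to orient yourself on a default installation is to list those locations directly. The commands below only read files and assume the default pki-tomcat instance with a CA subsystem installed; run them as root on the Certificate System host.

# Main configuration file and certificate profiles for the CA subsystem
ls /var/lib/pki/pki-tomcat/ca/conf/
less /var/lib/pki/pki-tomcat/ca/conf/CS.cfg

# Shared certificate database and the CA audit logs
ls /var/lib/pki/pki-tomcat/alias/
ls /var/log/pki/pki-tomcat/ca/signedAudit/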
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/configfiles
Administration Guide
Administration Guide Red Hat CodeReady Workspaces 2.1 Administering Red Hat CodeReady Workspaces 2.1 Supriya Takkhi Robert Kratky [email protected] Michal Maler [email protected] Fabrice Flore-Thebault [email protected] Yana Hontyk [email protected] Red Hat Developer Group Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/administration_guide/index
Chapter 3. Customizing the Fuse Console branding
Chapter 3. Customizing the Fuse Console branding You can customize the Fuse Console branding information, such as title, logo, and login page information, by using the Fuse Console branding plugin. By default, the Fuse Console branding is defined in the hawtconfig.json file that is located in the Fuse Console WAR file ( eap-install-dir/standalone/deployments/hawtio-wildfly-<version>.war ). When you implement the Fuse Console branding plugin, you can override the default branding with your own custom branding. Procedure Download the branding plugin example from https://github.com/hawtio/hawtio/tree/master/examples/branding-plugin to a local directory of your choice. In an editor of your choice, open the Fuse Console branding plugin's src/main/webapp/plugin/brandingPlugin.js file to customize the Fuse Console branding. You can change the values of the configuration properties listed in Table A.1, "Fuse Console Configuration Properties" . Save your changes. In an editor of your choice, open the Fuse Console branding plugin's pom.xml file and navigate to its <parent> section: Edit the <parent> section as follows: Change the value of the <version> property to match the version of your Fuse on EAP installation. For example, if your Fuse on EAP installation directory name is 2.0.0.fuse-760015 , set the version to 2.0.0.fuse-760015 . Remove the <relativePath>../..</relativePath> line. For example: In a Terminal window, build the branding-plugin project by running the following command: This command creates a branding-plugin.war file in the project's /target folder. Copy the branding-plugin.war file to your EAP installation's standalone/deployments directory. If Fuse is not already running, start it by running the following command: On Linux/Mac OS: ./bin/standalone.sh On Windows: ./bin/standalone.bat In a web browser, open the Fuse Console by using the URL that the start command returned in the previous step (the default URL is http://localhost:8080/hawtio ). Note If you have already run the Fuse Console in a web browser, the branding is stored in the browser's local storage. To use new branding settings, you must clear the browser's local storage.
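The build and deployment steps above condense to a short command sequence. The sketch below assumes the branding-plugin example has already been downloaded and edited, and uses EAP_HOME as a placeholder for your Fuse on EAP installation directory.

cd branding-plugin
mvn clean install
cp target/branding-plugin.war "$EAP_HOME/standalone/deployments/"

# Start Fuse if it is not already running (Linux/Mac OS)
"$EAP_HOME/bin/standalone.sh"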
[ "<parent> <groupId>io.hawt</groupId> <artifactId>project</artifactId> <version>2.9-SNAPSHOT</version> <relativePath>../..</relativePath> </parent>", "<parent> <groupId>io.hawt</groupId> <artifactId>project</artifactId> <version> 2.0.0.fuse-760015</version> </parent>", "mvn clean install" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_jboss_eap_standalone/fuse-console-branding-eap
Chapter 16. Configuring kdump in the web console
Chapter 16. Configuring kdump in the web console You can set up and test the kdump configuration by using the RHEL 8 web console. The web console can enable the kdump service at boot time. With the web console, you can configure the reserved memory for kdump and to select the vmcore saving location in an uncompressed or compressed format. 16.1. Configuring kdump memory usage and target location in web console You can configure the memory reserve for the kdump kernel and also specify the target location to capture the vmcore dump file with the RHEL web console interface. Prerequisites The web console must be installed and accessible. For details, see Installing the web console . Procedure In the web console, open the Kernel dump tab and start the kdump service by setting the Kernel crash dump switch to on. Configure the kdump memory usage in the terminal, for example: Restart the system to apply the changes. In the Kernel dump tab, click Edit at the end of the Crash dump location field. Specify the target directory for saving the vmcore dump file: For a local filesystem, select Local Filesystem from the drop-down menu. For a remote system by using the SSH protocol, select Remote over SSH from the drop-down menu and specify the following fields: In the Server field, enter the remote server address. In the SSH key field, enter the SSH key location. In the Directory field, enter the target directory. For a remote system by using the NFS protocol, select Remote over NFS from the drop-down menu and specify the following fields: In the Server field, enter the remote server address. In the Export field, enter the location of the shared folder of an NFS server. In the Directory field, enter the target directory. Note You can reduce the size of the vmcore file by selecting the Compression checkbox. Optional: Display the automation script by clicking View automation script . A window with the generated script opens. You can browse a shell script and an Ansible playbook generation options tab. Optional: Copy the script by clicking Copy to clipboard . You can use this script to apply the same configuration on multiple machines. Verification Click Test configuration . Click Crash system under Test kdump settings . Warning When you start the system crash, the kernel operation stops and results in a system crash with data loss. Additional resources Supported kdump targets
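As a follow-up to the procedure above, you can confirm from a shell that the crashkernel reservation set with grubby took effect after the reboot and that the kdump service is running. These checks use only standard RHEL 8 tools and complement the Test configuration button in the web console.

# The kernel command line should now contain the crashkernel= parameter
grep -o 'crashkernel=[^ ]*' /proc/cmdline

# The kdump service should be enabled and active
systemctl is-enabled kdump.service
systemctl is-active kdump.service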
[ "sudo grubby --update-kernel ALL --args crashkernel=512M" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/configuring-kdump-in-the-web-console_managing-monitoring-and-updating-the-kernel
40.2.3. Separating Kernel and User-space Profiles
40.2.3. Separating Kernel and User-space Profiles By default, kernel mode and user mode information is gathered for each event. To configure OProfile not to count events in kernel mode for a specific counter, execute the following command: Execute the following command to start profiling kernel mode for the counter again: To configure OProfile not to count events in user mode for a specific counter, execute the following command: Execute the following command to start profiling user mode for the counter again: When the OProfile daemon writes the profile data to sample files, it can separate the kernel and library profile data into separate sample files. To configure how the daemon writes to sample files, execute the following command as root: <choice> can be one of the following: none - do not separate the profiles (default) library - generate per-application profiles for libraries kernel - generate per-application profiles for the kernel and kernel modules all - generate per-application profiles for libraries and per-application profiles for the kernel and kernel modules If --separate=library is used, the sample file name includes the name of the executable as well as the name of the library.
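Putting the options together, a concrete invocation might restrict one counter to user-mode samples and split the resulting profiles per library. The event name, sample rate, and unit mask below are placeholders for values that are valid on your processor; run opcontrol --list-events to see what your CPU supports.

# Count CPU_CLK_UNHALTED every 100000 events with unit mask 0,
# kernel mode off (0), user mode on (1)
opcontrol --event=CPU_CLK_UNHALTED:100000:0:0:1

# Write separate sample files for each library used by an application
opcontrol --separate=library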
[ "opcontrol --event= <event-name> : <sample-rate> : <unit-mask> :0", "opcontrol --event= <event-name> : <sample-rate> : <unit-mask> :1", "opcontrol --event= <event-name> : <sample-rate> : <unit-mask> : <kernel> :0", "opcontrol --event= <event-name> : <sample-rate> : <unit-mask> : <kernel> :1", "opcontrol --separate= <choice>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/configuring_oprofile-separating_kernel_and_user_space_profiles
Chapter 27. External Array Management (libStorageMgmt)
Chapter 27. External Array Management (libStorageMgmt) Red Hat Enterprise Linux 7 ships with a new external array management library called libStorageMgmt . 27.1. Introduction to libStorageMgmt The libStorageMgmt library is a storage array independent Application Programming Interface (API). As a developer, you can use this API to manage different storage arrays and leverage the hardware accelerated features. This library is used as a building block for other higher level management tools and applications. End system administrators can also use it as a tool to manually manage storage and automate storage management tasks with the use of scripts. With the libStorageMgmt library, you can perform the following operations: List storage pools, volumes, access groups, or file systems. Create and delete volumes, access groups, file systems, or NFS exports. Grant and remove access to volumes, access groups, or initiators. Replicate volumes with snapshots, clones, and copies. Create and delete access groups and edit members of a group. Server resources such as CPU and interconnect bandwidth are not utilized because the operations are all done on the array. The libstoragemgmt package provides: A stable C and Python API for client application and plug-in developers. A command-line interface that utilizes the library ( lsmcli ). A daemon that executes the plug-in ( lsmd ). A simulator plug-in that allows the testing of client applications ( sim ). Plug-in architecture for interfacing with arrays. Warning This library and its associated tool have the ability to destroy any and all data located on the arrays it manages. It is highly recommended to develop and test applications and scripts against the storage simulator plug-in to remove any logic errors before working with production systems. Testing applications and scripts on actual non-production hardware before deploying to production is also strongly encouraged if possible. The libStorageMgmt library in Red Hat Enterprise Linux 7 adds a default udev rule to handle the REPORTED LUNS DATA HAS CHANGED unit attention. When a storage configuration change has taken place, one of several Unit Attention ASC/ASCQ codes reports the change. A uevent is then generated and is rescanned automatically with sysfs . The file /lib/udev/rules.d/90-scsi-ua.rules contains example rules to enumerate other events that the kernel can generate. The libStorageMgmt library uses a plug-in architecture to accommodate differences in storage arrays. For more information on libStorageMgmt plug-ins and how to write them, see the Red Hat Developer Guide .
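Before pointing lsmcli at production hardware, you can exercise it against the simulator plug-in mentioned above. The commands below are a sketch: they assume the lsmd daemon is started through the libstoragemgmt service and that the sim:// URI selects the simulator; option names can differ slightly between versions, so check lsmcli --help on your system.

# Start the daemon that executes the plug-ins
systemctl start libstoragemgmt.service

# List the simulated systems and pools
lsmcli -u sim:// list --type SYSTEMS
lsmcli -u sim:// list --type POOLS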
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ch-libstoragemgmt
7.3. Pools
7.3. Pools Virtual machine pools allow for rapid provisioning of numerous identical virtual machines to users as desktops. Users who have been granted permission to access and use virtual machines from a pool receive an available virtual machine based on their position in a queue of requests. Virtual machines in a pool do not allow data persistence; each time a virtual machine is assigned from a pool, it is allocated in its base state. This is ideally suited to be used in situations where user data is stored centrally. Virtual machine pools are created from a template. Each virtual machine in a pool uses the same backing read-only image, and uses a temporary copy-on-write image to hold changed and newly generated data. Virtual machines in a pool are different from other virtual machines in that the copy-on-write layer that holds user-generated and -changed data is lost at shutdown. The implication of this is that a virtual machine pool requires no more storage than the template that backs it, plus some space for data generated or changed during use. Virtual machine pools are an efficient way to provide computing power to users for some tasks without the storage cost of providing each user with a dedicated virtual desktop. Example 7.1. Example Pool Usage A technical support company employs 10 help desk staff. However, only five are working at any given time. Instead of creating ten virtual machines, one for each help desk employee, a pool of five virtual machines can be created. Help desk employees allocate themselves a virtual machine at the beginning of their shift and return it to the pool at the end.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/pools1
Chapter 2. Enhancements
Chapter 2. Enhancements This section describes a highlighted set of enhancements and new features in AMQ Broker 7.8. For a complete list of enhancements in the release, see AMQ Broker 7.8.0 Enhancements . New version of AMQ Management Console AMQ Broker 7.8 includes a new version of AMQ Management Console. For more information on using the console, see Using AMQ Management Console in Managing AMQ Broker . New database certifications AMQ Broker 7.8 adds support for PostgreSQL 11.5 and MySQL 8. For complete information about the databases supported by different versions of AMQ Broker, see Red Hat AMQ 7 Supported Configurations . Federating address and queues In AMQ Broker 7.8 you can configure federation of addresses and queues. Federation enables transmission of messages between brokers, without requiring the brokers to be in a common cluster. For example, federation is suitable for reliably sending messages from one cluster to another. This transmission might be across a Wide Area Network (WAN), Regions of a cloud infrastructure, or over the Internet. For more information, see Federating addresses and queues in Configuring AMQ Broker . Disabling queues In AMQ Broker 7.8, you can disable queues that you have defined in your broker configuration. For example, you might want to define a queue so that clients can subscribe to it, but you are not ready to use the queue for message routing. Alternatively, you might want to stop message flow to a queue, but still keep clients bound to the queue. In these cases, you can disable the queue. For more information, see Disabling queues in Configuring AMQ Broker . Performance improvements for vertical scaling of queues AMQ Broker 7.8 adds scalability improvements that improve broker performance when a deployment automatically scales to large numbers of queues. This improvement applies to all supported protocols, but is particularly beneficial for MQTT, which is often used in large-scale deployments. This performance enhancement is most noticeable for broker deployments with very large numbers of queues, for example, 50,000 or more. Updating running Operator-based broker deployments with address settings In AMQ Broker 7.8, you can now add address settings to an Operator-based broker deployment that is already running. Support for including address settings in the Custom Resource (CR) instance for a broker deployment was added in AMQ Broker 7.7. However, in 7.7, you needed to configure the address settings when creating the broker deployment for the first time. For more information on configuring addresses, queues, and address settings, see Configuring addresses and queues for Operator-based broker deployments in Deploying AMQ Broker on OpenShift . Additions to the base audit logger The base audit logger now logs when you pause and resume addresses. To learn how to configure logging, see Logging in Configuring AMQ Broker . New metric for broker address memory usage percentage In 7.8, the Prometheus metrics plugin for AMQ Broker exports a new metric named artemis_address_memory_usage_percentage . This metric is the total address memory used by all addresses on a broker as a percentage of the value of the global-max-size parameter. To learn how to configure the Prometheus metrics plugin, see Monitoring broker runtime metrics in Managing AMQ Broker . Improved configuration of diverts In 7.8, if you use AMQ Management Console or the management API to configure a runtime divert on a live broker, the divert is automatically propagated to the backup broker. 
This was not the case in previous releases. Specifying a custom Init Container image The latest version of the Operator for 7.8 uses a specialized container called an Init Container to generate the broker configuration. By default, the Operator uses a built-in Init Container image. However, you can also specify a custom Init Container image that modifies or adds to the configuration created by the built-in Init Container. For more information, see Specifying a custom Init Container image in Deploying AMQ Broker on OpenShift . Operator support for multiple container platforms In 7.8, the AMQ Broker Operator supports the following container platforms: OpenShift Container Platform OpenShift Container Platform on IBM Z OpenShift Container Platform on IBM Power Systems Operator support for OpenShift Container Platform on IBM Power Systems is new in 7.8. A version of the Operator for AMQ Broker 7.5 supports OpenShift Container Platform on IBM Z. In 7.5, you need to install and deploy a separate version of the Operator for each supported platform. In 7.8, you need to install only a single version, which supports all three container platforms. Based on the container platform that you are using, the Operator automatically chooses a broker container image to use in your deployment. To learn how to install the latest version of the Operator, see the following sections in Deploying AMQ Broker on OpenShift : Installing the Operator using the CLI Installing the Operator using the Operator Lifecycle Manager Automatic selection of broker container image by Operator In the latest version of the Operator for 7.8, when you use a Custom Resource (CR) instance to create a broker deployment, you no longer need to explicitly specify a broker container image name in the CR. Instead, when you deploy the CR, the Operator automatically determines the appropriate broker container image to use. This also applies to the Init Container that generates the broker configuration. To learn more, see How the Operator chooses container images in Deploying AMQ Broker on OpenShift . RHEL 8 Operator An Operator named Red Hat Integration - AMQ Broker for RHEL 8 (Multiarch) is available on x86_64 platforms, IBM Z, and IBM Power Systems. It supports the following channels: 7.x - This channel will update to 7.9 when available. 7.8.x - This is the Long Term Support (LTS) channel. To determine which Operator to choose, see the Red Hat Enterprise Linux Container Compatibility Matrix . Operator channels In the latest version of the Operator for 7.8, the following new update channels are available for the Red Hat Integration - AMQ Broker Operator: 7.x - This is equivalent to the current channel, which is now deprecated. 7.8.x - This is the Long Term Support (LTS) channel. Operator versioning In the latest version of the Operator for 7.8, Operators now adopt the same versioning scheme as AMQ Broker. For example, the Operator release on x86_64 platforms, which was version 0.19 in OperatorHub, is updated to version 7.8.2-opr-1 . Documentation updates The AMQ Broker documentation is updated to provide instructions regarding the new Operators and channels and support for IBM Z and IBM Power Systems. Additional resources For a complete list of enhancements in the AMQ Broker 7.8 release, see AMQ Broker 7.8 Enhancements .
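As a follow-up to the metrics enhancement described above, a quick smoke test is to scrape the broker's metrics endpoint and look for the new metric by name. The endpoint address below is an assumption rather than something taken from this document; substitute the host and port at which your broker exposes Prometheus metrics, as described in Monitoring broker runtime metrics in Managing AMQ Broker.

# Replace <broker-host>:<metrics-port> with your broker's metrics endpoint
curl -s http://<broker-host>:<metrics-port>/metrics | grep artemis_address_memory_usage_percentage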
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_red_hat_amq_broker_7.8/enhancements
7.166. ppp
7.166. ppp 7.166.1. RHBA-2015:0685 - ppp bug fix and enhancement update Updated ppp packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The ppp packages contain the Point-to-Point Protocol (PPP) daemon and documentation for PPP support. The PPP protocol provides a method for transmitting datagrams over serial point-to-point links. PPP is usually used to dial in to an Internet Service Provider (ISP) or other organization over a modem and phone line. Bug Fixes BZ# 906912 Previously, when the radius client configuration file contained an option not recognized by the PPP radius plug-in, an error was reported. To fix this bug, the parser for the configuration file has been amended to skip unrecognized options. Now, unknown options are skipped without reporting errors. BZ# 922769 Prior to this update, the ppp package incorrectly required the logrotate package. Consequently, the logrotate package could not be easily uninstalled. To fix this bug, the hard dependency on the logrotate package has been removed, and it is now possible to easily uninstall the logrotate package. BZ# 1197792 Previously, the Point-to-Point Protocol daemon (PPPD) terminated unexpectedly when the pppol2tp plug-in was used, and the PPPD command line contained a dump option. To fix this bug, the initialization of the variable containing textual representation of the file descriptor passed to the pppol2tp plug-in has been corrected. Now, the variable initializes properly, and PPPD no longer crashes in this scenario. Enhancement BZ# 815128 The ppp package now includes two new plug-ins (pppol2tp.so and openl2tp.so) that allow the use of kernel mode l2tp in dependent packages. As a result, it is now possible to leverage in-kernel pppo-l2tp protocol implementation by xl2tpd and openl2tpd. Users of ppp are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-ppp
Chapter 4. Handling a machine configuration for hosted control planes
Chapter 4. Handling a machine configuration for hosted control planes In a standalone OpenShift Container Platform cluster, a machine config pool manages a set of nodes. You can handle a machine configuration by using the MachineConfigPool custom resource (CR). Tip You can reference any machineconfiguration.openshift.io resources in the nodepool.spec.config field of the NodePool CR. In hosted control planes, the MachineConfigPool CR does not exist. A node pool contains a set of compute nodes. You can handle a machine configuration by using node pools. 4.1. Configuring node pools for hosted control planes On hosted control planes, you can configure node pools by creating a MachineConfig object inside of a config map in the management cluster. Procedure To create a MachineConfig object inside of a config map in the management cluster, enter the following information: apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: <machineconfig_name> spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:... mode: 420 overwrite: true path: USD{PATH} 1 1 Sets the path on the node where the MachineConfig object is stored. After you add the object to the config map, you can apply the config map to the node pool as follows: USD oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace> apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: # ... name: nodepool-1 namespace: clusters # ... spec: config: - name: <configmap_name> 1 # ... 1 Replace <configmap_name> with the name of your config map. 4.2. Referencing the kubelet configuration in node pools To reference your kubelet configuration in node pools, you add the kubelet configuration in a config map and then apply the config map in the NodePool resource. Procedure Add the kubelet configuration inside of a config map in the management cluster by entering the following information: Example ConfigMap object with the kubelet configuration apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> 1 namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: <kubeletconfig_name> 2 spec: kubeletConfig: registerWithTaints: - key: "example.sh/unregistered" value: "true" effect: "NoExecute" 1 Replace <configmap_name> with the name of your config map. 2 Replace <kubeletconfig_name> with the name of the KubeletConfig resource. Apply the config map to the node pool by entering the following command: USD oc edit nodepool <nodepool_name> --namespace clusters 1 1 Replace <nodepool_name> with the name of your node pool. Example NodePool resource configuration apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: # ... name: nodepool-1 namespace: clusters # ... spec: config: - name: <configmap_name> 1 # ... 1 Replace <configmap_name> with the name of your config map. 4.3. Configuring node tuning in a hosted cluster To set node-level tuning on the nodes in your hosted cluster, you can use the Node Tuning Operator. In hosted control planes, you can configure node tuning by creating config maps that contain Tuned objects and referencing those config maps in your node pools. Procedure Create a config map that contains a valid tuned manifest, and reference the manifest in a node pool. 
In the following example, a Tuned manifest defines a profile that sets vm.dirty_ratio to 55 on nodes that contain the tuned-1-node-label node label with any value. Save the following ConfigMap manifest in a file named tuned-1.yaml : apiVersion: v1 kind: ConfigMap metadata: name: tuned-1 namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: tuned-1 namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift profile include=openshift-node [sysctl] vm.dirty_ratio="55" name: tuned-1-profile recommend: - priority: 20 profile: tuned-1-profile Note If you do not add any labels to an entry in the spec.recommend section of the Tuned spec, node-pool-based matching is assumed, so the highest priority profile in the spec.recommend section is applied to nodes in the pool. Although you can achieve more fine-grained node-label-based matching by setting a label value in the Tuned .spec.recommend.match section, node labels will not persist during an upgrade unless you set the .spec.management.upgradeType value of the node pool to InPlace . Create the ConfigMap object in the management cluster: USD oc --kubeconfig="USDMGMT_KUBECONFIG" create -f tuned-1.yaml Reference the ConfigMap object in the spec.tuningConfig field of the node pool, either by editing a node pool or creating one. In this example, assume that you have only one NodePool , named nodepool-1 , which contains 2 nodes. apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: ... name: nodepool-1 namespace: clusters ... spec: ... tuningConfig: - name: tuned-1 status: ... Note You can reference the same config map in multiple node pools. In hosted control planes, the Node Tuning Operator appends a hash of the node pool name and namespace to the name of the Tuned CRs to distinguish them. Outside of this case, do not create multiple TuneD profiles of the same name in different Tuned CRs for the same hosted cluster. Verification Now that you have created the ConfigMap object that contains a Tuned manifest and referenced it in a NodePool , the Node Tuning Operator syncs the Tuned objects into the hosted cluster. You can verify which Tuned objects are defined and which TuneD profiles are applied to each node. List the Tuned objects in the hosted cluster: USD oc --kubeconfig="USDHC_KUBECONFIG" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator Example output NAME AGE default 7m36s rendered 7m36s tuned-1 65s List the Profile objects in the hosted cluster: USD oc --kubeconfig="USDHC_KUBECONFIG" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator Example output NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 tuned-1-profile True False 7m43s nodepool-1-worker-2 tuned-1-profile True False 7m14s Note If no custom profiles are created, the openshift-node profile is applied by default. To confirm that the tuning was applied correctly, start a debug shell on a node and check the sysctl values: USD oc --kubeconfig="USDHC_KUBECONFIG" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio Example output vm.dirty_ratio = 55 4.4. Deploying the SR-IOV Operator for hosted control planes Important Hosted control planes on the AWS platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . After you configure and deploy your hosting service cluster, you can create a subscription to the SR-IOV Operator on a hosted cluster. The SR-IOV pod runs on worker machines rather than the control plane. Prerequisites You must configure and deploy the hosted cluster on AWS. For more information, see Configuring the hosting cluster on AWS (Technology Preview) . Procedure Create a namespace and an Operator group: apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator Create a subscription to the SR-IOV Operator: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subsription namespace: openshift-sriov-network-operator spec: channel: stable name: sriov-network-operator config: nodeSelector: node-role.kubernetes.io/worker: "" source: redhat-operators sourceNamespace: openshift-marketplace Verification To verify that the SR-IOV Operator is ready, run the following command and view the resulting output: USD oc get csv -n openshift-sriov-network-operator Example output NAME DISPLAY VERSION REPLACES PHASE sriov-network-operator.4.14.0-202211021237 SR-IOV Network Operator 4.14.0-202211021237 sriov-network-operator.4.14.0-202210290517 Succeeded To verify that the SR-IOV pods are deployed, run the following command: USD oc get pods -n openshift-sriov-network-operator
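If the Operator does not reach the Succeeded phase, a common first check is that the Subscription resolved against a valid catalog source. The commands below only read cluster state; the resource names match the manifests above.

# Inspect the Subscription and its resolution status
oc get subscription -n openshift-sriov-network-operator
oc describe subscription sriov-network-operator-subsription -n openshift-sriov-network-operator

# Confirm that the referenced catalog source exists in openshift-marketplace
oc get catalogsource -n openshift-marketplace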
[ "apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: <machineconfig_name> spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data: mode: 420 overwrite: true path: USD{PATH} 1", "oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>", "apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: config: - name: <configmap_name> 1", "apiVersion: v1 kind: ConfigMap metadata: name: <configmap_name> 1 namespace: clusters data: config: | apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: <kubeletconfig_name> 2 spec: kubeletConfig: registerWithTaints: - key: \"example.sh/unregistered\" value: \"true\" effect: \"NoExecute\"", "oc edit nodepool <nodepool_name> --namespace clusters 1", "apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: config: - name: <configmap_name> 1", "apiVersion: v1 kind: ConfigMap metadata: name: tuned-1 namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: tuned-1 namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift profile include=openshift-node [sysctl] vm.dirty_ratio=\"55\" name: tuned-1-profile recommend: - priority: 20 profile: tuned-1-profile", "oc --kubeconfig=\"USDMGMT_KUBECONFIG\" create -f tuned-1.yaml", "apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: tuningConfig: - name: tuned-1 status:", "oc --kubeconfig=\"USDHC_KUBECONFIG\" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME AGE default 7m36s rendered 7m36s tuned-1 65s", "oc --kubeconfig=\"USDHC_KUBECONFIG\" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 tuned-1-profile True False 7m43s nodepool-1-worker-2 tuned-1-profile True False 7m14s", "oc --kubeconfig=\"USDHC_KUBECONFIG\" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio", "vm.dirty_ratio = 55", "apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subsription namespace: openshift-sriov-network-operator spec: channel: stable name: sriov-network-operator config: nodeSelector: node-role.kubernetes.io/worker: \"\" source: s/qe-app-registry/redhat-operators sourceNamespace: openshift-marketplace", "oc get csv -n openshift-sriov-network-operator", "NAME DISPLAY VERSION REPLACES PHASE sriov-network-operator.4.14.0-202211021237 SR-IOV Network Operator 4.14.0-202211021237 sriov-network-operator.4.14.0-202210290517 Succeeded", "oc get pods -n openshift-sriov-network-operator" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/hosted_control_planes/handling-a-machine-configuration-for-hosted-control-planes
Chapter 16. Upgrading to OpenShift Data Foundation
Chapter 16. Upgrading to OpenShift Data Foundation 16.1. Overview of the OpenShift Data Foundation update process OpenShift Container Storage, based on the open source Ceph technology, has expanded its scope and foundational role in a containerized, hybrid cloud environment since its introduction. It complements existing storage, in addition to other data-related hardware and software, making them rapidly attachable, accessible, and scalable in a hybrid cloud environment. To better reflect this foundational and infrastructure role, OpenShift Container Storage is now OpenShift Data Foundation . Important You can perform the upgrade process for OpenShift Data Foundation version 4.9 from OpenShift Container Storage version 4.8 only by installing the OpenShift Data Foundation operator from OpenShift Container Platform OperatorHub. In future releases, you can upgrade Red Hat OpenShift Data Foundation, either between minor releases like 4.9 and 4.x, or between batch updates like 4.9.0 and 4.9.1, by enabling automatic updates (if not already enabled during operator installation) or by performing manual updates. You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation. To prepare a disconnected environment for updates , see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update Red Hat OpenShift Data Foundation as well as Local Storage Operator when in use. Update Red Hat OpenShift Container Storage operator version 4.8 to version 4.9 by installing the Red Hat OpenShift Data Foundation operator from the OperatorHub on OpenShift Container Platform web console. See Updating Red Hat OpenShift Container Storage 4.8 to Red Hat OpenShift Data Foundation 4.9 . Update Red Hat OpenShift Data Foundation from 4.9.x to 4.9.y . See Updating Red Hat OpenShift Data Foundation 4.9.x to 4.9.y . For updating external mode deployments , you must also perform the steps from section Updating the OpenShift Data Foundation external secret . If you use local storage: Update the Local Storage operator . See Checking for Local Storage Operator deployments if you are unsure. Perform post-update configuration changes for clusters backed by local storage. See Post-update configuration for clusters backed by local storage for details. Update considerations Review the following important considerations before you begin. Red Hat recommends using the same version of Red Hat OpenShift Container Platform with Red Hat OpenShift Data Foundation. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. The flexible scaling feature is available only in new deployments of Red Hat OpenShift Data Foundation versions 4.7 and later. Storage clusters upgraded from a previous version to version 4.7 or later do not support flexible scaling. For more information, see Flexible scaling of OpenShift Container Storage cluster in the New features section of 4.7 Release Notes . 16.2. 
Updating Red Hat OpenShift Container Storage 4.8 to Red Hat OpenShift Data Foundation 4.9 This chapter helps you to upgrade from Red Hat OpenShift Container Storage 4.8 to Red Hat OpenShift Data Foundation 4.9 for all deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what is not. For Internal and Internal-attached deployments, upgrading OpenShift Container Storage upgrades all OpenShift Container Storage services including the backend Ceph Storage cluster. For External mode deployments, upgrading OpenShift Container Storage only upgrades the OpenShift Container Storage service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. We recommend upgrading RHCS along with OpenShift Container Storage in order to get new feature support, security fixes, and other bug fixes. Because there is no strong dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first, followed by the RHCS upgrade, or vice versa. See the solution article to learn more about Red Hat Ceph Storage releases. Important Upgrading to 4.9 directly from any version older than 4.8 is unsupported. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.9.X, see Updating Clusters . Ensure that the OpenShift Container Storage cluster is healthy and data is resilient. Navigate to Storage Overview and check both Block and File and Object tabs for the green tick on the status card. A green tick indicates that the storage cluster, object service and data resiliency are all healthy. Ensure that all OpenShift Container Storage Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to OperatorHub . Search for OpenShift Data Foundation using the Filter by keyword box and click the OpenShift Data Foundation tile. Click Install . On the Install Operator page, click Install . Wait for the Operator installation to complete. Note We recommend using all default settings. Changing them may result in unexpected behavior. Alter them only if you are aware of the consequences. Verification steps Verify that the page displays the Succeeded message along with the option to Create StorageSystem . Note For the upgraded clusters, since the storage system is automatically created, do not create it again. On the notification popup, click the Refresh web console link to reflect the OpenShift Data Foundation changes in the OpenShift console. Verify the state of the pods on the OpenShift Web Console. Click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Wait for all the pods in the openshift-storage namespace to restart and reach Running state. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage OpenShift Data Foundation Storage Systems tab and then click on the storage system name. 
Check both Block and File and Object tabs for the green tick on the status card. A green tick indicates that the storage cluster, object service and data resiliency are all healthy. Important In case the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret . Additional Resources If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide . 16.3. Updating Red Hat OpenShift Data Foundation 4.9.x to 4.9.y This chapter helps you to upgrade between z-stream releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what is not. For Internal and Internal-attached deployments, upgrading OpenShift Container Storage upgrades all OpenShift Container Storage services including the backend Ceph Storage cluster. For External mode deployments, upgrading OpenShift Container Storage only upgrades the OpenShift Container Storage service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. Hence, we recommend upgrading RHCS along with OpenShift Container Storage in order to get new feature support, security fixes, and other bug fixes. Because there is no strong dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first, followed by the RHCS upgrade, or vice versa. See the solution article to learn more about Red Hat Ceph Storage releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic . If the update strategy is set to Manual , use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.9.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage OpenShift Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service and data resiliency are all healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select the openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. 
If the Upgrade Status shows requires approval , click the requires approval link. On the InstallPlan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . Verification steps Verify that the Version below the OpenShift Data Foundation name and the operator status show the latest version. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and the status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage OpenShift Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service and data resiliency are all healthy. Important In case the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . If the verification steps fail, contact Red Hat Support . 16.4. Changing the update approval strategy To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy set to Automatic . Changing the update approval strategy to Manual requires manual approval for each upgrade. Procedure Navigate to Operators Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Go to the Subscription tab. Click the pencil icon to change the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it.
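The web console checks above can optionally be cross-checked from the CLI. A minimal sketch, assuming the default openshift-storage namespace; the csv listing shows the installed operator version and phase, and the pod listing confirms that all pods are Running:
oc get csv -n openshift-storage
oc get pods -n openshift-storage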
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/upgrading-your-cluster_rhodf
Chapter 3. Distribution selection
Chapter 3. Distribution selection Red Hat provides several distributions of Red Hat build of OpenJDK. This module helps you select the distribution that is right for your needs. All distributions of OpenJDK contain the JDK Flight Recorder (JFR) feature. This feature produces diagnostics and profiling data that can be consumed by other applications, such as JDK Mission Control (JMC). Red Hat build of OpenJDK RPMs for RHEL 8 RPM distributions of Red Hat build of OpenJDK 8, Red Hat build of OpenJDK 11, Red Hat build of OpenJDK 17, and Red Hat build of OpenJDK 21 for RHEL 8. Red Hat build of OpenJDK 8 JRE portable archive for RHEL Portable Red Hat build of OpenJDK 8 JRE archive distribution for RHEL 7 and 8 hosts. Red Hat build of OpenJDK 8 portable archive for RHEL Portable Red Hat build of OpenJDK 8 archive distribution for RHEL 7 and 8 hosts. Red Hat build of OpenJDK 11 JRE portable archive for RHEL Portable Red Hat build of OpenJDK 11 JRE archive distribution for RHEL 7 and 8 hosts. Red Hat build of OpenJDK 11 portable archive for RHEL Portable Red Hat build of OpenJDK 11 archive distribution for RHEL 7 and 8 hosts. Red Hat build of OpenJDK 17 JRE portable archive for RHEL Portable Red Hat build of OpenJDK 17 JRE archive distribution for RHEL 7 and 8 hosts. Red Hat build of OpenJDK 17 portable archive for RHEL Portable Red Hat build of OpenJDK 17 archive distribution for RHEL 7 and 8 hosts. Red Hat build of OpenJDK 21 JRE portable archive for RHEL Portable Red Hat build of OpenJDK 21 JRE archive distribution for RHEL 8 and 9 hosts. Red Hat build of OpenJDK 21 portable archive for RHEL Portable Red Hat build of OpenJDK 21 archive distribution for RHEL 8 and 9 hosts. Red Hat build of OpenJDK archive for Windows Red Hat build of OpenJDK 8, Red Hat build of OpenJDK 11, Red Hat build of OpenJDK 17, and Red Hat build of OpenJDK 21 distributions for all supported Windows hosts. Recommended for cases where multiple Red Hat build of OpenJDK versions may be installed on a host. This distribution includes the following: Java Web Start Mission Control Red Hat build of OpenJDK installers for Windows Red Hat build of OpenJDK 8, Red Hat build of OpenJDK 11, Red Hat build of OpenJDK 17, and Red Hat build of OpenJDK 21 MSI installers for all supported Windows hosts. Optionally installs Java Web Start and sets environment variables. Suitable for system-wide installs of a single Red Hat build of OpenJDK version. Additional resources For more information about the JDK Flight Recorder (JFR), see Introduction to JDK Flight Recorder . For more information about JDK Mission Control (JMC), see Introduction to JDK Mission Control . JDK Mission Control is available for RHEL with Red Hat Software Collections 3.2 . Where is JDK Mission Control (JMC) in JDK 21? Revised on 2024-05-09 14:48:08 UTC
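As an illustration of using the RPM distribution, Red Hat build of OpenJDK 21 can typically be installed on a RHEL 8 host with yum and then verified from the command line; the package name shown assumes the standard AppStream naming:
sudo yum install java-21-openjdk-devel
java -version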
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/getting_started_with_red_hat_build_of_openjdk_21/openjdk-distribution-selection
Chapter 6. Managing Jupyter notebook servers
Chapter 6. Managing Jupyter notebook servers 6.1. Accessing the Jupyter administration interface You can use the Jupyter administration interface to control notebook servers in your Red Hat OpenShift AI environment. Prerequisite You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Procedure To access the Jupyter administration interface from OpenShift AI, perform the following actions: In OpenShift AI, in the Applications section of the left menu, click Enabled . Locate the Jupyter tile and click Launch application . On the page that opens when you launch Jupyter, click the Administration tab. The Administration page opens. To access the Jupyter administration interface from JupyterLab, perform the following actions: Click File Hub Control Panel . On the page that opens in OpenShift AI, click the Administration tab. The Administration page opens. Verification You can see the Jupyter administration interface. 6.2. Starting notebook servers owned by other users OpenShift AI administrators can start a notebook server for another existing user from the Jupyter administration interface. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. You have launched the Jupyter application, as described in Starting a Jupyter notebook server . Procedure On the page that opens when you launch Jupyter, click the Administration tab. On the Administration tab, perform the following actions: In the Users section, locate the user whose notebook server you want to start. Click Start server beside the relevant user. Complete the Start a notebook server page. Optional: Select the Start server in current tab checkbox if necessary. Click Start server . After the server starts, you see one of the following behaviors: If you previously selected the Start server in current tab checkbox, the JupyterLab interface opens in the current tab of your web browser. If you did not previously select the Start server in current tab checkbox, the Starting server dialog box prompts you to open the server in a new browser tab or in the current tab. The JupyterLab interface opens according to your selection. Verification The JupyterLab interface opens. 6.3. Accessing notebook servers owned by other users OpenShift AI administrators can access notebook servers that are owned by other users to correct configuration errors or to help them troubleshoot problems with their environment. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. You have launched the Jupyter application, as described in Starting a Jupyter notebook server . The notebook server that you want to access is running. Procedure On the page that opens when you launch Jupyter, click the Administration tab. On the Administration page, perform the following actions: In the Users section, locate the user that the notebook server belongs to. Click View server beside the relevant user. On the Notebook server control panel page, click Access notebook server . Verification The user's notebook server opens in JupyterLab. 6.4. Stopping notebook servers owned by other users OpenShift AI administrators can stop notebook servers that are owned by other users to reduce resource consumption on the cluster, or as part of removing a user and their resources from the cluster. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. 
You have launched the Jupyter application, as described in Starting a Jupyter notebook server . The notebook server that you want to stop is running. Procedure On the page that opens when you launch Jupyter, click the Administration tab. Stop one or more servers. If you want to stop one or more specific servers, perform the following actions: In the Users section, locate the user that the notebook server belongs to. To stop the notebook server, perform one of the following actions: Click the action menu ( ... ) beside the relevant user and select Stop server . Click View server beside the relevant user and then click Stop notebook server . The Stop server dialog box appears. Click Stop server . If you want to stop all servers, perform the following actions: Click the Stop all servers button. Click OK to confirm stopping all servers. Verification The Stop server link beside each server changes to a Start server link when the notebook server has stopped. 6.5. Stopping idle notebooks You can reduce resource usage in your OpenShift AI deployment by stopping notebook servers that have been idle (without logged in users) for a period of time. This is useful when resource demand in the cluster is high. By default, idle notebooks are not stopped after a specific time limit. Note If you have configured your cluster settings to disconnect all users from a cluster after a specified time limit, then this setting takes precedence over the idle notebook time limit. Users are logged out of the cluster when their session duration reaches the cluster-wide time limit. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Procedure From the OpenShift AI dashboard, click Settings Cluster settings . Under Stop idle notebooks , select Stop idle notebooks after . Enter a time limit, in hours and minutes , for when idle notebooks are stopped. Click Save changes . Verification The notebook-controller-culler-config ConfigMap, located in the redhat-ods-applications project on the Workloads ConfigMaps page, contains the following culling configuration settings: ENABLE_CULLING : Specifies if the culling feature is enabled or disabled (this is false by default). IDLENESS_CHECK_PERIOD : The polling frequency to check for a notebook's last known activity (in minutes). CULL_IDLE_TIME : The maximum allotted time to scale an inactive notebook to zero (in minutes). Idle notebooks stop at the time limit that you set. 6.6. Adding notebook pod tolerations If you want to dedicate certain machine pools to only running notebook pods, you can allow notebook pods to be scheduled on specific nodes by adding a toleration. Taints and tolerations allow a node to control which pods should (or should not) be scheduled on them. For more information, see Understanding taints and tolerations . This capability is useful if you want to make sure that notebook servers are placed on nodes that can handle their needs. By preventing other workloads from running on these specific nodes, you can ensure that the necessary resources are available to users who need to work with large notebook sizes. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. You are familiar with OpenShift taints and tolerations, as described in Understanding taints and tolerations . Procedure From the OpenShift AI dashboard, click Settings Cluster settings . 
Under Notebook pod tolerations , select Add a toleration to notebook pods to allow them to be scheduled to tainted nodes . In the Toleration key for notebook pods field, enter a toleration key. The key is any string, up to 253 characters. The key must begin with a letter or number, and can contain letters, numbers, hyphens, dots, and underscores. For example, notebooks-only . Click Save changes . The toleration key is applied to new notebook pods when they are created. For existing notebook pods, the toleration key is applied when the notebook pods are restarted. If you are using Jupyter, see Updating notebook server settings by restarting your server . If you are using a workbench in a data science project, see Starting a workbench . Next step In OpenShift, add a matching taint key (with any value) to the machine pools that you want to dedicate to notebooks. For more information, see Controlling pod placement using node taints . For more information, see Adding taints to a machine pool . Verification In the OpenShift console, for a pod that is running, click Workloads Pods . Otherwise, for a pod that is stopped, click Workloads StatefulSet . Search for your workbench pod name and then click the name to open the pod details page. Confirm that the assigned Node and Tolerations are correct. 6.7. Troubleshooting common problems in Jupyter for administrators If your users are experiencing errors in Red Hat OpenShift AI relating to Jupyter, their notebooks, or their notebook server, read this section to understand what could be causing the problem, and how to resolve the problem. If you cannot see the problem here or in the release notes, contact Red Hat Support. 6.7.1. A user receives a 404: Page not found error when logging in to Jupyter Problem If you have configured OpenShift AI user groups, the user name might not be added to the default user group for OpenShift AI. Diagnosis Check whether the user is part of the default user group. Find the names of groups allowed access to Jupyter. Log in to the OpenShift web console. Click User Management Groups . Click the name of your user group, for example, rhoai-users . The Group details page for that group appears. Click the Details tab for the group and confirm that the Users section for the relevant group contains the users who have permission to access Jupyter. Resolution If the user is not added to any of the groups with permission to access Jupyter, follow Adding users to OpenShift AI user groups to add them. If the user is already added to a group with permission to access Jupyter, contact Red Hat Support. 6.7.2. A user's notebook server does not start Problem The OpenShift cluster that hosts the user's notebook server might not have access to enough resources, or the Jupyter pod may have failed. Diagnosis Log in to the OpenShift web console. Delete and restart the notebook server pod for this user. Click Workloads Pods and set the Project to rhods-notebooks . Search for the notebook server pod that belongs to this user, for example, jupyter-nb-<username>-* . If the notebook server pod exists, an intermittent failure may have occurred in the notebook server pod. If the notebook server pod for the user does not exist, continue with diagnosis. Check the resources currently available in the OpenShift cluster against the resources required by the selected notebook server image. If worker nodes with sufficient CPU and RAM are available for scheduling in the cluster, continue with diagnosis. Check the state of the Jupyter pod. 
Resolution If there was an intermittent failure of the notebook server pod: Delete the notebook server pod that belongs to the user. Ask the user to start their notebook server again. If the notebook server does not have sufficient resources to run the selected notebook server image, either add more resources to the OpenShift cluster, or choose a smaller image size. If the Jupyter pod is in a FAILED state: Retrieve the logs for the jupyter-nb-* pod and send them to Red Hat Support for further evaluation. Delete the jupyter-nb-* pod. If none of the resolutions apply, contact Red Hat Support. 6.7.3. The user receives a database or disk is full error or a no space left on device error when they run notebook cells Problem The user might have run out of storage space on their notebook server. Diagnosis Log in to Jupyter and start the notebook server that belongs to the user having problems. If the notebook server does not start, follow these steps to check whether the user has run out of storage space: Log in to the OpenShift web console. Click Workloads Pods and set the Project to rhods-notebooks . Click the notebook server pod that belongs to this user, for example, jupyter-nb-<idp>-<username>-* . Click Logs . The user has exceeded their available capacity if you see lines similar to the following: Resolution Increase the user's available storage by expanding their persistent volume: Expanding persistent volumes Work with the user to identify files that can be deleted from the /opt/app-root/src directory on their notebook server to free up their existing storage space. Note When you delete files using the JupyterLab file explorer, the files move to the hidden /opt/app-root/src/.local/share/Trash/files folder in the persistent storage for the notebook. To free up storage space for notebooks, you must permanently delete these files.
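The culling ConfigMap and the notebook server pod logs mentioned above can also be inspected from the CLI. A minimal sketch, assuming the default redhat-ods-applications and rhods-notebooks projects; the pod name suffix is a placeholder:
oc get configmap notebook-controller-culler-config -n redhat-ods-applications -o yaml
oc logs jupyter-nb-<username>-<suffix> -n rhods-notebooks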
[ "Unexpected error while saving file: XXXX database or disk is full" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/managing_resources/managing-notebook-servers_notebook-mgmt
B.62.2. RHSA-2010:0979 - Moderate: openssl security update
B.62.2. RHSA-2010:0979 - Moderate: openssl security update Updated openssl packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. OpenSSL is a toolkit that implements the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols, as well as a full-strength, general purpose cryptography library. CVE-2010-4180 A ciphersuite downgrade flaw was found in the OpenSSL SSL/TLS server code. A remote attacker could possibly use this flaw to change the ciphersuite associated with a cached session stored on the server, if the server enabled the SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG option, possibly forcing the client to use a weaker ciphersuite after resuming the session. Note Note that with this update, setting the SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG option has no effect and this bug workaround can no longer be enabled. All OpenSSL users should upgrade to these updated packages, which contain a backported patch to resolve this issue. For the update to take effect, all services linked to the OpenSSL library must be restarted, or the system rebooted.
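A minimal sketch of applying the update and restarting an affected service on Red Hat Enterprise Linux 6; httpd is only an example of a service linked against the OpenSSL library, and your environment may run different services:
yum update openssl
service httpd restart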
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2010-0979
12.24. Object Translator
12.24. Object Translator 12.24.1. Object Translator The Object translator is a bridge for reading Java objects from external sources, such as Map Cache, and delivering them to the engine for processing. To assist in providing that bridge, the OBJECTTABLE function must be used to transform the Java object into rows and columns. These are the types of object translators: map-cache - supports a local cache that is of type Map, using key searching. This translator is implemented by the org.teiid.translator.object.ObjectExecutionFactory class. Note See the Red Hat JBoss Data Grid resource adapter for this translator. It can be configured to look up the cache container through JNDI or through created sources (such as ConfigurationFileName or RemoteServerList). 12.24.2. Object Translator: Execution Properties The following execution properties are relevant to translating from JBoss Data Grid. Table 12.20. Execution Properties Name Description Required Default SupportsLuceneSearching Setting this to true assumes that your objects are annotated and that Hibernate/Lucene will be used to search the cache N false 12.24.3. Object Translator: Supported Capabilities The following are the connector capabilities when Key Searching is used: SELECT command CompareCriteria - only EQ InCriteria The following are the connector capabilities when Hibernate/Lucene Searching is enabled: SELECT command CompareCriteria - EQ, NE, LT, GT, etc. InCriteria OrCriteria And/Or Criteria Like Criteria INSERT, UPDATE, DELETE 12.24.4. Object Translator: Usage Retrieve objects from a cache and transform them into rows and columns. The primary object returned by the cache should have a name in source of 'this'. All other columns will have their name in source (which defaults to the column name) interpreted as the path to the column value from the primary object. All columns that are neither the primary key nor covered by a Lucene index should be marked as SEARCHABLE 'Unsearchable'.
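As an illustrative sketch only, a query against a map-cache source typically combines the cache table with OBJECTTABLE to flatten each cached object into a row; the table, column, and property names below are hypothetical and depend entirely on your own metadata model:
SELECT t.tradeId, t.name FROM Trades, OBJECTTABLE('o' PASSING Trades.object AS o COLUMNS tradeId long 'teiid_row.tradeId', name string 'teiid_row.name') AS t WHERE t.tradeId = 101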
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-object_translator
Getting Started
Getting Started Red Hat Enterprise Linux AI 1.3 Introduction to RHEL AI with product architecture and hardware requirements Red Hat RHEL AI Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/getting_started/index
Chapter 6. Migrating JBoss EAP 6.4 Configurations to JBoss EAP 7.4
Chapter 6. Migrating JBoss EAP 6.4 Configurations to JBoss EAP 7.4 6.1. Migrating a JBoss EAP 6.4 Standalone Server to JBoss EAP 7.4 By default, the JBoss Server Migration Tool performs the following tasks when migrating a standalone server configuration from JBoss EAP 6.4 to JBoss EAP 7.4. Remove any unsupported subsystems . Migrate any referenced modules . Migrate any referenced paths . Migrate the jacorb subsystem . Migrate the web subsystem . Migrate the messaging subsystem . Update the infinispan subsystem . Update the ee subsystem . Update the Jakarta Enterprise Beans subsystem . Update the jgroups subsystem . Update the remoting subsystem . Update the transactions subsystem . Update the undertow subsystem . Update the messaging-activemq subsystem . Add the batch-jberet subsystem . Add the core-management subsystem . Add the discovery Subsystem . Add the ee-security Subsystem . Add the elytron subsystem . Add the request-controller subsystem . Add the security-manager subsystem . Add the singleton subsystem . Set up HTTP Upgrade management . Set up the private interface . Add socket binding port expressions . Migrate compatible security realms . Add the default SSL server identity to the ApplicationRealm . Migrate deployments . 6.1.1. Remove Unsupported Subsystems The following JBoss EAP 6.4 subsystems are not supported by JBoss EAP 7.4 : Subsystem Name Configuration Namespace Extension Module cmp urn:jboss:domain:cmp:* org.jboss.as.cmp configadmin urn:jboss:domain:configadmin:* org.jboss.as.configadmin jaxr urn:jboss:domain:jaxr:* org.jboss.as.jaxr osgi urn:jboss:domain:osgi:* org.jboss.as.osgi threads urn:jboss:domain:threads:* org.jboss.as.threads The JBoss Server Migration Tool removes all unsupported subsystem configurations and extensions from migrated server configurations. The tool logs each subsystem and extension to its log file and to the console as it is removed. To skip removal of the unsupported subsystems, set the subsystems.remove-unsupported-subsystems.skip environment property to true . You can override the default behavior of the JBoss Server Migration Tool and specify which subsystems and extensions should be included or excluded during the migration using the following environment properties. Property Name Property Description extensions.excludes A list of module names of extensions that should never be migrated, for example, com.example.extension1,com.example.extension3 . extensions.includes A list of module names of extensions that should always be migrated, for example, com.example.extension2,com.example.extension4 . subsystems.excludes A list of subsystem namespaces, stripped of the version, that should never be migrated, for example, urn:jboss:domain:logging, urn:jboss:domain:ejb3 . subsystems.includes A list of subsystem namespaces, stripped of the version, that should always be migrated, for example, urn:jboss:domain:security, urn:jboss:domain:ee . 6.1.2. Migrate Referenced Modules A configuration that is migrated from a source server to a target server might reference or depend on a module that is not installed on the target server. The JBoss Server Migration Tool detects this and automatically migrates the referenced modules, plus their dependent modules, from the source server to the target server. A module referenced by a standalone server configuration is migrated using the following process. A module referenced by a security realm configuration is migrated as a plug-in module. 
A module referenced by the datasource subsystem configuration is migrated as a datasource driver module. A module referenced by the ee subsystem configuration is migrated as a global module. A module referenced by the naming subsystem configuration is migrated as an object factory module. A module referenced by the messaging subsystem configuration is migrated as a Jakarta Messaging bridge module. A module referenced by a vault configuration is migrated to the new configuration. Any extension that is not installed on the target configuration is migrated to the target server configuration. The console logs a message noting the module ID for any module that is migrated. It is possible to exclude the migration of specific modules by specifying the module ID in the modules.excludes environment property. See Configuring the Migration of Modules for more information. 6.1.3. Migrate Referenced Paths A configuration that is migrated from a source server to a target server might reference or depend on file paths and directories that must also be migrated to the target server. The JBoss Server Migration Tool does not migrate absolute path references. It only migrates files or directories that are configured as relative to the source configuration. The console logs a message noting each path that is migrated. The JBoss Server Migration Tool automatically migrates the following path references: Vault keystore and encrypted file's directory. To skip the migration of referenced paths, set the paths.migrate-paths-requested-by-configuration.vault.skip environment property to true . 6.1.4. Migrate the Jacorb Subsystem The jacorb subsystem is deprecated in JBoss EAP 7.4 and is replaced by the iiop-openjdk subsystem. By default, the JBoss Server Migration Tool automatically migrates the jacorb subsystem configuration to its replacement iiop-openjdk subsystem configuration and logs the results to its log file and to the console. To skip the automatic migration to the iiop-openjdk subsystem configuration, set the subsystem.jacorb.migrate.skip environment property value to true . 6.1.5. Migrate the Web Subsystem The web subsystem is deprecated in JBoss EAP 7.4 and is replaced by the undertow subsystem. By default, the JBoss Server Migration Tool automatically migrates the web subsystem configuration to its replacement undertow subsystem configuration and logs the results to its log file and to the console. To skip automatic migration of the web subsystem, set the subsystem.web.migrate.skip environment property value to true . 6.1.6. Migrate the Messaging Subsystem The messaging subsystem is deprecated in JBoss EAP 7.4 and is replaced by the messaging-activemq subsystem. The JBoss Server Migration Tool automatically migrates the messaging subsystem configuration to its replacement messaging-activemq subsystem configuration and logs the results to its log file and to the console. To skip automatic migration of the messaging subsystem, set the subsystem.messaging.migrate.skip environment property value to true . 6.1.7. Update the Infinispan Subsystem The JBoss Server Migration Tool updates the infinispan subsystem configuration to better align with the default JBoss EAP 7.4 configurations. It adds the Jakarta Enterprise Beans cache container, which is present in the JBoss EAP 7.4 default configuration, to configurations where it is not already included. It adds the server cache container, which is present in the JBoss EAP 7.4 default configuration. 
It updates the module name in the Hibernate cache container configuration. It adds the concurrent cache to the web cache container, which is present in the JBoss EAP 7.4 default configuration. The JBoss Server Migration Tool automatically updates the infinispan subsystem configuration and logs the results to its log file and to the console. You can customize the update of the infinispan subsystem by setting the following environment properties. Property Name Property Description subsystem.infinispan.update.skip If set to true , skip the update of the infinispan subsystem. subsystem.infinispan.update.add-infinispan-ejb-cache.skip If set to true , do not add the Jakarta Enterprise Beans cache container. subsystem.infinispan.update.add-infinispan-server-cache.skip If set to true , do not add the server cache container. subsystem.infinispan.update.fix-hibernate-cache-module-name.skip If set to true , do not update the module name in the Hibernate cache container configuration. subsystem.infinispan.update-infinispan-web-cache If set to true , do not add the concurrent cache to the web cache container. 6.1.8. Update the EE Subsystem The JBoss Server Migration Tool updates the ee subsystem to configure the Jakarta EE features supported in JBoss EAP 7.4. It configures instances of Jakarta EE concurrency utilities, such as container-managed executors, that are present in the JBoss EAP 7.4 default configuration and logs the results to its log file and to the console. It defines the default resources, such as the default datasource, that are present in the default JBoss EAP 6.4 configuration. If the resources are not found, the tool lists all available resources in the configuration, and then provides a prompt to select a resource from the list or to provide the Java Naming and Directory Interface address of the resource that should be set as the default. Note Java Naming and Directory Interface names that are specified are assumed to be valid. Java Naming and Directory Interface names are not validated by the tool. The JBoss Server Migration Tool automatically updates the ee subsystem configuration and logs the results to its log file and to the console. You can customize the update of the ee subsystem by setting the following environment properties. Property Name Property Description subsystem.ee.update.skip If set to true , skip the update of the ee subsystem. subsystem.ee.update.setup-ee-concurrency-utilities.skip If set to true , do not add the default instances of concurrency utilities. subsystem.ee.update.setup-javaee7-default-bindings.skip If set to true , do not set up Jakarta EE default resources. subsystem.ee.update.setup-javaee7-default-bindings.defaultDataSourceName Specifies the name of the default datasource to look for in the source configuration. subsystem.ee.update.setup-javaee7-default-bindings.defaultDataSourceJndiName Specifies the Java Naming and Directory Interface name for the default datasource. subsystem.ee.update.setup-javaee7-default-bindings.defaultJmsConnectionFactoryName Specifies the name of the default Jakarta Messaging connection factory. subsystem.ee.update.setup-javaee7-default-bindings.defaultJmsConnectionFactoryJndiName Specifies the Java Naming and Directory Interface name for the default Jakarta Messaging connection factory. 
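The environment properties listed in these tables are normally supplied to the JBoss Server Migration Tool before a run, typically in the tool's config/environment.properties file (the path shown is the commonly documented default; adjust it to your installation). A minimal sketch of a fragment that skips two of the updates described above:
subsystem.infinispan.update.add-infinispan-server-cache.skip=true
subsystem.ee.update.setup-ee-concurrency-utilities.skip=true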
Configuring Concurrency Utilities in the EE Subsystem If you choose to configure the Jakarta EE concurrency utilities, then the tool automatically configures the instances that are present in the default JBoss EAP 7.4 configurations and logs the results to its log file and to the console. Configuring Default Resources in the EE Subsystem When defining the Jakarta EE default resources the tool automatically selects those that are present in the default JBoss EAP 7.4 configuration. If no default resource is found, the tool lists all resources that are available in the configuration, and then provides a prompt to select the default resource or to provide the Java Naming and Directory Interface address of the resource that should be set as the default. The following is an example of the interaction that occurs when migrating a configuration file with an ExampleDS datasource. Note If you run the JBoss Server Migration Tool in non-interactive mode and the expected JBoss EAP 6.4 default resources, such as the default Jakarta Messaging connection factory, are not available, the tool does not configure those resources. 6.1.9. Update the Jakarta Enterprise Beans Subsystem The JBoss Server Migration Tool makes the following updates to the Jakarta Enterprise Beans subsystem to better align with the default JBoss EAP 7.4 configurations. It updates the remote service configuration to reference the HTTP connector. It configures the default-sfsb-passivation-disabled-cache attribute to use the default-sfsb-cache . It replaces the legacy passivation store and cache configurations with the JBoss EAP 7.4 default values. The JBoss Server Migration Tool automatically updates the Jakarta Enterprise Beans subsystem configuration and logs the results to its log file and to the console. Upon successful update of the Jakarta Enterprise Beans subsystem configuration, the JBoss Server Migration Tool logs the results to its log file and to the console. You can customize the update of the Jakarta Enterprise Beans subsystem by setting the following environment properties. Property Name Property Description subsystem.ejb3.update.skip If set to true , skip the update of the Jakarta Enterprise Beans subsystem. subsystem.ejb3.update.add-infinispan-passivation-store-and-distributable-cache.skip If set to true , do not replace the passivation-store and cache configurations. subsystem.ejb3.update.setup-default-sfsb-passivation-disabled-cache.skip If set to true , do not update the default-sfsb-passivation-disabled-cache configuration. subsystem.ejb3.update.activate-ejb3-remoting-http-connector.skip If set to true , do not update the ejb3 subsystem remoting configuration. 6.1.10. Update the JGroups Subsystem The JBoss Server Migration Tool updates the jgroups subsystem to align with the JBoss EAP 7.4 configurations. It replaces the MERGE2 protocol with MERGE3 . It replaces the FD protocol with FD_ALL . It replaces the pbcast.NAKACK protocol with pbcast.NAKACK2 . It replaces the UNICAST2 protocol with UNICAST3 . It removes the RSVP protocol. It replaces the FRAG2 protocol with the FRAG3 protocol. Upon successful migration of the jgroups subsystem configuration, the JBoss Server Migration Tool logs the results to its log file and to the console. To skip the automatic migration of the jgroups subsystem, set the subsystem.jgroups.update.skip environment property to true . 6.1.11. Update the Remoting Subsystem JBoss EAP 7.4 includes an HTTP connector that replaces all legacy remoting protocols and ports using a single port. 
The JBoss Server Migration Tool automatically updates the remoting subsystem to use the HTTP connector. To skip the automatic update of the remoting subsystem configuration, set the subsystem.remoting.update.skip environment property to true . 6.1.12. Update the Transactions Subsystem The JBoss Server Migration Tool updates the transactions subsystem with the configuration changes required by the JBoss EAP 7.4 server. The JBoss Server Migration Tool removes the path and relative-to attributes from the transactions subsystem and replaces them with the equivalent object-store-path and object-store-relative-to attributes. To skip the automatic update of the transactions subsystem configuration, set the subsystem.transactions.update-xml-object-store-paths.skip environment property to true . 6.1.13. Update the Undertow Subsystem In addition to migrating the web subsystem for JBoss EAP 7.4, the JBoss Server Migration Tool updates its replacement undertow subsystem to add the features it supports. It sets the default HTTP listener redirect socket. It adds support for Jakarta WebSockets. It sets the default HTTPS listener. It adds support for HTTP2. It removes the Server response header. It removes the X-Powered-By response header. It sets the default HTTP Invoker . The JBoss Server Migration Tool automatically updates the undertow subsystem configuration and logs the results to its log file and to the console. Upon successful migration of the undertow subsystem configuration, the JBoss Server Migration Tool logs the results to its log file and to the console. You can customize the update of the undertow system by setting the following environment properties. Property Name Property Description subsystem.undertow.update.skip If set to true , skip the update of the undertow subsystem. subsystem.undertow.update.set-default-http-listener-redirect-socket.skip If set to true , do not set the default HTTP listener redirect socket. subsystem.undertow.update.add-undertow-websockets.skip If set to true , do not add support for WebSockets. subsystem.undertow.update.add-undertow-https-listener.skip If set to true , do not set the default HTTPS listener. subsystem.undertow.update.enable-http2.skip If set to true , do not add support for HTTP2. subsystem.undertow.update.add-response-header.server-header.skip If set to true , do not set the default Server response header. subsystem.undertow.update.add-response-header.x-powered-by-header.skip If set to true , do not set the default X-Powered-By response header. subsystem.undertow.update.add-http-invoker.skip If set to true , do not set the default HTTP Invoker . 6.1.14. Update the Messaging-ActiveMQ Subsystem In addition to migrating the messaging subsystem for JBoss EAP 7.4, the JBoss Server Migration Tool updates its replacement messaging-activemq subsystem to add the new features it supports. It adds the default HTTP connector and acceptor to enable the HTTP-based remote messaging clients. The JBoss Server Migration Tool automatically updates the messaging-activemq subsystem configuration and logs the results to its log file and to the console. To skip the automatic update of the messaging-activemq subsystem, set the subsystem.messaging-activemq.update.skip environment property to true . 6.1.15. Add the Batch JBeret Subsystem The JBoss EAP 7.4 batch-jberet subsystem provides support for the Jakarta Batch 1.0 specification . The JBoss Server Migration Tool automatically adds the default batch-jberet subsystem configuration to the migrated configuration. 
To skip the addition of the batch-jberet subsystem configuration, set the subsystem.batch-jberet.add.skip environment property to true . 6.1.16. Add the Core Management Subsystem The JBoss EAP 7.4 core-management subsystem provides management-related resources, which were previously configured in the management core service. Examples of these resources include the ability to view a history of configuration changes made to the server and the ability to monitor for server lifecycle events. The JBoss Server Migration Tool automatically adds the default core-management subsystem configuration to the migrated configuration file. To skip the addition of the core-management subsystem configuration, set the subsystem.core-management.add.skip environment property to true . 6.1.17. Add the Discovery Subsystem The JBoss Server Migration Tool automatically adds the default discovery subsystem configuration to the migrated configuration file. To skip the addition of the discovery subsystem configuration, set the subsystem.discovery.add.skip environment property to true . 6.1.18. Add the EE Security Subsystem The JBoss EAP 7.4 ee-security subsystem provides support and compliance for Jakarta Security . The JBoss Server Migration Tool automatically adds the default ee-security subsystem configuration to the migrated configuration file. To skip the addition of the ee-security subsystem configuration, set the subsystem.ee-security.add.skip environment property to true . 6.1.19. Add the Elytron Subsystem The JBoss EAP 7.4 elytron subsystem provides a single unified security framework that can manage and configure access for both standalone servers and managed domains. It can also be used to configure security access for applications deployed to JBoss EAP servers. The JBoss Server Migration Tool automatically adds the default elytron subsystem configuration to the migrated configuration file. To skip the addition of the elytron subsystem configuration, set the subsystem.elytron.add.skip environment property to true . 6.1.20. Add the health subsystem The JBoss EAP 7.4 health subsystem provides support for a server's health functionality. The JBoss Server Migration Tool automatically adds the default health subsystem configuration to the migrated configuration file. To skip the addition of the health subsystem configuration, set the subsystem.health.add.skip environment property to true . After you add the health subsystem to JBoss EAP 7.4, you'll see the following message in your web console: 6.1.21. Add the metrics subsystem The JBoss EAP 7.4 metrics subsystem provides support for a server's metric functionality. The JBoss Server Migration Tool automatically adds the default metrics subsystem configuration to the migrated configuration file. To skip the addition of the metrics subsystem configuration, set the subsystem.metrics.add.skip environment property to true . After you add the metrics subsystem to JBoss EAP 7.4, you'll see the following message in your web console: 6.1.22. Add the Request Controller Subsystem The JBoss EAP 7.4 request-controller subsystem provides congestion control and graceful shutdown functionality. The JBoss Server Migration Tool automatically adds the default request-controller subsystem configuration to the migrated configuration file. To skip the addition of the request-controller subsystem configuration, set the subsystem.request-controller.add.skip environment property to true . 6.1.23. 
Add the Security Manager Subsystem The JBoss EAP 7.4 security-manager subsystem provides support for Jakarta Security permissions. The JBoss Server Migration Tool automatically adds the default security-manager subsystem configuration to the migrated configuration file. To skip the addition of the security-manager subsystem configuration, set the subsystem.security-manager.add.skip environment property to true . 6.1.24. Add the Singleton Subsystem The JBoss EAP 7.4 singleton subsystem provides singleton functionality. The JBoss Server Migration Tool automatically adds the default singleton subsystem configuration to the migrated configuration file. To skip the addition of the singleton subsystem configuration, set the subsystem.singleton.add.skip environment property to true . 6.1.25. Set Up HTTP Upgrade Management The addition of Undertow in JBoss EAP 7.4 added HTTP Upgrade, which allows for multiple protocols to be multiplexed over a single port. This means a management client can make an initial connection over HTTP, but then send a request to upgrade that connection to another protocol. The JBoss Server Migration Tool automatically updates the configuration to support HTTP Upgrade management. To skip configuration of HTTP Upgrade management, set the management.setup-http-upgrade.skip environment property to true . 6.1.26. Set Up the Private Interface The JBoss EAP 7.4 default configuration uses a private interface on all jgroups socket bindings. The JBoss Server Migration Tool automatically updates the migrated jgroups socket bindings to use same configuration. To skip the configuration of the private interface, set the interface.private.setup.skip environment property to true . 6.1.27. Add Socket Binding Port Expressions The JBoss EAP 7.4 default configurations use value expressions for the port attribute of the following socket bindings: ajp http https The JBoss Server Migration Tool automatically adds these value expressions to the migrated server configurations. To skip update of the socket binding port expressions, set the socket-bindings.add-port-expressions.skip environment property to true . 6.1.28. Add Socket Binding Multicast Address Expressions The JBoss EAP 7.4 default configuration uses value expressions in the multicast-address attribute of mod_cluster socket bindings. The JBoss Server Migration Tool automatically adds these value expressions to the migrated configuration files. To skip the addition of these expressions, set the socket-bindings.multicast-address.add-expressions.skip environment property to true . 6.1.29. Migrate Compatible Security Realms Because the JBoss EAP 7.4 security realm configurations are fully compatible with the JBoss EAP 6.4 security realm configurations, they require no update by the JBoss Server Migration Tool. However, if the application-users.properties , application-roles.properties , mgmt-users.properties , and mgmt-groups.properties files are not referenced using an absolute path, the tool copies them to the path expected by the migrated configuration file. To skip the security realms migration, set the security-realms.migrate-properties.skip environment property to true . 6.1.30. Add the Default SSL Server Identity to the ApplicationRealm The JBoss EAP 7.4 default configuration includes an SSL server identity for the default ApplicationRealm security realm. The JBoss Server Migration Tool automatically adds this identity to the migrated configuration files. 
To skip the addition of this identity, set the security-realm.ApplicationRealm.add-ssl-server-identity.skip environment property to true . 6.1.31. Migrate Deployments The JBoss Server Migration Tool can migrate the following types of standalone server deployment configurations. Deployments it references, also known as persistent deployments . Deployments found in directories monitored by its deployment scanners . Deployment overlays it references. The migration of a deployment consists of installing related file resources on the target server, and possibly updating the migrated configuration. The JBoss Server Migration Tool is preconfigured to skip deployments by default when running in non-interactive mode. To enable migration of deployments, set the deployments.migrate-deployments.skip environment property to false . Important Be aware that when you run the JBoss Server Migration Tool in interactive mode and enter invalid input, the resulting behavior depends on the value of the deployments.migrate-deployments environment property. If deployments.migrate-deployments.skip is set to false and you enter invalid input, the tool will try to migrate the deployments. If deployments.migrate-deployments.skip is set to true and you enter invalid input, the tool will skip the deployments migration. To enable the migration of specific types of deployments, see the following sections. Warning The JBoss Server Migration Tool does not determine whether deployed resources are compatible with the target server. This means that applications or resources might not deploy, might not work as expected, or might not work at all. Also be aware that artifacts such as JBoss EAP 6.4 *-jms.xml configuration files are copied without modification and can cause the JBoss EAP server to boot with errors. Red Hat recommends that you use the Migration Toolkit for Applications (MTA) to analyze deployments to determine compatibility among different JBoss EAP servers. For more information, see Product Documentation for Migration Toolkit for Applications . 6.1.31.1. Migrate Persistent Deployments To enable migration of persistent deployments when running in non-interactive mode, set the deployments.migrate-persistent-deployments.skip environment property to false . The JBoss Server Migration Tool searches for any persistent deployment references and lists them to the console. The processing workflow then depends on whether you are running the tool in interactive mode or in non-interactive mode , as described below. Migrating Persistent Deployments in Non-interactive Mode If you run the tool in non-interactive mode, the tool uses the preconfigured properties to determine whether to migrate the persistent deployments. Persistent deployments are migrated only if both the deployments.migrate-deployments.skip and deployments.migrate-persistent-deployments.skip properties are set to false . Migrating Persistent Deployments in Interactive Mode If you run the tool in interactive mode, the JBoss Server Migration Tool prompts you for each deployment using the following workflow. After printing the persistent deployments it finds to the console, you see the following prompt. Respond with yes to skip migration of persistent deployments. All deployment references are removed from the migrated configuration and you end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you see the following prompt. 
Respond with yes to automatically migrate all deployments and end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you receive a prompt asking to confirm the migration for each referenced deployment. Respond with yes to migrate the deployment. Respond with no to remove the deployment from the migrated configuration. 6.1.31.2. Migrate Deployment Scanner Deployments Deployment scanners, which are only used in standalone server configurations, monitor a directory for new files and manage their deployment automatically or through special deployment marker files. To enable migration of deployments that are located in directories watched by a deployment scanner when running in non-interactive mode, set the deployments.migrate-deployment-scanner-deployments.skip environment property to false . When migrating a standalone server configuration, the JBoss Server Migration Tool first searches for any configured deployment scanners. For each scanner found, it searches its monitored directories for deployments marked as deployed and prints the results to the console. The processing workflow then depends on whether you are running the tool in interactive mode or in non-interactive mode , as described below. Migrating Deployment Scanner Deployments in Non-interactive Mode If you run the tool in non-interactive mode, the tool uses the preconfigured properties to determine whether to migrate the deployment scanner deployments. Deployment scanner deployments are migrated only if both the deployments.migrate-deployments.skip and deployments.migrate-deployment-scanner-deployments.skip properties are set to false . Migrating Deployment Scanner Deployments in Interactive Mode If you run the tool in interactive mode, the JBoss Server Migration Tool prompts you for each deployment using the following workflow. After printing the deployment scanner deployments it finds to the console, you see the following prompt. Respond with yes to skip migration of deployment scanner deployments. All deployment references are removed from the migrated configuration and you end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you see the following prompt. Respond with yes to automatically migrate all deployments and end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you receive a prompt asking to confirm the migration for each referenced deployment. Respond with yes to migrate the deployment. Respond with no to remove the deployment from the migrated configuration. 6.1.31.3. Migrate Deployment Overlays The migration of deployment overlays is a fully automated process. If you have enabled migration of deployments by setting the deployments.migrate-deployments.skip environment property to false , the JBoss Server Migration Tool searches for deployment overlays referenced in the standalone server configuration that are linked to migrated deployments. It automatically migrates those that are found, removes those that are not referenced, and logs the results to its log file and to the console. 6.2. Migrating a JBoss EAP 6.4 managed domain to JBoss EAP 7.4 Warning When you use the JBoss Server Migration Tool, migrate your domain controller before you migrate your hosts to ensure that your domain controller uses a version of JBoss EAP that is the same as or later than the version used by its hosts. 
For example, a domain controller running on EAP 7.3 cannot handle a host running on EAP 7.4. Review Configure a JBoss EAP 7.x Domain Controller to Administer JBoss EAP 6 Instances in the Configuration Guide for JBoss EAP. Pay particular attention to the section entitled Prevent the JBoss EAP 6 Instances from Receiving JBoss EAP 7 Updates . For more information and to learn about the supported configurations, see Managing Multiple JBoss EAP Versions in the Configuration Guide for JBoss EAP. By default, the JBoss Server Migration Tool performs the following tasks when migrating a managed domain configuration from JBoss EAP 6.4 to JBoss EAP 7.4 Remove any unsupported subsystems . Migrate any referenced modules . Migrate any referenced paths . Migrate the jacorb subsystem . Migrate the web subsystem . Migrate the messaging subsystem . Update the infinispan subsystem . Update the ee subsystem . Update the Jakarta Enterprise Beans subsystem . Update the jgroups subsystem . Update the remoting subsystem . Update the transactions subsystem . Update the undertow subsystem . Update the messaging-activemq subsystem . Add the batch-jberet subsystem . Add the core-management subsystem . Add the elytron subsystem . Add the request-controller subsystem . Add the security-manager subsystem . Add the singleton subsystem . Update the unsecure interface . Set up the private interface . Add socket binding port expressions . Add socket binding multicast address expressions . Add the load balancer profile . Add the host excludes configuration . Remove the PermGen attributes from the JVM configurations . Migrate deployments . 6.2.1. Remove Unsupported Subsystems The following JBoss EAP 6.4 subsystems are not supported by JBoss EAP 7.4 : Subsystem Name Configuration Namespace Extension Module cmp urn:jboss:domain:cmp:* org.jboss.as.cmp configadmin urn:jboss:domain:configadmin:* org.jboss.as.configadmin jaxr urn:jboss:domain:jaxr:* org.jboss.as.jaxr osgi urn:jboss:domain:osgi:* org.jboss.as.osgi threads urn:jboss:domain:threads:* org.jboss.as.threads The JBoss Server Migration Tool removes all unsupported subsystem configurations and extensions from migrated server configurations. The tool logs each subsystem and extension to its log file and to the console as it is removed. To skip removal of the unsupported subsystems, set the subsystems.remove-unsupported-subsystems.skip environment property to true . You can override the default behavior of the JBoss Server Migration Tool and specify which subsystems and extensions should be included or excluded during the migration using the following environment properties. Property Name Property Description extensions.excludes A list of module names of extensions that should never be migrated, for example, com.example.extension1,com.example.extension3 . extensions.includes A list of module names of extensions that should always be migrated, for example, com.example.extension2,com.example.extension4 . subsystems.excludes A list of subsystem namespaces, stripped of the version, that should never be migrated, for example, urn:jboss:domain:logging, urn:jboss:domain:ejb3 . subsystems.includes A list of subsystem namespaces, stripped of the version, that should always be migrated, for example, urn:jboss:domain:security, urn:jboss:domain:ee . 6.2.2. Migrate Referenced Modules A configuration that is migrated from a source server to a target server might reference or depend on a module that is not installed on the target server. 
The JBoss Server Migration Tool detects this and automatically migrates the referenced modules, plus their dependent modules, from the source server to the target server. A module referenced by a managed domain configuration is migrated using the following process. A module referenced by a security realm configuration is migrated as a plug-in module. A module referenced by the datasource subsystem configuration is migrated as a datasource driver module. A module referenced by the ee subsystem configuration is migrated as a global module. A module referenced by the naming subsystem configuration is migrated as an object factory module. A module referenced by the messaging subsystem configuration is migrated as a Jakarta Messaging bridge module. A module referenced by a vault configuration is migrated to the new configuration. Any extension that is not installed on the target configuration is migrated to the target server configuration. The console logs a message noting the module ID for any module that is migrated. It is possible to exclude the migration of specific modules by specifying the module ID in the modules.excludes environment property. See Configuring the Migration of Modules for more information. 6.2.3. Migrate Referenced Paths A configuration that is migrated from a source server to a target server might reference or depend on file paths and directories that must also be migrated to the target server. The JBoss Server Migration Tool does not migrate absolute path references. It only migrates files or directories that are configured as relative to the source configuration. The console logs a message noting each path that is migrated. The JBoss Server Migration Tool automatically migrates the following path references: Vault keystore and encrypted file's directory. To skip the migration of referenced paths, set the paths.migrate-paths-requested-by-configuration.vault.skip environment property to true . 6.2.4. Migrate the Jacorb Subsystem The jacorb subsystem is deprecated in JBoss EAP 7.4 and is replaced by the iiop-openjdk subsystem. By default, the JBoss Server Migration Tool automatically migrates the jacorb subsystem configuration to its replacement iiop-openjdk subsystem configuration and logs the results to its log file and to the console. To skip the automatic migration to the iiop-openjdk subsystem configuration, set the subsystem.jacorb.migrate.skip environment property value to true . 6.2.5. Migrate the Web Subsystem The web subsystem is deprecated in JBoss EAP 7.4 and is replaced by the undertow subsystem. By default, the JBoss Server Migration Tool automatically migrates the web subsystem configuration to its replacement undertow subsystem configuration and logs the results to its log file and to the console. To skip automatic migration of the web subsystem, set the subsystem.web.migrate.skip environment property value to true . 6.2.6. Migrate the Messaging Subsystem The messaging subsystem is deprecated in JBoss EAP 7.4 and is replaced by the messaging-activemq subsystem. The JBoss Server Migration Tool automatically migrates the messaging subsystem configuration to its replacement messaging-activemq subsystem configuration and logs the results to its log file and to the console. To skip automatic migration of the messaging subsystem, set the subsystem.messaging.migrate.skip environment property value to true . 6.2.7. 
Update the Infinispan Subsystem The JBoss Server Migration Tool updates the infinispan subsystem configuration to better align with the default JBoss EAP 7.4 configurations. It adds the Jakarta Enterprise Beans cache container, which is present in the JBoss EAP 7.4 default configuration, to configurations where it is not already included. It adds the server cache container, which is present in the JBoss EAP 7.4 default configuration. It updates the module name in the Hibernate cache container configuration. It adds the concurrent cache to the web cache container, which is present in the JBoss EAP 7.4 default configuration. The JBoss Server Migration Tool automatically updates the infinispan subsystem configuration and logs the results to its log file and to the console. You can customize the update of the infinispan system by setting the following environment properties. Property Name Property Description subsystem.infinispan.update.skip If set to true , skip the update of the infinispan subsystem. subsystem.infinispan.update.add-infinispan-ejb-cache.skip If set to true , do not add the Jakarta Enterprise Beans cache container. subsystem.infinispan.update.add-infinispan-server-cache.skip If set to true , do not add the server cache container. subsystem.infinispan.update.fix-hibernate-cache-module-name.skip If set to true , do not update the module name in the Hibernate cache container configuration. subsystem.infinispan.update-infinispan-web-cache If set to true , do not add the concurrent cache to the web cache container. 6.2.8. Update the EE Subsystem The JBoss Server Migration Tool updates the ee subsystem to configure the Jakarta EE features supported in JBoss EAP 7.4. It configures instances of Jakarta EE concurrency utilities, such as container-managed executors, that are present in the JBoss EAP 7.4 default configuration and logs the results to its log file and to the console. It defines the default resources, such as the default datasource, that are present in the default JBoss EAP 6.4 configuration. If the resources are not found, the tool lists all available resources in the configuration, and then provides a prompt to select a resource from the list or to provide the Java Naming and Directory Interface address of the resource that should be set as the default. Note Java Naming and Directory Interface names that are specified are assumed to be valid. Java Naming and Directory Interface names are not validated by the tool. The JBoss Server Migration Tool automatically updates the ee subsystem configuration and logs the results to its log file and to the console. You can customize the update of the ee system by setting the following environment properties. Property Name Property Description subsystem.ee.update.skip If set to true , skip the update of the ee subsystem. subsystem.ee.update.setup-ee-concurrency-utilities.skip If set to true , do not add the default instances of concurrency utilities. subsystem.ee.update.setup-javaee7-default-bindings.skip If set to true , do not set up Jakarta EE default resources. subsystem.ee.update.setup-javaee7-default-bindings.defaultDataSourceName Specifies the name of the default datasource to look for in the source configuration. subsystem.ee.update.setup-javaee7-default-bindings.defaultDataSourceJndiName Specifies the Java Naming and Directory Interface name for the default datasource. 
subsystem.ee.update.setup-javaee7-default-bindings.defaultJmsConnectionFactoryName Specifies the name of the default Jakarta Messaging connection factory. subsystem.ee.update.setup-javaee7-default-bindings.defaultJmsConnectionFactoryJndiName Specifies the Java Naming and Directory Interface name for the default Jakarta Messaging connection factory. Configuring Concurrency Utilities in the EE Subsystem If you choose to configure the Jakarta EE concurrency utilities, then the tool automatically configures the instances that are present in the default JBoss EAP 7.4 configurations and logs the results to its log file and to the console. Configuring Default Resources in the EE Subsystem When defining the Jakarta EE default resources the tool automatically selects those that are present in the default JBoss EAP 7.4 configuration. If no default resource is found, the tool lists all resources that are available in the configuration, and then provides a prompt to select the default resource or to provide the Java Naming and Directory Interface address of the resource that should be set as the default. The following is an example of the interaction that occurs when migrating a configuration file with an ExampleDS datasource. Note If you run the JBoss Server Migration Tool in non-interactive mode and the expected JBoss EAP 6.4 default resources, such as the default Jakarta Messaging connection factory, are not available, the tool does not configure those resources. 6.2.9. Update the Jakarta Enterprise Beans Subsystem The JBoss Server Migration Tool makes the following updates to the Jakarta Enterprise Beans subsystem to better align with the default JBoss EAP 7.4 configurations. It updates the remote service configuration to reference the HTTP connector. It configures the default-sfsb-passivation-disabled-cache attribute to use the default-sfsb-cache . It replaces the legacy passivation store and cache configurations with the JBoss EAP 7.4 default values. The JBoss Server Migration Tool automatically updates the Jakarta Enterprise Beans subsystem configuration and logs the results to its log file and to the console. Upon successful update of the Jakarta Enterprise Beans subsystem configuration, the JBoss Server Migration Tool logs the results to its log file and to the console. You can customize the update of the Jakarta Enterprise Beans subsystem by setting the following environment properties. Property Name Property Description subsystem.ejb3.update.skip If set to true , skip the update of the Jakarta Enterprise Beans subsystem. subsystem.ejb3.update.add-infinispan-passivation-store-and-distributable-cache.skip If set to true , do not replace the passivation-store and cache configurations. subsystem.ejb3.update.setup-default-sfsb-passivation-disabled-cache.skip If set to true , do not update the default-sfsb-passivation-disabled-cache configuration. subsystem.ejb3.update.activate-ejb3-remoting-http-connector.skip If set to true , do not update the ejb3 subsystem remoting configuration. 6.2.10. Update the JGroups Subsystem The JBoss Server Migration Tool updates the jgroups subsystem to align with the JBoss EAP 7.4 configurations. It replaces the MERGE2 protocol with MERGE3 . It replaces the FD protocol with FD_ALL . It replaces the pbcast.NAKACK protocol with pbcast.NAKACK2 . It replaces the UNICAST2 protocol with UNICAST3 . It removes the RSVP protocol. It replaces the FRAG2 protocol with the FRAG3 protocol. 
Upon successful migration of the jgroups subsystem configuration, the JBoss Server Migration Tool logs the results to its log file and to the console. To skip the automatic migration of the jgroups subsystem, set the subsystem.jgroups.update.skip environment property to true . 6.2.11. Update the Remoting Subsystem JBoss EAP 7.4 includes an HTTP connector that replaces all legacy remoting protocols and ports using a single port. The JBoss Server Migration Tool automatically updates the remoting subsystem to use the HTTP connector. To skip the automatic update of the remoting subsystem configuration, set the subsystem.remoting.update.skip environment property to true . 6.2.12. Update the Transactions Subsystem The JBoss Server Migration Tool updates the transactions subsystem with the configuration changes required by the JBoss EAP 7.4 server. The JBoss Server Migration Tool removes the path and relative-to attributes from the transactions subsystem and replaces them with the equivalent object-store-path and object-store-relative-to attributes. To skip the automatic update of the transactions subsystem configuration, set the subsystem.transactions.update-xml-object-store-paths.skip environment property to true . 6.2.13. Update the Undertow Subsystem In addition to migrating the web subsystem for JBoss EAP 7.4, the JBoss Server Migration Tool updates its replacement undertow subsystem to add the features it supports. It sets the default HTTP listener redirect socket. It adds support for Jakarta WebSockets. It sets the default HTTPS listener. It adds support for HTTP2. It removes the Server response header. It removes the X-Powered-By response header. It sets the default HTTP Invoker . The JBoss Server Migration Tool automatically updates the undertow subsystem configuration and logs the results to its log file and to the console. Upon successful migration of the undertow subsystem configuration, the JBoss Server Migration Tool logs the results to its log file and to the console. You can customize the update of the undertow system by setting the following environment properties. Property Name Property Description subsystem.undertow.update.skip If set to true , skip the update of the undertow subsystem. subsystem.undertow.update.set-default-http-listener-redirect-socket.skip If set to true , do not set the default HTTP listener redirect socket. subsystem.undertow.update.add-undertow-websockets.skip If set to true , do not add support for WebSockets. subsystem.undertow.update.add-undertow-https-listener.skip If set to true , do not set the default HTTPS listener. subsystem.undertow.update.enable-http2.skip If set to true , do not add support for HTTP2. subsystem.undertow.update.add-response-header.server-header.skip If set to true , do not set the default Server response header. subsystem.undertow.update.add-response-header.x-powered-by-header.skip If set to true , do not set the default X-Powered-By response header. subsystem.undertow.update.add-http-invoker.skip If set to true , do not set the default HTTP Invoker . 6.2.14. Update the Messaging-ActiveMQ Subsystem In addition to migrating the messaging subsystem for JBoss EAP 7.4, the JBoss Server Migration Tool updates its replacement messaging-activemq subsystem to add the new features it supports. It adds the default HTTP connector and acceptor to enable the HTTP-based remote messaging clients. 
The JBoss Server Migration Tool automatically updates the messaging-activemq subsystem configuration and logs the results to its log file and to the console. To skip the automatic update of the messaging-activemq subsystem, set the subsystem.messaging-activemq.update.skip environment property to true . 6.2.15. Add the Batch JBeret Subsystem The JBoss EAP 7.4 batch-jberet subsystem provides support for the Jakarta Batch 1.0 specification . The JBoss Server Migration Tool automatically adds the default batch-jberet subsystem configuration to the migrated configuration. To skip the addition of the batch-jberet subsystem configuration, set the subsystem.batch-jberet.add.skip environment property to true . 6.2.16. Add the Core Management Subsystem The JBoss EAP 7.4 core-management subsystem provides management-related resources, which were previously configured in the management core service. Examples of these resources include the ability to view a history of configuration changes made to the server and the ability to monitor for server lifecycle events. The JBoss Server Migration Tool automatically adds the default core-management subsystem configuration to the migrated configuration file. To skip the addition of the core-management subsystem configuration, set the subsystem.core-management.add.skip environment property to true . 6.2.17. Add the Discovery Subsystem The JBoss Server Migration Tool automatically adds the default discovery subsystem configuration to the migrated configuration file. To skip the addition of the discovery subsystem configuration, set the subsystem.discovery.add.skip environment property to true . 6.2.18. Add the EE Security Subsystem The JBoss EAP 7.4 ee-security subsystem provides support and compliance for Jakarta Security . The JBoss Server Migration Tool automatically adds the default ee-security subsystem configuration to the migrated configuration file. To skip the addition of the ee-security subsystem configuration, set the subsystem.ee-security.add.skip environment property to true . 6.2.19. Add the Elytron Subsystem The JBoss EAP 7.4 elytron subsystem provides a single unified security framework that can manage and configure access for both standalone servers and managed domains. It can also be used to configure security access for applications deployed to JBoss EAP servers. The JBoss Server Migration Tool automatically adds the default elytron subsystem configuration to the migrated configuration file. To skip the addition of the elytron subsystem configuration, set the subsystem.elytron.add.skip environment property to true . 6.2.20. Add the Request Controller Subsystem The JBoss EAP 7.4 request-controller subsystem provides congestion control and graceful shutdown functionality. The JBoss Server Migration Tool automatically adds the default request-controller subsystem configuration to the migrated configuration file. To skip the addition of the request-controller subsystem configuration, set the subsystem.request-controller.add.skip environment property to true . 6.2.21. Add the Security Manager Subsystem The JBoss EAP 7.4 security-manager subsystem provides support for Jakarta Security permissions. The JBoss Server Migration Tool automatically adds the default security-manager subsystem configuration to the migrated configuration file. To skip the addition of the security-manager subsystem configuration, set the subsystem.security-manager.add.skip environment property to true . 6.2.22. 
Add the Singleton Subsystem The JBoss EAP 7.4 singleton subsystem provides singleton functionality. The JBoss Server Migration Tool automatically adds the default singleton subsystem configuration to the migrated configuration file. To skip the addition of the singleton subsystem configuration, set the subsystem.singleton.add.skip environment property to true . 6.2.23. Update the Unsecure Interface The JBoss Server Migration Tool automatically updates the unsecure interface configuration to align with the JBoss EAP 7.4 default configuration. To skip configuration of the unsecure interface, set the interface.unsecure.update.skip environment property to true . 6.2.24. Set Up the Private Interface The JBoss EAP 7.4 default configuration uses a private interface on all jgroups socket bindings. The JBoss Server Migration Tool automatically updates the migrated jgroups socket bindings to use the same configuration. To skip the configuration of the private interface, set the interface.private.setup.skip environment property to true . 6.2.25. Add Socket Binding Port Expressions The JBoss EAP 7.4 default configurations use value expressions for the port attribute of the following socket bindings: ajp http https The JBoss Server Migration Tool automatically adds these value expressions to the migrated server configurations. To skip update of the socket binding port expressions, set the socket-bindings.add-port-expressions.skip environment property to true . 6.2.26. Add Socket Binding Multicast Address Expressions The JBoss EAP 7.4 default configuration uses value expressions in the multicast-address attribute of mod_cluster socket bindings. The JBoss Server Migration Tool automatically adds these value expressions to the migrated configuration files. To skip the addition of these expressions, set the socket-bindings.multicast-address.add-expressions.skip environment property to true . 6.2.27. Add the Load Balancer Profile JBoss EAP 7.4 includes a default profile specifically tailored for hosts that serve as load balancers. The JBoss Server Migration Tool automatically adds and configures this profile to all migrated managed domain configurations. To skip the addition of this profile, set the profile.load-balancer.add.skip environment property to true . 6.2.28. Add Host Excludes The JBoss EAP 7.4 domain controller can potentially include functionality that is not supported by hosts running on older versions of the server. The host-exclude configuration specifies the resources that should be hidden from those older versions. When migrating a domain controller configuration, the JBoss Server Migration Tool adds to or replaces the source server's host-exclude configuration with the configuration of the target JBoss EAP 7.4 server. The JBoss Server Migration Tool automatically updates the host-exclude configuration and logs the results to its log file and to the console. 6.2.29. Remove the PermGen Attributes from the JVM Configurations The usage of PermGen attributes in JVM configurations is deprecated in JBoss EAP 7. The JBoss Server Migration Tool automatically removes them from all JVM configurations for all server groups. To skip removal of the PermGen attributes, set the jvms.remove-permgen-attributes.skip environment property value to true . 6.2.30. Migrate Deployments The JBoss Server Migration Tool can migrate the following types of managed domain deployment configurations. Deployments it references, also known as persistent deployments . Deployment overlays it references. 
The migration of a deployment consists of installing related file resources on the target server, and possibly updating the migrated configuration. The JBoss Server Migration Tool is preconfigured to skip deployments by default when running in non-interactive mode. To enable migration of deployments, set the deployments.migrate-deployments.skip environment property to false . Important Be aware that when you run the JBoss Server Migration Tool in interactive mode and enter invalid input, the resulting behavior depends on the value of the deployments.migrate-deployments environment property. If deployments.migrate-deployments.skip is set to false and you enter invalid input, the tool will try to migrate the deployments. If deployments.migrate-deployments.skip is set to true and you enter invalid input, the tool will skip the deployments migration. To enable the migration of specific types of deployments, see the following sections. Warning The JBoss Server Migration Tool does not determine whether deployed resources are compatible with the target server. This means that applications or resources might not deploy, might not work as expected, or might not work at all. Also be aware that artifacts such as JBoss EAP 6.4 *-jms.xml configuration files are copied without modification and can cause the JBoss EAP server to boot with errors. Red Hat recommends that you use the Migration Toolkit for Applications (MTA) to analyze deployments to determine compatibility among different JBoss EAP servers. For more information, see Product Documentation for Migration Toolkit for Applications . 6.2.30.1. Migrate Persistent Deployments To enable migration of persistent deployments when running in non-interactive mode, set the deployments.migrate-persistent-deployments.skip environment property to false . The JBoss Server Migration Tool searches for any persistent deployment references and lists them to the console. The processing workflow then depends on whether you are running the tool in interactive mode or in non-interactive mode , as described below. Migrating Persistent Deployments in Non-interactive Mode If you run the tool in non-interactive mode, the tool uses the preconfigured properties to determine whether to migrate the persistent deployments. Persistent deployments are migrated only if both the deployments.migrate-deployments.skip and deployments.migrate-persistent-deployments.skip properties are set to false . Migrating Persistent Deployments in Interactive Mode If you run the tool in interactive mode, the JBoss Server Migration Tool prompts you for each deployment using the following workflow. After printing the persistent deployments it finds to the console, you see the following prompt. Respond with yes to skip migration of persistent deployments. All deployment references are removed from the migrated configuration and you end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you see the following prompt. Respond with yes to automatically migrate all deployments and end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you receive a prompt asking to confirm the migration for each referenced deployment. Respond with yes to migrate the deployment. Respond with no to remove the deployment from the migrated configuration. 6.2.30.2. Migrate Deployment Overlays The migration of deployment overlays is a fully automated process. 
If you have enabled migration of deployments by setting the deployments.migrate-deployments.skip environment property to false , the JBoss Server Migration Tool searches for deployment overlays referenced in the standalone server configuration that are linked to migrated deployments. It automatically migrates those that are found, removes those that are not referenced, and logs the results to its log file and to the console. 6.3. Migrating a JBoss EAP 6.4 Host Configuration to JBoss EAP 7.4 By default, the JBoss Server Migration Tool performs the following tasks when migrating a host server configuration from JBoss EAP 6.4 to JBoss EAP 7.4. Migrate any referenced modules . Migrate any referenced paths . Add the core-management subsystem . Add the elytron subsystem . Add the jmx subsystem . Remove the unsecure interface . Set up HTTP Upgrade management . Remove the PermGen attributes from the JVM configurations . Migrate compatible security realms . Add the default SSL server identity to the ApplicationRealm . 6.3.1. Migrate Referenced Modules A configuration that is migrated from a source server to a target server might reference or depend on a module that is not installed on the target server. The JBoss Server Migration Tool detects this and automatically migrates the referenced modules, plus their dependent modules, from the source server to the target server. A module referenced by a host server configuration is migrated using the following process. A module referenced by a security realm configuration is migrated as a plug-in module. The console logs a message noting the module ID for any module that is migrated. It is possible to exclude the migration of specific modules by specifying the module ID in the modules.excludes environment property. See Configuring the Migration of Modules for more information. 6.3.2. Migrate Referenced Paths A configuration that is migrated from a source server to a target server might reference or depend on file paths and directories that must also be migrated to the target server. The JBoss Server Migration Tool does not migrate absolute path references. It only migrates files or directories that are configured as relative to the source configuration. The console logs a message noting each path that is migrated. The JBoss Server Migration Tool automatically migrates the following path references: Vault keystore and encrypted file's directory. To skip the migration of referenced paths, set the paths.migrate-paths-requested-by-configuration.vault.skip environment property to true . 6.3.3. Add the Core Management Subsystem The JBoss EAP 7.4 core-management subsystem provides management-related resources, which were previously configured in the management core service. Examples of these resources include the ability to view a history of configuration changes made to the server and the ability to monitor for server lifecycle events. The JBoss Server Migration Tool automatically adds the default core-management subsystem configuration to the migrated configuration file. To skip the addition of the core-management subsystem configuration, set the subsystem.core-management.add.skip environment property to true . 6.3.4. Add the Elytron Subsystem The JBoss EAP 7.4 elytron subsystem provides a single unified security framework that can manage and configure access for both standalone servers and managed domains. It can also be used to configure security access for applications deployed to JBoss EAP servers. 
The JBoss Server Migration Tool automatically adds the default elytron subsystem configuration to the migrated configuration file. To skip the addition of the elytron subsystem configuration, set the subsystem.elytron.add.skip environment property to true . 6.3.5. Add the JMX Subsystem to the Host Configuration The JBoss EAP 7.4 jmx subsystem provides the ability to manage and monitor systems. The JBoss Server Migration Tool automatically adds this subsystem to the migrated configuration file. To skip the addition of the jmx subsystem configuration, set the subsystem.jmx.add.skip environment property to true . 6.3.6. Remove the unsecure Interface The JBoss Server Migration Tool automatically removes the unsecure interface configuration to align with the JBoss EAP 7.4 default configuration. To skip removal of the unsecure interface, set the interface.unsecure.remove.skip environment property to true . 6.3.7. Set Up HTTP Upgrade Management The addition of Undertow in JBoss EAP 7.4 added HTTP Upgrade, which allows for multiple protocols to be multiplexed over a single port. This means a management client can make an initial connection over HTTP, but then send a request to upgrade that connection to another protocol. The JBoss Server Migration Tool automatically updates the configuration to support HTTP Upgrade management. To skip configuration of HTTP Upgrade management, set the management.setup-http-upgrade.skip environment property to true . 6.3.8. Remove the PermGen Attributes from the JVM Configurations The usage of PermGen attributes in JVM configurations is deprecated in JBoss EAP 7. The JBoss Server Migration Tool automatically removes them from all JVM configurations for all server groups. To skip removal of the PermGen attributes, set the jvms.remove-permgen-attributes.skip environment property value to true . 6.3.9. Migrate Compatible Security Realms Because the JBoss EAP 7.4 security realm configurations are fully compatible with the JBoss EAP 6.4 security realm configurations, they require no update by the JBoss Server Migration Tool. However, if the application-users.properties , application-roles.properties , mgmt-users.properties , and mgmt-groups.properties files are not referenced using an absolute path, the tool copies them to the path expected by the migrated configuration file. To skip the security realms migration, set the security-realms.migrate-properties.skip environment property to true . 6.3.10. Add the Default SSL Server Identity to the ApplicationRealm The JBoss EAP 7.4 default configuration includes an SSL server identity for the default ApplicationRealm security realm. The JBoss Server Migration Tool automatically adds this identity to the migrated configuration files. To skip the addition of this identity, set the security-realm.ApplicationRealm.add-ssl-server-identity.skip environment property to true .
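The environment properties named throughout this chapter do not have to be set one at a time on each run. The following snippet is a minimal, illustrative sketch of how several of them might be collected together, assuming the tool reads environment properties from the config/environment.properties file in its installation directory (that file name and location are an assumption, not taken from this chapter); the property names are the ones defined above and the values shown are examples only.
# Illustrative environment properties for a host configuration migration
# Keep the jmx subsystem out of the migrated host configuration
subsystem.jmx.add.skip=true
# Keep the unsecure interface instead of removing it
interface.unsecure.remove.skip=true
# Copy the security realm properties files referenced by the source configuration
security-realms.migrate-properties.skip=false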
[ "INFO [ServerMigrationTask#49] Default ContextService added to EE subsystem configuration. INFO [ServerMigrationTask#49] Default ManagedThreadFactory added to EE subsystem configuration. INFO [ServerMigrationTask#49] Default ManagedExecutorService added to EE subsystem configuration. INFO [ServerMigrationTask#49] Default ManagedScheduledExecutorService added to EE subsystem configuration.", "INFO [ServerMigrationTask#50] Java EE Default Datasource configured with Java Naming and Directory Interface and name java:jboss/datasources/ExampleDS.", "INFO [ServerMigrationTask#22] Default datasource not found. 0. ExampleDS 1. Unconfigured data source, I want to enter the Java Naming and Directory Interface name Please select Java EE's Default Datasource: (0): 0 INFO [ServerMigrationTask#22] Java EE Default Datasource configured with Java Naming and Directory Interface name java:jboss/datasources/ExampleDS. Save this Java EE Default Datasource Java Naming and Directory Interface name and use it when migrating other config files? yes/no? y", "INFO Subsystem ejb3 updated.", "INFO Subsystem jgroups updated.", "INFO Subsystem undertow updated.", "INFO Subsystem health added.", "INFO Subsystem metrics added.", "INFO [ServerMigrationTask#67] Persistent deployments found: [cmtool-helloworld3.war, cmtool-helloworld4.war, cmtool-helloworld2.war, cmtool-helloworld1.war]", "This tool is not able to assert if persistent deployments found are compatible with the target server, skip persistent deployments migration? yes/no?", "Migrate all persistent deployments found? yes/no?", "Migrate persistent deployment 'helloworld01.war'? yes/no?", "INFO [ServerMigrationTask#68] Removed persistent deployment from configuration /deployment=helloworld01.war", "This tool is not able to assert if the scanner's deployments found are compatible with the target server, skip scanner's deployments migration? yes/no?", "Migrate all scanner's deployments found? yes/no?", "Migrate scanner's deployment 'helloworld02.war'? yes/no?", "INFO [ServerMigrationTask#69] Resource with path EAP_HOME /standalone/deployments/helloworld02.war migrated.", "INFO [ServerMigrationTask#49] Default ContextService added to EE subsystem configuration. INFO [ServerMigrationTask#49] Default ManagedThreadFactory added to EE subsystem configuration. INFO [ServerMigrationTask#49] Default ManagedExecutorService added to EE subsystem configuration. INFO [ServerMigrationTask#49] Default ManagedScheduledExecutorService added to EE subsystem configuration.", "INFO [ServerMigrationTask#50] Java EE Default Datasource configured with Java Naming and Directory Interface and name java:jboss/datasources/ExampleDS.", "INFO [ServerMigrationTask#22] Default datasource not found. 0. ExampleDS 1. Unconfigured data source, I want to enter the Java Naming and Directory Interface name Please select Java EE's Default Datasource: (0): 0 INFO [ServerMigrationTask#22] Java EE Default Datasource configured with Java Naming and Directory Interface name java:jboss/datasources/ExampleDS. Save this Java EE Default Datasource Java Naming and Directory Interface name and use it when migrating other config files? yes/no? 
y", "INFO Subsystem ejb3 updated.", "INFO Subsystem jgroups updated.", "INFO Subsystem undertow updated.", "INFO Host-excludes configuration added.", "INFO [ServerMigrationTask#67] Persistent deployments found: [cmtool-helloworld3.war, cmtool-helloworld4.war, cmtool-helloworld2.war, cmtool-helloworld1.war]", "This tool is not able to assert if persistent deployments found are compatible with the target server, skip persistent deployments migration? yes/no?", "Migrate all persistent deployments found? yes/no?", "Migrate persistent deployment 'helloworld01.war'? yes/no?", "INFO [ServerMigrationTask#68] Removed persistent deployment from configuration /deployment=helloworld01.war" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_the_jboss_server_migration_tool/migrating_jboss_eap_6_4_configurations_to_jboss_eap_7_4
Chapter 6. Configuring Cluster Resources
Chapter 6. Configuring Cluster Resources This chapter provides information on configuring resources in a cluster. 6.1. Resource Creation Use the following command to create a cluster resource. When you specify the --group option, the resource is added to the resource group named. If the group does not exist, this creates the group and adds this resource to the group. For information on resource groups, see Section 6.5, "Resource Groups" . The --before and --after options specify the position of the added resource relative to a resource that already exists in a resource group. Specifying the --disabled option indicates that the resource is not started automatically. The following command creates a resource with the name VirtualIP of standard ocf , provider heartbeat , and type IPaddr2 . The floating address of this resource is 192.168.0.120, the system will check whether the resource is running every 30 seconds. Alternately, you can omit the standard and provider fields and use the following command. This will default to a standard of ocf and a provider of heartbeat . Use the following command to delete a configured resource. For example, the following command deletes an existing resource with a resource ID of VirtualIP For information on the resource_id , standard , provider , and type fields of the pcs resource create command, see Section 6.2, "Resource Properties" . For information on defining resource parameters for individual resources, see Section 6.3, "Resource-Specific Parameters" . For information on defining resource meta options, which are used by the cluster to decide how a resource should behave, see Section 6.4, "Resource Meta Options" . For information on defining the operations to perform on a resource, see Section 6.6, "Resource Operations" . Specifying the clone option creates a clone resource. Specifying the master option creates a master/slave resource. For information on resource clones and resources with multiple modes, see Chapter 9, Advanced Configuration .
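For example, to create a resource inside a resource group and leave it stopped until you are ready to start it, you can combine the --group and --disabled options in a single command. The following command is a minimal sketch that reuses the VirtualIP example above; the resource name VirtualIP2, the address 192.168.0.121, and the group name examplegroup are hypothetical values.
pcs resource create VirtualIP2 ocf:heartbeat:IPaddr2 ip=192.168.0.121 cidr_netmask=24 op monitor interval=30s --group examplegroup --disabled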
[ "pcs resource create resource_id [ standard :[ provider :]] type [ resource_options ] [op operation_action operation_options [ operation_action operation options ]...] [meta meta_options ...] [clone [ clone_options ] | master [ master_options ] | --group group_name [--before resource_id | --after resource_id ] | [bundle bundle_id ] [--disabled] [--wait[= n ]]", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s", "pcs resource create VirtualIP IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s", "pcs resource delete resource_id", "pcs resource delete VirtualIP" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-clustresources-HAAR
Chapter 23. Isolating CPUs using tuned-profiles-real-time
Chapter 23. Isolating CPUs using tuned-profiles-real-time To give application threads the most execution time possible, you can isolate CPUs. Therefore, remove as many extraneous tasks from a CPU as possible. Isolating CPUs generally involves: Removing all user-space threads. Removing any unbound kernel threads (kernel-related bound threads are linked to a specific CPU and cannot be moved). Removing interrupts by modifying the /proc/irq/N/smp_affinity property of each Interrupt Request (IRQ) number N in the system. By using the isolated_cores=cpulist configuration option of the tuned-profiles-realtime package, you can automate operations to isolate a CPU. Prerequisites You have administrator privileges. 23.1. Choosing CPUs to isolate Choosing the CPUs to isolate requires careful consideration of the CPU topology of the system. Different use cases require different configuration: If you have a multi-threaded application where threads need to communicate with one another by sharing cache, they need to be kept on the same NUMA node or physical socket. If you run multiple unrelated real-time applications, separating the CPUs by NUMA node or socket can be suitable. The hwloc package provides utilities that are useful for getting information about CPUs, including lstopo-no-graphics and numactl . Prerequisites The hwloc package is installed. Procedure View the layout of available CPUs in physical packages: Figure 23.1. Showing the layout of CPUs using lstopo-no-graphics This command is useful for multi-threaded applications, because it shows how many cores and sockets are available and the logical distance of the NUMA nodes. Additionally, the hwloc-gui package provides the lstopo utility, which produces graphical output. View more information about the CPUs, such as the distance between nodes: Additional resources hwloc(7) man page on your system 23.2. Isolating CPUs using TuneD's isolated_cores option The initial mechanism for isolating CPUs is specifying the boot parameter isolcpus=cpulist on the kernel boot command line. The recommended way to do this for RHEL for Real Time is to use the TuneD daemon and its tuned-profiles-realtime package. Note In tuned-profiles-realtime version 2.19 and later, the built-in function calc_isolated_cores applies the initial CPU setup automatically. The /etc/tuned/realtime-variables.conf configuration file includes the default variable content as isolated_cores=USD{f:calc_isolated_cores:2} . By default, calc_isolated_cores reserves one core per socket for housekeeping and isolates the rest. If you must change the default configuration, comment out the isolated_cores=USD{f:calc_isolated_cores:2} line in the /etc/tuned/realtime-variables.conf configuration file and follow the procedure steps for Isolating CPUs using TuneD's isolated_cores option. Prerequisites The TuneD and tuned-profiles-realtime packages are installed. You have root permissions on the system. Procedure As a root user, open /etc/tuned/realtime-variables.conf in a text editor. Set isolated_cores= cpulist to specify the CPUs that you want to isolate. You can use CPU numbers and ranges. Examples: This isolates cores 0, 1, 2, 3, 5, and 7. In a two socket system with 8 cores, where NUMA node 0 has cores 0-3 and NUMA node 1 has cores 4-7, to allocate two cores for a multi-threaded application, specify: 
To pick CPUs from different NUMA nodes for unrelated applications, specify: This prevents any user-space threads from being assigned to CPUs 0 and 4. Activate the real-time TuneD profile using the tuned-adm utility. Reboot the machine for changes to take effect. Verification Search for the isolcpus parameter in the kernel command line: 23.3. Isolating CPUs using the nohz and nohz_full parameters The nohz and nohz_full parameters modify activity on specified CPUs. To enable these kernel boot parameters, you need to use one of the following TuneD profiles: realtime-virtual-host , realtime-virtual-guest , or cpu-partitioning . nohz=on Reduces timer activity on a particular set of CPUs. The nohz parameter is mainly used to reduce timer interrupts on idle CPUs. This helps battery life by allowing idle CPUs to run in reduced power mode. Although the nohz parameter does not directly improve real-time response time, it does not negatively impact real-time response time either. However, the nohz parameter is required to activate the nohz_full parameter, which does have positive implications for real-time performance. nohz_full= cpulist The nohz_full parameter treats the timer ticks of a list of specified CPUs differently. If a CPU is specified as a nohz_full CPU and there is only one runnable task on the CPU, then the kernel stops sending timer ticks to that CPU. As a result, more time may be spent running the application and less time spent servicing interrupts and context switching. Additional resources Configuring Kernel Tick Time
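As a rough illustration of the end result, on a system where CPUs 4-7 have been isolated and one of the profiles listed above has been applied, the kernel command line might contain entries similar to the following; the exact set of parameters depends on the profile and on your isolated_cores setting, so treat this as a sketch rather than expected output.
cat /proc/cmdline | grep nohz_full
BOOT_IMAGE=/vmlinuz-... root=... isolcpus=4-7 nohz=on nohz_full=4-7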
[ "lstopo-no-graphics --no-io --no-legend --of txt", "numactl --hardware available: 2 nodes (0-1) node 0 cpus: 0 1 2 3 node 0 size: 16159 MB node 0 free: 6323 MB node 1 cpus: 4 5 6 7 node 1 size: 16384 MB node 1 free: 10289 MB node distances: node 0 1 0: 10 21 1: 21 10", "isolated_cores=0-3,5,7", "isolated_cores=4,5", "isolated_cores=0,4", "tuned-adm profile realtime", "cat /proc/cmdline | grep isolcpus BOOT_IMAGE=/vmlinuz-4.18.0-305.rt7.72.el8.x86_64 root=/dev/mapper/rhel_foo-root ro crashkernel=auto rd.lvm.lv=rhel_foo/root rd.lvm.lv=rhel_foo/swap console=ttyS0,115200n81 isolcpus=0,4" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_isolating-cpus-using-tuned-profiles-realtime_optimizing-rhel9-for-real-time-for-low-latency-operation
Chapter 16. Open Container Initiative support
Chapter 16. Open Container Initiative support Container registries were originally designed to support container images in the Docker image format. To promote the use of additional runtimes apart from Docker, the Open Container Initiative (OCI) was created to provide a standardization surrounding container runtimes and image formats. Most container registries support the OCI standardization as it is based on the Docker image manifest V2, Schema 2 format. In addition to container images, a variety of artifacts have emerged that support not just individual applications, but also the Kubernetes platform as a whole. These range from Open Policy Agent (OPA) policies for security and governance to Helm charts and Operators that aid in application deployment. Red Hat Quay is a private container registry that not only stores container images, but also supports an entire ecosystem of tooling to aid in the management of containers. Red Hat Quay strives to be as compatible as possible with the OCI 1.1 Image and Distribution specifications , and supports common media types like Helm charts (as long as they are pushed with a version of Helm that supports OCI) and a variety of arbitrary media types within the manifest or layer components of container images. Support for OCI media types differs from earlier iterations of Red Hat Quay, when the registry was more strict about accepted media types. Because Red Hat Quay now works with a wider array of media types, including those that were previously outside the scope of its support, it is now more versatile, accommodating not only standard container image formats but also emerging or unconventional types. In addition to its expanded support for novel media types, Red Hat Quay ensures compatibility with Docker images, including V2_2 and V2_1 formats. This compatibility with Docker V2_2 and V2_1 images demonstrates Red Hat Quay's commitment to providing a seamless experience for Docker users. Moreover, Red Hat Quay continues to extend its support for Docker V1 pulls, catering to users who might still rely on this earlier version of Docker images. Support for OCI artifacts is enabled by default. The following examples show you how to use some media types, which can be used as examples for using other OCI media types. 16.1. Helm and OCI prerequisites Helm simplifies how applications are packaged and deployed. Helm uses a packaging format called Charts which contain the Kubernetes resources representing an application. Red Hat Quay supports Helm charts as long as they are pushed with a version of Helm that supports OCI. Use the following procedures to pre-configure your system to use Helm and other OCI media types. The most recent version of Helm can be downloaded from the Helm releases page. After you have downloaded Helm, you must enable your system to trust SSL/TLS certificates used by Red Hat Quay. 16.1.1. Enabling your system to trust SSL/TLS certificates used by Red Hat Quay Communication between the Helm client and Red Hat Quay is facilitated over HTTPS. As of Helm 3.5, support is only available for registries communicating over HTTPS with trusted certificates. In addition, the operating system must trust the certificates exposed by the registry. You must ensure that your operating system has been configured to trust the certificates used by Red Hat Quay. Use the following procedure to enable your system to trust the custom certificates. 
Procedure Enter the following command to copy the rootCA.pem file to the /etc/pki/ca-trust/source/anchors/ folder: USD sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/ Enter the following command to update the CA trust store: USD sudo update-ca-trust extract 16.2. Using Helm charts Use the following example to download and push an etherpad chart from the Red Hat Community of Practice (CoP) repository. Prerequisites You have logged in to Red Hat Quay. Procedure Add a chart repository by entering the following command: USD helm repo add redhat-cop https://redhat-cop.github.io/helm-charts Enter the following command to update the information of available charts locally from the chart repository: USD helm repo update Enter the following command to pull a chart from a repository: USD helm pull redhat-cop/etherpad --version=0.0.4 --untar Enter the following command to package the chart into a chart archive: USD helm package ./etherpad Example output Successfully packaged chart and saved it to: /home/user/linux-amd64/etherpad-0.0.4.tgz Log in to Red Hat Quay using helm registry login : USD helm registry login quay370.apps.quayperf370.perfscale.devcluster.openshift.com Push the chart to your repository using the helm push command: USD helm push etherpad-0.0.4.tgz oci://quay370.apps.quayperf370.perfscale.devcluster.openshift.com Example output: Pushed: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:a6667ff2a0e2bd7aa4813db9ac854b5124ff1c458d170b70c2d2375325f2451b Ensure that the push worked by deleting the local copy, and then pulling the chart from the repository: USD rm -rf etherpad-0.0.4.tgz USD helm pull oci://quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad --version 0.0.4 Example output: Pulled: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:4f627399685880daf30cf77b6026dc129034d68c7676c7e07020b70cf7130902 16.3. Annotation parsing Some OCI media types do not utilize labels and, as such, critical information, such as expiration timestamps, is not included. To accommodate OCI media types that do not include these labels, Red Hat Quay supports metadata passed through annotations. Tools such as ORAS (OCI Registry as Storage) can now be used to embed that information, for example an expiration date, alongside an artifact so that the image behaves as expected. The following procedure uses ORAS to add an expiration date to an OCI media artifact. Important If you push an image with podman push and then add an annotation with oras , the MIME type is changed. Consequently, you will not be able to pull the same image with podman pull because Podman does not recognize that MIME type. Prerequisites You have downloaded the oras CLI. For more information, see Installation . You have pushed an OCI media artifact to your Red Hat Quay repository. Procedure By default, some OCI media types, like application/vnd.oci.image.manifest.v1+json , do not use certain labels, like expiration timestamps. You can use a CLI tool like ORAS ( oras ) to add annotations to OCI media types. For example: USD oras push --annotation "quay.expires-after=2d" \ 1 --annotation "expiration = 2d" \ 2 quay.io/<organization_name>/<repository>/<image_name>:<tag> 1 Sets the expiration time to 2 days, indicated by 2d . 2 Adds the expiration label.
Example output [✓] Exists application/vnd.oci.empty.v1+json 2/2 B 100.00% 0s └─ sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a [✓] Uploaded application/vnd.oci.image.manifest.v1+json 561/561 B 100.00% 511ms └─ sha256:9b4f2d43b62534423894d077f0ff0e9e496540ec8b52b568ea8b757fc9e7996b Pushed [registry] quay.io/stevsmit/testorg3/oci-image:v1 ArtifactType: application/vnd.unknown.artifact.v1 Digest: sha256:9b4f2d43b62534423894d077f0ff0e9e496540ec8b52b568ea8b757fc9e7996b Verification Pull the image with oras . For example: USD oras pull quay.io/<organization_name>/<repository>/<image_name>:<tag> Inspect the changes using oras . For example: USD oras manifest fetch quay.io/<organization_name>/<repository>/<image_name>:<tag> Example output {"schemaVersion":2,"mediaType":"application/vnd.oci.image.manifest.v1+json","artifactType":"application/vnd.unknown.artifact.v1","config":{"mediaType":"application/vnd.oci.empty.v1+json","digest":"sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a","size":2,"data":"e30="},"layers":[{"mediaType":"application/vnd.oci.empty.v1+json","digest":"sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a","size":2,"data":"e30="}],"annotations":{"org.opencontainers.image.created":"2024-07-11T15:22:42Z","version ":" 8.11"}} 16.4. Attaching referrers to an image tag The following procedure shows you how to attach referrers to an image tag using different schemas supported by the OCI distribution spec 1.1 using the oras CLI. This is useful for attaching and managing additional metadata like referrers to container images. Prerequisites You have downloaded the oras CLI. For more information, see Installation . You have access to an OCI media artifact. Procedure Tag an OCI media artifact by entering the following command: USD podman tag <myartifact_image> <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag> Push the artifact to your Red Hat Quay registry. 
For example: USD podman push <myartifact_image> <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag> Enter the following command to attach a manifest using the OCI 1.1 referrers API schema with oras : USD oras attach --artifact-type <MIME_type> --distribution-spec v1.1-referrers-api <myartifact_image> \ <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag> \ <example_file>.txt Example output [✓] Exists hi.txt 3/3 B 100.00% 0s └─ sha256:98ea6e4f216f2fb4b69fff9b3a44842c38686ca685f3f55dc48c5d3fb1107be4 [✓] Exists application/vnd.oci.empty.v1+json 2/2 B 100.00% 0s └─ sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a [✓] Uploaded application/vnd.oci.image.manifest.v1+json 723/723 B 100.00% 677ms └─ sha256:31c38e6adcc59a3cfbd2ef971792aaf124cbde8118e25133e9f9c9c4cd1d00c6 Attached to [registry] quay.io/testorg3/myartifact-image@sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da Digest: sha256:31c38e6adcc59a3cfbd2ef971792aaf124cbde8118e25133e9f9c9c4cd1d00c6 Enter the following command to attach a manifest using the OCI 1.1 referrers tag schema: USD oras attach --artifact-type <MIME_type> --distribution-spec v1.1-referrers-tag \ <myartifact_image> <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag> \ <example_file>.txt Example output [✓] Exists hi.txt 3/3 B 100.00% 0s └─ sha256:98ea6e4f216f2fb4b69fff9b3a44842c38686ca685f3f55dc48c5d3fb1107be4 [✓] Exists application/vnd.oci.empty.v1+json 2/2 B 100.00% 0s └─ sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a [✓] Uploaded application/vnd.oci.image.manifest.v1+json 723/723 B 100.00% 465ms └─ sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383 Attached to [registry] quay.io/testorg3/myartifact-image@sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da Digest: sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383 Enter the following command to discover referrers of the artifact using the tag schema: USD oras discover --insecure --distribution-spec v1.1-referrers-tag \ <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag> Example output quay.io/testorg3/myartifact-image@sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da └── doc/example └── sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383 Enter the following command to discover referrers of the artifact using the API schema: USD oras discover --distribution-spec v1.1-referrers-api \ <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag> Example output Discovered 3 artifacts referencing v1.0 Digest: sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da Artifact Type Digest sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383 sha256:22b7e167793808f83db66f7d35fbe0088b34560f34f8ead36019a4cc48fd346b sha256:bb2b7e7c3a58fd9ba60349473b3a746f9fe78995a88cb329fc2fd1fd892ea4e4 Optional. You can also discover referrers by using the /v2/<organization_name>/<repository_name>/referrers/<sha256_digest> endpoint. For this to work, you must generate a v2 API token and set FEATURE_REFERRERS_API: true in your config.yaml file. Update your config.yaml file to include the FEATURE_REFERRERS_API field. For example: # ... FEATURE_REFERRERS_API: true # ...
Enter the following command to Base64 encode your credentials: USD echo -n '<username>:<password>' | base64 Example output abcdeWFkbWluOjE5ODlraWROZXQxIQ== Enter the following command to request a v2 API token, using the base64-encoded credentials and modifying the URL endpoint to point to your Red Hat Quay server: USD curl --location '<quay-server.example.com>/v2/auth?service=<quay-server.example.com>&scope=repository:quay/listocireferrs:pull,push' --header 'Authorization: Basic <base64_username:password_encode_token>' -k | jq Example output { "token": "<example_token_output>..." } Enter the following command, using the v2 API token, to list OCI referrers of a manifest under a repository: USD curl -X GET 'https://<quay-server.example.com>/v2/<organization_name>/<repository_name>/referrers/sha256:0de63ba2d98ab328218a1b6373def69ec0d0e7535866f50589111285f2bf3fb8' --header 'Authorization: Bearer <v2_bearer_token>' -k | jq Example output { "schemaVersion": 2, "mediaType": "application/vnd.oci.image.index.v1+json", "manifests": [ { "mediaType": "application/vnd.oci.image.manifest.v1+json", "digest": "sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383", "size": 793 } ] }
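For reference, the token request and the referrers query can be combined into a short shell sketch. This is a minimal, illustrative script rather than an official procedure: the QUAY_HOST, QUAY_USER, QUAY_PASS, ORG, REPO, and DIGEST variables are hypothetical placeholders that you must set, it assumes FEATURE_REFERRERS_API: true is already configured, and the token scope follows the repository:<name>:pull pattern shown in the example above.
# Minimal sketch: list OCI referrers for a manifest digest by using the v2 API.
QUAY_HOST=quay-server.example.com    # your Red Hat Quay server
ORG=<organization_name>
REPO=<repository_name>
DIGEST=sha256:<manifest_digest>
# Base64 encode the credentials for Basic authentication.
BASIC=$(echo -n "${QUAY_USER}:${QUAY_PASS}" | base64)
# Request a v2 API token scoped to the repository.
TOKEN=$(curl -sk "https://${QUAY_HOST}/v2/auth?service=${QUAY_HOST}&scope=repository:${ORG}/${REPO}:pull" \
  --header "Authorization: Basic ${BASIC}" | jq -r .token)
# List the referrers of the manifest and pretty-print the resulting image index.
curl -sk "https://${QUAY_HOST}/v2/${ORG}/${REPO}/referrers/${DIGEST}" \
  --header "Authorization: Bearer ${TOKEN}" | jq .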
[ "sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust extract", "helm repo add redhat-cop https://redhat-cop.github.io/helm-charts", "helm repo update", "helm pull redhat-cop/etherpad --version=0.0.4 --untar", "helm package ./etherpad", "Successfully packaged chart and saved it to: /home/user/linux-amd64/etherpad-0.0.4.tgz", "helm registry login quay370.apps.quayperf370.perfscale.devcluster.openshift.com", "helm push etherpad-0.0.4.tgz oci://quay370.apps.quayperf370.perfscale.devcluster.openshift.com", "Pushed: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:a6667ff2a0e2bd7aa4813db9ac854b5124ff1c458d170b70c2d2375325f2451b", "rm -rf etherpad-0.0.4.tgz", "helm pull oci://quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad --version 0.0.4", "Pulled: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:4f627399685880daf30cf77b6026dc129034d68c7676c7e07020b70cf7130902", "oras push --annotation \"quay.expires-after=2d\" \\ 1 --annotation \"expiration = 2d\" \\ 2 quay.io/<organization_name>/<repository>/<image_name>:<tag>", "[✓] Exists application/vnd.oci.empty.v1+json 2/2 B 100.00% 0s └─ sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a [✓] Uploaded application/vnd.oci.image.manifest.v1+json 561/561 B 100.00% 511ms └─ sha256:9b4f2d43b62534423894d077f0ff0e9e496540ec8b52b568ea8b757fc9e7996b Pushed [registry] quay.io/stevsmit/testorg3/oci-image:v1 ArtifactType: application/vnd.unknown.artifact.v1 Digest: sha256:9b4f2d43b62534423894d077f0ff0e9e496540ec8b52b568ea8b757fc9e7996b", "oras pull quay.io/<organization_name>/<repository>/<image_name>:<tag>", "oras manifest fetch quay.io/<organization_name>/<repository>/<image_name>:<tag>", "{\"schemaVersion\":2,\"mediaType\":\"application/vnd.oci.image.manifest.v1+json\",\"artifactType\":\"application/vnd.unknown.artifact.v1\",\"config\":{\"mediaType\":\"application/vnd.oci.empty.v1+json\",\"digest\":\"sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a\",\"size\":2,\"data\":\"e30=\"},\"layers\":[{\"mediaType\":\"application/vnd.oci.empty.v1+json\",\"digest\":\"sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a\",\"size\":2,\"data\":\"e30=\"}],\"annotations\":{\"org.opencontainers.image.created\":\"2024-07-11T15:22:42Z\",\"version \":\" 8.11\"}}", "podman tag <myartifact_image> <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag>", "podman push <myartifact_image> <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag>", "oras attach --artifact-type <MIME_type> --distribution-spec v1.1-referrers-api <myartifact_image> <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag> <example_file>.txt", "-spec v1.1-referrers-api quay.io/testorg3/myartifact-image:v1.0 hi.txt [✓] Exists hi.txt 3/3 B 100.00% 0s └─ sha256:98ea6e4f216f2fb4b69fff9b3a44842c38686ca685f3f55dc48c5d3fb1107be4 [✓] Exists application/vnd.oci.empty.v1+json 2/2 B 100.00% 0s └─ sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a [✓] Uploaded application/vnd.oci.image.manifest.v1+json 723/723 B 100.00% 677ms └─ sha256:31c38e6adcc59a3cfbd2ef971792aaf124cbde8118e25133e9f9c9c4cd1d00c6 Attached to [registry] quay.io/testorg3/myartifact-image@sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da Digest: sha256:31c38e6adcc59a3cfbd2ef971792aaf124cbde8118e25133e9f9c9c4cd1d00c6", "oras attach --artifact-type <MIME_type> 
--distribution-spec v1.1-referrers-tag <myartifact_image> <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag> <example_file>.txt", "[✓] Exists hi.txt 3/3 B 100.00% 0s └─ sha256:98ea6e4f216f2fb4b69fff9b3a44842c38686ca685f3f55dc48c5d3fb1107be4 [✓] Exists application/vnd.oci.empty.v1+json 2/2 B 100.00% 0s └─ sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a [✓] Uploaded application/vnd.oci.image.manifest.v1+json 723/723 B 100.00% 465ms └─ sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383 Attached to [registry] quay.io/testorg3/myartifact-image@sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da Digest: sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383", "oras discover --insecure --distribution-spec v1.1-referrers-tag <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag>", "quay.io/testorg3/myartifact-image@sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da └── doc/example └── sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383", "oras discover --distribution-spec v1.1-referrers-api <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag>", "Discovered 3 artifacts referencing v1.0 Digest: sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da Artifact Type Digest sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383 sha256:22b7e167793808f83db66f7d35fbe0088b34560f34f8ead36019a4cc48fd346b sha256:bb2b7e7c3a58fd9ba60349473b3a746f9fe78995a88cb329fc2fd1fd892ea4e4", "FEATURE_REFERRERS_API: true", "echo -n '<username>:<password>' | base64", "abcdeWFkbWluOjE5ODlraWROZXQxIQ==", "curl --location '<quay-server.example.com>/v2/auth?service=<quay-server.example.com>&scope=repository:quay/listocireferrs:pull,push' --header 'Authorization: Basic <base64_username:password_encode_token>' -k | jq", "{ \"token\": \"<example_token_output>...\" }", "GET https://<quay-server.example.com>/v2/<organization_name>/<repository_name>/referrers/sha256:0de63ba2d98ab328218a1b6373def69ec0d0e7535866f50589111285f2bf3fb8 --header 'Authorization: Bearer <v2_bearer_token> -k | jq", "{ \"schemaVersion\": 2, \"mediaType\": \"application/vnd.oci.image.index.v1+json\", \"manifests\": [ { \"mediaType\": \"application/vnd.oci.image.manifest.v1+json\", \"digest\": \"sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383\", \"size\": 793 }, ] }" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/use_red_hat_quay/oci-intro
Chapter 5. ConfigMap [v1]
Chapter 5. ConfigMap [v1] Description ConfigMap holds configuration data for pods to consume. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources binaryData object (string) BinaryData contains the binary data. Each key must consist of alphanumeric characters, '-', '_' or '.'. BinaryData can contain byte sequences that are not in the UTF-8 range. The keys stored in BinaryData must not overlap with the ones in the Data field, this is enforced during validation process. Using this field will require 1.10+ apiserver and kubelet. data object (string) Data contains the configuration data. Each key must consist of alphanumeric characters, '-', '_' or '.'. Values with non-UTF-8 byte sequences must use the BinaryData field. The keys stored in Data must not overlap with the keys in the BinaryData field, this is enforced during validation process. immutable boolean Immutable, if set to true, ensures that data stored in the ConfigMap cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. Defaulted to nil. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 5.2. API endpoints The following API endpoints are available: /api/v1/configmaps GET : list or watch objects of kind ConfigMap /api/v1/watch/configmaps GET : watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/configmaps DELETE : delete collection of ConfigMap GET : list or watch objects of kind ConfigMap POST : create a ConfigMap /api/v1/watch/namespaces/{namespace}/configmaps GET : watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/configmaps/{name} DELETE : delete a ConfigMap GET : read the specified ConfigMap PATCH : partially update the specified ConfigMap PUT : replace the specified ConfigMap /api/v1/watch/namespaces/{namespace}/configmaps/{name} GET : watch changes to an object of kind ConfigMap. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /api/v1/configmaps HTTP method GET Description list or watch objects of kind ConfigMap Table 5.1. HTTP responses HTTP code Reponse body 200 - OK ConfigMapList schema 401 - Unauthorized Empty 5.2.2. /api/v1/watch/configmaps HTTP method GET Description watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. Table 5.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /api/v1/namespaces/{namespace}/configmaps HTTP method DELETE Description delete collection of ConfigMap Table 5.3. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ConfigMap Table 5.5. HTTP responses HTTP code Reponse body 200 - OK ConfigMapList schema 401 - Unauthorized Empty HTTP method POST Description create a ConfigMap Table 5.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.7. Body parameters Parameter Type Description body ConfigMap schema Table 5.8. HTTP responses HTTP code Reponse body 200 - OK ConfigMap schema 201 - Created ConfigMap schema 202 - Accepted ConfigMap schema 401 - Unauthorized Empty 5.2.4. /api/v1/watch/namespaces/{namespace}/configmaps HTTP method GET Description watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. Table 5.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /api/v1/namespaces/{namespace}/configmaps/{name} Table 5.10. Global path parameters Parameter Type Description name string name of the ConfigMap HTTP method DELETE Description delete a ConfigMap Table 5.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConfigMap Table 5.13. HTTP responses HTTP code Reponse body 200 - OK ConfigMap schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConfigMap Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.15. HTTP responses HTTP code Reponse body 200 - OK ConfigMap schema 201 - Created ConfigMap schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConfigMap Table 5.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.17. Body parameters Parameter Type Description body ConfigMap schema Table 5.18. HTTP responses HTTP code Reponse body 200 - OK ConfigMap schema 201 - Created ConfigMap schema 401 - Unauthorized Empty 5.2.6. /api/v1/watch/namespaces/{namespace}/configmaps/{name} Table 5.19. Global path parameters Parameter Type Description name string name of the ConfigMap HTTP method GET Description watch changes to an object of kind ConfigMap. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
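For illustration, the following is a minimal sketch that exercises the data , binaryData , and immutable fields described in the specification above by applying a manifest with the oc CLI. The resource name, namespace, keys, and values are hypothetical, and the sketch assumes you are logged in to a cluster with permission to create ConfigMap objects in the target namespace.
# Minimal sketch: create a ConfigMap that uses data, binaryData, and immutable.
oc apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
  namespace: default
data:
  app.properties: |
    log.level=info
    feature.flag=true
binaryData:
  logo.png: iVBORw0KGgo=    # base64-encoded bytes; keys must not overlap with those in data
immutable: true             # once set, data can no longer be updated, only object metadata
EOF
# Read the object back (a GET against the /api/v1/namespaces/{namespace}/configmaps/{name} endpoint).
oc get configmap example-config -n default -o yaml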
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/metadata_apis/configmap-v1
Chapter 4. Knative CLI
Chapter 4. Knative CLI 4.1. Installing the Knative CLI The Knative ( kn ) CLI does not have its own login mechanism. To log in to the cluster, you must install the OpenShift CLI ( oc ) and use the oc login command. Installation options for the CLIs may vary depending on your operating system. For more information on installing the oc CLI for your operating system and logging in with oc , see the OpenShift CLI getting started documentation. OpenShift Serverless cannot be installed using the Knative ( kn ) CLI. A cluster administrator must install the OpenShift Serverless Operator and set up the Knative components, as described in the Installing the OpenShift Serverless Operator documentation. Important If you try to use an older version of the Knative ( kn ) CLI with a newer OpenShift Serverless release, the API is not found and an error occurs. For example, if you use the 1.23.0 release of the Knative ( kn ) CLI, which uses version 1.2, with the 1.24.0 OpenShift Serverless release, which uses the 1.3 versions of the Knative Serving and Knative Eventing APIs, the CLI does not work because it continues to look for the outdated 1.2 API versions. Ensure that you are using the latest Knative ( kn ) CLI version for your OpenShift Serverless release to avoid issues. 4.1.1. Installing the Knative CLI using the OpenShift Container Platform web console Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to install the Knative ( kn ) CLI. After the OpenShift Serverless Operator is installed, you will see a link to download the Knative ( kn ) CLI for Linux (amd64, s390x, ppc64le), macOS, or Windows from the Command Line Tools page in the OpenShift Container Platform web console. Prerequisites You have logged in to the OpenShift Container Platform web console. The OpenShift Serverless Operator and Knative Serving are installed on your OpenShift Container Platform cluster. Important If libc is not available, you might see the following error when you run CLI commands: USD kn: No such file or directory If you want to use the verification steps for this procedure, you must install the OpenShift ( oc ) CLI. Procedure Download the Knative ( kn ) CLI from the Command Line Tools page. You can access the Command Line Tools page by clicking the icon in the top right corner of the web console and selecting Command Line Tools in the list. Unpack the archive: USD tar -xf <file> Move the kn binary to a directory on your PATH . To check your PATH , run: USD echo USDPATH Verification Run the following commands to check that the correct Knative CLI resources and route have been created: USD oc get ConsoleCLIDownload Example output NAME DISPLAY NAME AGE kn kn - OpenShift Serverless Command Line Interface (CLI) 2022-09-20T08:41:18Z oc-cli-downloads oc - OpenShift Command Line Interface (CLI) 2022-09-20T08:00:20Z USD oc get route -n openshift-serverless Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD kn kn-openshift-serverless.apps.example.com knative-openshift-metrics-3 http-cli edge/Redirect None 4.1.2. Installing the Knative CLI for Linux by using an RPM package manager For Red Hat Enterprise Linux (RHEL), you can install the Knative ( kn ) CLI as an RPM by using a package manager, such as yum or dnf . This allows the Knative CLI version to be automatically managed by the system. For example, using a command like dnf upgrade upgrades all packages, including kn , if a new version is available. 
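For example, once the client has been installed from the enabled Serverless repository, the package manager can keep it current. The following is a minimal sketch that assumes the openshift-serverless-clients package, used in the installation step below, is already installed:
# Minimal sketch: upgrade only the Knative CLI package when a newer build is available.
sudo dnf upgrade openshift-serverless-clients
# Confirm the installed client version.
kn version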
Prerequisites You have an active OpenShift Container Platform subscription on your Red Hat account. Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh Attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> 1 1 Pool ID for an active OpenShift Container Platform subscription Enable the repositories required by the Knative ( kn ) CLI: Linux (x86_64, amd64) # subscription-manager repos --enable="openshift-serverless-1-for-rhel-8-x86_64-rpms" Linux on IBM Z and LinuxONE (s390x) # subscription-manager repos --enable="openshift-serverless-1-for-rhel-8-s390x-rpms" Linux on IBM Power (ppc64le) # subscription-manager repos --enable="openshift-serverless-1-for-rhel-8-ppc64le-rpms" Install the Knative ( kn ) CLI as an RPM by using a package manager: Example yum command # yum install openshift-serverless-clients 4.1.3. Installing the Knative CLI for Linux If you are using a Linux distribution that does not have RPM or another package manager installed, you can install the Knative ( kn ) CLI as a binary file. To do this, you must download and unpack a tar.gz archive and add the binary to a directory on your PATH . Prerequisites If you are not using RHEL or Fedora, ensure that libc is installed in a directory on your library path. Important If libc is not available, you might see the following error when you run CLI commands: USD kn: No such file or directory Procedure Download the relevant Knative ( kn ) CLI tar.gz archive: Linux (x86_64, amd64) Linux on IBM Z and LinuxONE (s390x) Linux on IBM Power (ppc64le) Unpack the archive: USD tar -xf <filename> Move the kn binary to a directory on your PATH . To check your PATH , run: USD echo USDPATH 4.1.4. Installing the Knative CLI for macOS If you are using macOS, you can install the Knative ( kn ) CLI as a binary file. To do this, you must download and unpack a tar.gz archive and add the binary to a directory on your PATH . Procedure Download the Knative ( kn ) CLI tar.gz archive . Unpack and extract the archive. Move the kn binary to a directory on your PATH . To check your PATH , open a terminal window and run: USD echo USDPATH 4.1.5. Installing the Knative CLI for Windows If you are using Windows, you can install the Knative ( kn ) CLI as a binary file. To do this, you must download and unpack a ZIP archive and add the binary to a directory on your PATH . Procedure Download the Knative ( kn ) CLI ZIP archive . Extract the archive with a ZIP program. Move the kn binary to a directory on your PATH . To check your PATH , open the command prompt and run the command: C:\> path 4.2. Configuring the Knative CLI You can customize your Knative ( kn ) CLI setup by creating a config.yaml configuration file. You can provide this configuration by using the --config flag, otherwise the configuration is picked up from a default location. The default configuration location conforms to the XDG Base Directory Specification , and is different for UNIX systems and Windows systems. For UNIX systems: If the XDG_CONFIG_HOME environment variable is set, the default configuration location that the Knative ( kn ) CLI looks for is USDXDG_CONFIG_HOME/kn . If the XDG_CONFIG_HOME environment variable is not set, the Knative ( kn ) CLI looks for the configuration in the home directory of the user at USDHOME/.config/kn/config.yaml . For Windows systems, the default Knative ( kn ) CLI configuration location is %APPDATA%\kn . 
Example configuration file plugins: path-lookup: true 1 directory: ~/.config/kn/plugins 2 eventing: sink-mappings: 3 - prefix: svc 4 group: core 5 version: v1 6 resource: services 7 1 Specifies whether the Knative ( kn ) CLI should look for plug-ins in the PATH environment variable. This is a boolean configuration option. The default value is false . 2 Specifies the directory where the Knative ( kn ) CLI looks for plug-ins. The default path depends on the operating system, as described previously. This can be any directory that is visible to the user. 3 The sink-mappings spec defines the Kubernetes addressable resource that is used when you use the --sink flag with a Knative ( kn ) CLI command. 4 The prefix you want to use to describe your sink. svc for a service, channel , and broker are predefined prefixes for the Knative ( kn ) CLI. 5 The API group of the Kubernetes resource. 6 The version of the Kubernetes resource. 7 The plural name of the Kubernetes resource type. For example, services or brokers . 4.3. Knative CLI plug-ins The Knative ( kn ) CLI supports the use of plug-ins, which enable you to extend the functionality of your kn installation by adding custom commands and other shared commands that are not part of the core distribution. Knative ( kn ) CLI plug-ins are used in the same way as the main kn functionality. Currently, Red Hat supports the kn-source-kafka plug-in and the kn-event plug-in. Important The kn-event plug-in is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 4.3.1. Building events by using the kn-event plug-in You can use the builder-like interface of the kn event build command to build an event. You can then send that event at a later time or use it in another context. Prerequisites You have installed the Knative ( kn ) CLI. Procedure Build an event: USD kn event build --field <field-name>=<value> --type <type-name> --id <id> --output <format> where: The --field flag adds data to the event as a field-value pair. You can use it multiple times. The --type flag enables you to specify a string that designates the type of the event. The --id flag specifies the ID of the event. You can use the json or yaml arguments with the --output flag to change the output format of the event. All of these flags are optional. 
Building a simple event USD kn event build -o yaml Resultant event in the YAML format data: {} datacontenttype: application/json id: 81a402a2-9c29-4c27-b8ed-246a253c9e58 source: kn-event/v0.4.0 specversion: "1.0" time: "2021-10-15T10:42:57.713226203Z" type: dev.knative.cli.plugin.event.generic Building a sample transaction event USD kn event build \ --field operation.type=local-wire-transfer \ --field operation.amount=2345.40 \ --field operation.from=87656231 \ --field operation.to=2344121 \ --field automated=true \ --field signature='FGzCPLvYWdEgsdpb3qXkaVp7Da0=' \ --type org.example.bank.bar \ --id USD(head -c 10 < /dev/urandom | base64 -w 0) \ --output json Resultant event in the JSON format { "specversion": "1.0", "id": "RjtL8UH66X+UJg==", "source": "kn-event/v0.4.0", "type": "org.example.bank.bar", "datacontenttype": "application/json", "time": "2021-10-15T10:43:23.113187943Z", "data": { "automated": true, "operation": { "amount": "2345.40", "from": 87656231, "to": 2344121, "type": "local-wire-transfer" }, "signature": "FGzCPLvYWdEgsdpb3qXkaVp7Da0=" } } 4.3.2. Sending events by using the kn-event plug-in You can use the kn event send command to send an event. The events can be sent either to publicly available addresses or to addressable resources inside a cluster, such as Kubernetes services, as well as Knative services, brokers, and channels. The command uses the same builder-like interface as the kn event build command. Prerequisites You have installed the Knative ( kn ) CLI. Procedure Send an event: USD kn event send --field <field-name>=<value> --type <type-name> --id <id> --to-url <url> --to <cluster-resource> --namespace <namespace> where: The --field flag adds data to the event as a field-value pair. You can use it multiple times. The --type flag enables you to specify a string that designates the type of the event. The --id flag specifies the ID of the event. If you are sending the event to a publicly accessible destination, specify the URL using the --to-url flag. If you are sending the event to an in-cluster Kubernetes resource, specify the destination using the --to flag. Specify the Kubernetes resource using the <Kind>:<ApiVersion>:<name> format. The --namespace flag specifies the namespace. If omitted, the namespace is taken from the current context. All of these flags are optional, except for the destination specification, for which you need to use either --to-url or --to . The following example shows sending an event to a URL: Example command USD kn event send \ --field player.id=6354aa60-ddb1-452e-8c13-24893667de20 \ --field player.game=2345 \ --field points=456 \ --type org.example.gaming.foo \ --to-url http://ce-api.foo.example.com/ The following example shows sending an event to an in-cluster resource: Example command USD kn event send \ --type org.example.kn.ping \ --id USD(uuidgen) \ --field event.type=test \ --field event.data=98765 \ --to Service:serving.knative.dev/v1:event-display 4.4. Knative Serving CLI commands You can use the following Knative ( kn ) CLI commands to complete Knative Serving tasks on the cluster. 4.4.1. kn service commands You can use the following commands to create and manage Knative services. 4.4.1.1. Creating serverless applications by using the Knative CLI Using the Knative ( kn ) CLI to create serverless applications provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service create command to create a basic serverless application. 
Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Knative service: USD kn service create <service-name> --image <image> --tag <tag-value> Where: --image is the URI of the image for the application. --tag is an optional flag that can be used to add a tag to the initial revision that is created with the service. Example command USD kn service create event-display \ --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest Example output Creating service 'event-display' in namespace 'default': 0.271s The Route is still working to reflect the latest desired specification. 0.580s Configuration "event-display" is waiting for a Revision to become ready. 3.857s ... 3.861s Ingress has not yet been reconciled. 4.270s Ready to serve. Service 'event-display' created with latest revision 'event-display-bxshg-1' and URL: http://event-display-default.apps-crc.testing 4.4.1.2. Updating serverless applications by using the Knative CLI You can use the kn service update command for interactive sessions on the command line as you build up a service incrementally. In contrast to the kn service apply command, when using the kn service update command you only have to specify the changes that you want to update, rather than the full configuration for the Knative service. Example commands Update a service by adding a new environment variable: USD kn service update <service_name> --env <key>=<value> Update a service by adding a new port: USD kn service update <service_name> --port 80 Update a service by adding new request and limit parameters: USD kn service update <service_name> --request cpu=500m --limit memory=1024Mi --limit cpu=1000m Assign the latest tag to a revision: USD kn service update <service_name> --tag <revision_name>=latest Update a tag from testing to staging for the latest READY revision of a service: USD kn service update <service_name> --untag testing --tag @latest=staging Add the test tag to a revision that receives 10% of traffic, and send the rest of the traffic to the latest READY revision of a service: USD kn service update <service_name> --tag <revision_name>=test --traffic test=10,@latest=90 4.4.1.3. Applying service declarations You can declaratively configure a Knative service by using the kn service apply command. If the service does not exist it is created, otherwise the existing service is updated with the options that have been changed. The kn service apply command is especially useful for shell scripts or in a continuous integration pipeline, where users typically want to fully specify the state of the service in a single command to declare the target state. When using kn service apply you must provide the full configuration for the Knative service. This is different from the kn service update command, which only requires you to specify in the command the options that you want to update. Example commands Create a service: USD kn service apply <service_name> --image <image> Add an environment variable to a service: USD kn service apply <service_name> --image <image> --env <key>=<value> Read the service declaration from a JSON or YAML file: USD kn service apply <service_name> -f <filename> 4.4.1.4. 
Describing serverless applications by using the Knative CLI You can describe a Knative service by using the kn service describe command. Example commands Describe a service: USD kn service describe --verbose <service_name> The --verbose flag is optional but can be included to provide a more detailed description. The difference between a regular and verbose output is shown in the following examples: Example output without --verbose flag Name: hello Namespace: default Age: 2m URL: http://hello-default.apps.ocp.example.com Revisions: 100% @latest (hello-00001) [1] (2m) Image: docker.io/openshift/hello-openshift (pinned to aaea76) Conditions: OK TYPE AGE REASON ++ Ready 1m ++ ConfigurationsReady 1m ++ RoutesReady 1m Example output with --verbose flag Name: hello Namespace: default Annotations: serving.knative.dev/creator=system:admin serving.knative.dev/lastModifier=system:admin Age: 3m URL: http://hello-default.apps.ocp.example.com Cluster: http://hello.default.svc.cluster.local Revisions: 100% @latest (hello-00001) [1] (3m) Image: docker.io/openshift/hello-openshift (pinned to aaea76) Env: RESPONSE=Hello Serverless! Conditions: OK TYPE AGE REASON ++ Ready 3m ++ ConfigurationsReady 3m ++ RoutesReady 3m Describe a service in YAML format: USD kn service describe <service_name> -o yaml Describe a service in JSON format: USD kn service describe <service_name> -o json Print the service URL only: USD kn service describe <service_name> -o url 4.4.2. About the Knative CLI offline mode When you execute kn service commands, the changes immediately propagate to the cluster. However, as an alternative, you can execute kn service commands in offline mode. When you create a service in offline mode, no changes happen on the cluster, and instead the service descriptor file is created on your local machine. Important The offline mode of the Knative CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . After the descriptor file is created, you can manually modify it and track it in a version control system. You can also propagate changes to the cluster by using the kn service create -f , kn service apply -f , or oc apply -f commands on the descriptor files. The offline mode has several uses: You can manually modify the descriptor file before using it to make changes on the cluster. You can locally track the descriptor file of a service in a version control system. This enables you to reuse the descriptor file in places other than the target cluster, for example in continuous integration (CI) pipelines, development environments, or demos. You can examine the created descriptor files to learn about Knative services. In particular, you can see how the resulting service is influenced by the different arguments passed to the kn command. The offline mode has its advantages: it is fast, and does not require a connection to the cluster. However, offline mode lacks server-side validation. Consequently, you cannot, for example, verify that the service name is unique or that the specified image can be pulled. 4.4.2.1. 
Creating a service using offline mode You can execute kn service commands in offline mode, so that no changes happen on the cluster, and instead the service descriptor file is created on your local machine. After the descriptor file is created, you can modify the file before propagating changes to the cluster. Important The offline mode of the Knative CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . Prerequisites OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the Knative ( kn ) CLI. Procedure In offline mode, create a local Knative service descriptor file: USD kn service create event-display \ --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest \ --target ./ \ --namespace test Example output Service 'event-display' created in namespace 'test'. The --target ./ flag enables offline mode and specifies ./ as the directory for storing the new directory tree. If you do not specify an existing directory, but use a filename, such as --target my-service.yaml , then no directory tree is created. Instead, only the service descriptor file my-service.yaml is created in the current directory. The filename can have the .yaml , .yml , or .json extension. Choosing .json creates the service descriptor file in the JSON format. The --namespace test option places the new service in the test namespace. If you do not use --namespace , and you are logged in to an OpenShift cluster, the descriptor file is created in the current namespace. Otherwise, the descriptor file is created in the default namespace. Examine the created directory structure: USD tree ./ Example output ./ └── test └── ksvc └── event-display.yaml 2 directories, 1 file The current ./ directory specified with --target contains the new test/ directory that is named after the specified namespace. The test/ directory contains the ksvc directory, named after the resource type. The ksvc directory contains the descriptor file event-display.yaml , named according to the specified service name. Examine the generated service descriptor file: USD cat test/ksvc/event-display.yaml Example output apiVersion: serving.knative.dev/v1 kind: Service metadata: creationTimestamp: null name: event-display namespace: test spec: template: metadata: annotations: client.knative.dev/user-image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest creationTimestamp: null spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest name: "" resources: {} status: {} List information about the new service: USD kn service describe event-display --target ./ --namespace test Example output Name: event-display Namespace: test Age: URL: Revisions: Conditions: OK TYPE AGE REASON The --target ./ option specifies the root directory for the directory structure containing namespace subdirectories. Alternatively, you can directly specify a YAML or JSON filename with the --target option. The accepted file extensions are .yaml , .yml , and .json . 
The --namespace option specifies the namespace, which communicates to kn the subdirectory that contains the necessary service descriptor file. If you do not use --namespace , and you are logged in to an OpenShift cluster, kn searches for the service in the subdirectory that is named after the current namespace. Otherwise, kn searches in the default/ subdirectory. Use the service descriptor file to create the service on the cluster: USD kn service create -f test/ksvc/event-display.yaml Example output Creating service 'event-display' in namespace 'test': 0.058s The Route is still working to reflect the latest desired specification. 0.098s ... 0.168s Configuration "event-display" is waiting for a Revision to become ready. 23.377s ... 23.419s Ingress has not yet been reconciled. 23.534s Waiting for load balancer to be ready 23.723s Ready to serve. Service 'event-display' created to latest revision 'event-display-00001' is available at URL: http://event-display-test.apps.example.com 4.4.3. kn container commands You can use the following commands to create and manage multiple containers in a Knative service spec. 4.4.3.1. Knative client multi-container support You can use the kn container add command to print YAML container spec to standard output. This command is useful for multi-container use cases because it can be used along with other standard kn flags to create definitions. The kn container add command accepts all container-related flags that are supported for use with the kn service create command. The kn container add command can also be chained by using UNIX pipes ( | ) to create multiple container definitions at once. Example commands Add a container from an image and print it to standard output: USD kn container add <container_name> --image <image_uri> Example command USD kn container add sidecar --image docker.io/example/sidecar Example output containers: - image: docker.io/example/sidecar name: sidecar resources: {} Chain two kn container add commands together, and then pass them to a kn service create command to create a Knative service with two containers: USD kn container add <first_container_name> --image <image_uri> | \ kn container add <second_container_name> --image <image_uri> | \ kn service create <service_name> --image <image_uri> --extra-containers - --extra-containers - specifies a special case where kn reads the pipe input instead of a YAML file. Example command USD kn container add sidecar --image docker.io/example/sidecar:first | \ kn container add second --image docker.io/example/sidecar:second | \ kn service create my-service --image docker.io/example/my-app:latest --extra-containers - The --extra-containers flag can also accept a path to a YAML file: USD kn service create <service_name> --image <image_uri> --extra-containers <filename> Example command USD kn service create my-service --image docker.io/example/my-app:latest --extra-containers my-extra-containers.yaml 4.4.4. kn domain commands You can use the following commands to create and manage domain mappings. 4.4.4.1. Creating a custom domain mapping by using the Knative CLI You can customize the domain for your Knative service by mapping a custom domain name that you own to a Knative service. You can use the Knative ( kn ) CLI to create a DomainMapping custom resource (CR) that maps to an Addressable target CR, such as a Knative service or a Knative route. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. 
You have created a Knative service or route, and control a custom domain that you want to map to that CR. Note Your custom domain must point to the DNS of the OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Map a domain to a CR in the current namespace: USD kn domain create <domain_mapping_name> --ref <target_name> Example command USD kn domain create example.com --ref example-service The --ref flag specifies an Addressable target CR for domain mapping. If a prefix is not provided when using the --ref flag, it is assumed that the target is a Knative service in the current namespace. Map a domain to a Knative service in a specified namespace: USD kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace> Example command USD kn domain create example.com --ref ksvc:example-service:example-namespace Map a domain to a Knative route: USD kn domain create <domain_mapping_name> --ref <kroute:route_name> Example command USD kn domain create example.com --ref kroute:example-route 4.4.4.2. Managing custom domain mappings by using the Knative CLI After you have created a DomainMapping custom resource (CR), you can list existing CRs, view information about an existing CR, update CRs, or delete CRs by using the Knative ( kn ) CLI. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have created at least one DomainMapping CR. You have installed the Knative ( kn ) CLI tool. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure List existing DomainMapping CRs: USD kn domain list -n <domain_mapping_namespace> View details of an existing DomainMapping CR: USD kn domain describe <domain_mapping_name> Update a DomainMapping CR to point to a new target: USD kn domain update --ref <target> Delete a DomainMapping CR: USD kn domain delete <domain_mapping_name> 4.5. Knative Eventing CLI commands You can use the following Knative ( kn ) CLI commands to complete Knative Eventing tasks on the cluster. 4.5.1. kn source commands You can use the following commands to list, create, and manage Knative event sources. 4.5.1.1. Listing available event source types by using the Knative CLI Using the Knative ( kn ) CLI provides a streamlined and intuitive user interface to view available event source types on your cluster. You can list event source types that can be created and used on your cluster by using the kn source list-types CLI command. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. You have installed the Knative ( kn ) CLI. Procedure List the available event source types in the terminal: USD kn source list-types Example output TYPE NAME DESCRIPTION ApiServerSource apiserversources.sources.knative.dev Watch and send Kubernetes API events to a sink PingSource pingsources.sources.knative.dev Periodically send ping events to a sink SinkBinding sinkbindings.sources.knative.dev Binding for connecting a PodSpecable to a sink Optional: You can also list the available event source types in YAML format: USD kn source list-types -o yaml 4.5.1.2. 
Knative CLI sink flag When you create an event source by using the Knative ( kn ) CLI, you can specify a sink where events are sent to from that resource by using the --sink flag. The sink can be any addressable or callable resource that can receive incoming events from other resources. The following example creates a sink binding that uses a service, http://event-display.svc.cluster.local , as the sink: Example command using the sink flag USD kn source binding create bind-heartbeat \ --namespace sinkbinding-example \ --subject "Job:batch/v1:app=heartbeat-cron" \ --sink http://event-display.svc.cluster.local \ 1 --ce-override "sink=bound" 1 svc in http://event-display.svc.cluster.local determines that the sink is a Knative service. Other default sink prefixes include channel , and broker . 4.5.1.3. Creating and managing container sources by using the Knative CLI You can use the kn source container commands to create and manage container sources by using the Knative ( kn ) CLI. Using the Knative CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly. Create a container source USD kn source container create <container_source_name> --image <image_uri> --sink <sink> Delete a container source USD kn source container delete <container_source_name> Describe a container source USD kn source container describe <container_source_name> List existing container sources USD kn source container list List existing container sources in YAML format USD kn source container list -o yaml Update a container source This command updates the image URI for an existing container source: USD kn source container update <container_source_name> --image <image_uri> 4.5.1.4. Creating an API server source by using the Knative CLI You can use the kn source apiserver create command to create an API server source by using the kn CLI. Using the kn CLI to create an API server source provides a more streamlined and intuitive user interface than modifying YAML files directly. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). You have installed the Knative ( kn ) CLI. Procedure If you want to re-use an existing service account, you can modify your existing ServiceAccount resource to include the required permissions instead of creating a new resource. Create a service account, role, and role binding for the event source as a YAML file: apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - "" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4 1 2 3 4 Change this namespace to the namespace that you have selected for installing the event source. Apply the YAML file: USD oc apply -f <filename> Create an API server source that has an event sink. 
In the following example, the sink is a broker: USD kn source apiserver create <event_source_name> --sink broker:<broker_name> --resource "event:v1" --service-account <service_account_name> --mode Resource To check that the API server source is set up correctly, create a Knative service that dumps incoming messages to its log: USD kn service create <service_name> --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest If you used a broker as an event sink, create a trigger to filter events from the default broker to the service: USD kn trigger create <trigger_name> --sink ksvc:<service_name> Create events by launching a pod in the default namespace: USD oc create deployment hello-node --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest Check that the controller is mapped correctly by inspecting the output generated by the following command: USD kn source apiserver describe <source_name> Example output Name: mysource Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 3m ServiceAccountName: events-sa Mode: Resource Sink: Name: default Namespace: default Kind: Broker (eventing.knative.dev/v1) Resources: Kind: event (v1) Controller: false Conditions: OK TYPE AGE REASON ++ Ready 3m ++ Deployed 3m ++ SinkProvided 3m ++ SufficientPermissions 3m ++ EventTypesProvided 3m Verification You can verify that the Kubernetes events were sent to Knative by looking at the message dumper function logs. Get the pods: USD oc get pods View the message dumper function logs for the pods: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json ... Data, { "apiVersion": "v1", "involvedObject": { "apiVersion": "v1", "fieldPath": "spec.containers{hello-node}", "kind": "Pod", "name": "hello-node", "namespace": "default", ..... }, "kind": "Event", "message": "Started container", "metadata": { "name": "hello-node.159d7608e3a3572c", "namespace": "default", .... }, "reason": "Started", ... } Deleting the API server source Delete the trigger: USD kn trigger delete <trigger_name> Delete the event source: USD kn source apiserver delete <source_name> Delete the service account, cluster role, and cluster binding: USD oc delete -f authentication.yaml 4.5.1.5. Creating a ping source by using the Knative CLI You can use the kn source ping create command to create a ping source by using the Knative ( kn ) CLI. Using the Knative CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly. Prerequisites The OpenShift Serverless Operator, Knative Serving and Knative Eventing are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Optional: If you want to use the verification steps for this procedure, install the OpenShift CLI ( oc ). 
Procedure To verify that the ping source is working, create a simple Knative service that dumps incoming messages to the service logs: USD kn service create event-display \ --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest For each set of ping events that you want to request, create a ping source in the same namespace as the event consumer: USD kn source ping create test-ping-source \ --schedule "*/2 * * * *" \ --data '{"message": "Hello world!"}' \ --sink ksvc:event-display Check that the controller is mapped correctly by entering the following command and inspecting the output: USD kn source ping describe test-ping-source Example output Name: test-ping-source Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 15s Schedule: */2 * * * * Data: {"message": "Hello world!"} Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 8s ++ Deployed 8s ++ SinkProvided 15s ++ ValidSchedule 15s ++ EventTypeProvided 15s ++ ResourcesCorrect 15s Verification You can verify that the Kubernetes events were sent to the Knative event sink by looking at the logs of the sink pod. By default, Knative services terminate their pods if no traffic is received within a 60 second period. The example shown in this guide creates a ping source that sends a message every 2 minutes, so each message should be observed in a newly created pod. Watch for new pods created: USD watch oc get pods Cancel watching the pods using Ctrl+C, then look at the logs of the created pod: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 99e4f4f6-08ff-4bff-acf1-47f61ded68c9 time: 2020-04-07T16:16:00.000601161Z datacontenttype: application/json Data, { "message": "Hello world!" } Deleting the ping source Delete the ping source: USD kn delete pingsources.sources.knative.dev <ping_source_name> 4.5.1.6. Creating a Kafka event source by using the Knative CLI You can use the kn source kafka create command to create a Kafka source by using the Knative ( kn ) CLI. Using the Knative CLI to create event sources provides a more streamlined and intuitive user interface than modifying YAML files directly. Prerequisites The OpenShift Serverless Operator, Knative Eventing, Knative Serving, and the KnativeKafka custom resource (CR) are installed on your cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have access to a Red Hat AMQ Streams (Kafka) cluster that produces the Kafka messages you want to import. You have installed the Knative ( kn ) CLI. Optional: You have installed the OpenShift CLI ( oc ) if you want to use the verification steps in this procedure. 
Procedure To verify that the Kafka event source is working, create a Knative service that dumps incoming events into the service logs: USD kn service create event-display \ --image quay.io/openshift-knative/knative-eventing-sources-event-display Create a KafkaSource CR: USD kn source kafka create <kafka_source_name> \ --servers <cluster_kafka_bootstrap>.kafka.svc:9092 \ --topics <topic_name> --consumergroup my-consumer-group \ --sink event-display Note Replace the placeholder values in this command with values for your source name, bootstrap servers, and topics. The --servers , --topics , and --consumergroup options specify the connection parameters to the Kafka cluster. The --consumergroup option is optional. Optional: View details about the KafkaSource CR you created: USD kn source kafka describe <kafka_source_name> Example output Name: example-kafka-source Namespace: kafka Age: 1h BootstrapServers: example-cluster-kafka-bootstrap.kafka.svc:9092 Topics: example-topic ConsumerGroup: example-consumer-group Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 1h ++ Deployed 1h ++ SinkProvided 1h Verification steps Trigger the Kafka instance to send a message to the topic: USD oc -n kafka run kafka-producer \ -ti --image=quay.io/strimzi/kafka:latest-kafka-2.7.0 --rm=true \ --restart=Never -- bin/kafka-console-producer.sh \ --broker-list <cluster_kafka_bootstrap>:9092 --topic my-topic Enter the message in the prompt. This command assumes that: The Kafka cluster is installed in the kafka namespace. The KafkaSource object has been configured to use the my-topic topic. Verify that the message arrived by viewing the logs: USD oc logs USD(oc get pod -o name | grep event-display) -c user-container Example output ☁\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.kafka.event source: /apis/v1/namespaces/default/kafkasources/example-kafka-source#example-topic subject: partition:46#0 id: partition:46/offset:0 time: 2021-03-10T11:21:49.4Z Extensions, traceparent: 00-161ff3815727d8755848ec01c866d1cd-7ff3916c44334678-00 Data, Hello! 4.6. Functions commands 4.6.1. Creating functions Before you can build and deploy a function, you must create it by using the Knative ( kn ) CLI. You can specify the path, runtime, template, and image registry as flags on the command line, or use the -c flag to start the interactive experience in the terminal. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. Procedure Create a function project: USD kn func create -r <repository> -l <runtime> -t <template> <path> Accepted runtime values include node , go , python , quarkus , and typescript . Accepted template values include http and events . Example command USD kn func create -l typescript -t events examplefunc Example output Project path: /home/user/demo/examplefunc Function name: examplefunc Runtime: typescript Template: events Writing events to /home/user/demo/examplefunc Alternatively, you can specify a repository that contains a custom template. Example command USD kn func create -r https://github.com/boson-project/templates/ -l node -t hello-world examplefunc Example output Project path: /home/user/demo/examplefunc Function name: examplefunc Runtime: node Template: hello-world Writing events to /home/user/demo/examplefunc 4.6.2. 
Running a function locally You can use the kn func run command to run a function locally in the current directory or in the directory specified by the --path flag. If the function that you are running has never previously been built, or if the project files have been modified since the last time it was built, the kn func run command builds the function before running it by default. Example command to run a function in the current directory USD kn func run Example command to run a function in a directory specified as a path USD kn func run --path=<directory_path> You can also force a rebuild of an existing image before running the function, even if there have been no changes to the project files, by using the --build flag: Example run command using the build flag USD kn func run --build If you set the build flag as false, this disables building of the image, and runs the function using the previously built image: Example run command using the build flag USD kn func run --build=false You can use the help command to learn more about kn func run command options: Build help command USD kn func help run 4.6.3. Building functions Before you can run a function, you must build the function project. If you are using the kn func run command, the function is built automatically. However, you can use the kn func build command to build a function without running it, which can be useful for advanced users or debugging scenarios. The kn func build command creates an OCI container image that can be run locally on your computer or on an OpenShift Container Platform cluster. This command uses the function project name and the image registry name to construct a fully qualified image name for your function. 4.6.3.1. Image container types By default, kn func build creates a container image by using Red Hat Source-to-Image (S2I) technology. Example build command using Red Hat Source-to-Image (S2I) USD kn func build You can use CNCF Cloud Native Buildpacks technology instead, by adding the --builder flag to the command and specifying the pack strategy: Example build command using CNCF Cloud Native Buildpacks USD kn func build --builder pack 4.6.3.2. Image registry types The OpenShift Container Registry is used by default as the image registry for storing function images. Example build command using OpenShift Container Registry USD kn func build Example output Building function image Function image has been built, image: registry.redhat.io/example/example-function:latest You can override using OpenShift Container Registry as the default image registry by using the --registry flag: Example build command overriding OpenShift Container Registry to use quay.io USD kn func build --registry quay.io/username Example output Building function image Function image has been built, image: quay.io/username/example-function:latest 4.6.3.3. Push flag You can add the --push flag to a kn func build command to automatically push the function image after it is successfully built: Example build command using OpenShift Container Registry USD kn func build --push 4.6.3.4. Help command You can use the help command to learn more about kn func build command options: Build help command USD kn func help build 4.6.4. Deploying functions You can deploy a function to your cluster as a Knative service by using the kn func deploy command. If the targeted function is already deployed, it is updated with a new container image that is pushed to a container image registry, and the Knative service is updated. 
Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You must have already created and initialized the function that you want to deploy. Procedure Deploy a function: USD kn func deploy [-n <namespace> -p <path> -i <image>] Example output Function deployed at: http://func.example.com If no namespace is specified, the function is deployed in the current namespace. The function is deployed from the current directory, unless a path is specified. The Knative service name is derived from the project name, and cannot be changed using this command. 4.6.5. Listing existing functions You can list existing functions by using kn func list . If you want to list functions that have been deployed as Knative services, you can also use kn service list . Procedure List existing functions: USD kn func list [-n <namespace> -p <path>] Example output NAME NAMESPACE RUNTIME URL READY example-function default node http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com True List functions deployed as Knative services: USD kn service list -n <namespace> Example output NAME URL LATEST AGE CONDITIONS READY REASON example-function http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com example-function-gzl4c 16m 3 OK / 3 True 4.6.6. Describing a function The kn func info command prints information about a deployed function, such as the function name, image, namespace, Knative service information, route information, and event subscriptions. Procedure Describe a function: USD kn func info [-f <format> -n <namespace> -p <path>] Example command USD kn func info -p function/example-function Example output Function name: example-function Function is built in image: docker.io/user/example-function:latest Function is deployed as Knative Service: example-function Function is deployed in namespace: default Routes: http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com 4.6.7. Invoking a deployed function with a test event You can use the kn func invoke CLI command to send a test request to invoke a function either locally or on your OpenShift Container Platform cluster. This command can be used to test that a function is working and able to receive events correctly. Example command USD kn func invoke The kn func invoke command executes on the local directory by default, and assumes that this directory is a function project. 4.6.7.1. kn func invoke optional parameters You can specify optional parameters for the request by using the following kn func invoke CLI command flags. Flags Description -t , --target Specifies the target instance of the invoked function, for example, local or remote or https://staging.example.com/ . The default target is local . -f , --format Specifies the format of the message, for example, cloudevent or http . --id Specifies a unique string identifier for the request. -n , --namespace Specifies the namespace on the cluster. --source Specifies sender name for the request. This corresponds to the CloudEvent source attribute. --type Specifies the type of request, for example, boson.fn . This corresponds to the CloudEvent type attribute. --data Specifies content for the request. For CloudEvent requests, this is the CloudEvent data attribute. 
--file Specifies path to a local file containing data to be sent. --content-type Specifies the MIME content type for the request. -p , --path Specifies path to the project directory. -c , --confirm Enables prompting to interactively confirm all options. -v , --verbose Enables printing verbose output. -h , --help Prints information on usage of kn func invoke . 4.6.7.1.1. Main parameters The following parameters define the main properties of the kn func invoke command: Event target ( -t , --target ) The target instance of the invoked function. Accepts the local value for a locally deployed function, the remote value for a remotely deployed function, or a URL for a function deployed to an arbitrary endpoint. If a target is not specified, it defaults to local . Event message format ( -f , --format ) The message format for the event, such as http or cloudevent . This defaults to the format of the template that was used when creating the function. Event type ( --type ) The type of event that is sent. You can find information about the type parameter that is set in the documentation for each event producer. For example, the API server source might set the type parameter of produced events as dev.knative.apiserver.resource.update . Event source ( --source ) The unique event source that produced the event. This might be a URI for the event source, for example https://10.96.0.1/ , or the name of the event source. Event ID ( --id ) A random, unique ID that is created by the event producer. Event data ( --data ) Allows you to specify a data value for the event sent by the kn func invoke command. For example, you can specify a --data value such as "Hello World" so that the event contains this data string. By default, no data is included in the events created by kn func invoke . Note Functions that have been deployed to a cluster can respond to events from an existing event source that provides values for properties such as source and type . These events often have a data value in JSON format, which captures the domain specific context of the event. By using the CLI flags noted in this document, developers can simulate those events for local testing. You can also send event data using the --file flag to provide a local file containing data for the event. In this case, specify the content type using --content-type . Data content type ( --content-type ) If you are using the --data flag to add data for events, you can use the --content-type flag to specify what type of data is carried by the event. In the example, the data is plain text, so you might specify kn func invoke --data "Hello world!" --content-type "text/plain" . 4.6.7.1.2. Example commands This is the general invocation of the kn func invoke command: USD kn func invoke --type <event_type> --source <event_source> --data <event_data> --content-type <content_type> --id <event_ID> --format <format> --namespace <namespace> For example, to send a "Hello world!" event, you can run: USD kn func invoke --type ping --source example-ping --data "Hello world!" --content-type "text/plain" --id example-ID --format http --namespace my-ns 4.6.7.1.2.1. Specifying the file with data To specify the file on disk that contains the event data, use the --file and --content-type flags: USD kn func invoke --file <path> --content-type <content-type> For example, to send JSON data stored in the test.json file, use this command: USD kn func invoke --file ./test.json --content-type application/json 4.6.7.1.2.2. 
Specifying the function project You can specify a path to the function project by using the --path flag: USD kn func invoke --path <path_to_function> For example, to use the function project located in the ./example/example-function directory, use this command: USD kn func invoke --path ./example/example-function 4.6.7.1.2.3. Specifying where the target function is deployed By default, kn func invoke targets the local deployment of the function: USD kn func invoke To use a different deployment, use the --target flag: USD kn func invoke --target <target> For example, to use the function deployed on the cluster, use the --target remote flag: USD kn func invoke --target remote To use the function deployed at an arbitrary URL, use the --target <URL> flag: USD kn func invoke --target "https://my-event-broker.example.com" You can explicitly target the local deployment. In this case, if the function is not running locally, the command fails: USD kn func invoke --target local 4.6.8. Deleting a function You can delete a function from your cluster by using the kn func delete command. Procedure Delete a function: USD kn func delete [<function_name> -n <namespace> -p <path>] If the name or path of the function to delete is not specified, the current directory is searched for a func.yaml file that is used to determine the function to delete. If the namespace is not specified, it defaults to the namespace value in the func.yaml file.
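For example, assuming a function named example-function was deployed to the default namespace (both names are placeholders for illustration only), you could delete it explicitly by name and namespace: Example command USD kn func delete example-function -n default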
[ "kn: No such file or directory", "tar -xf <file>", "echo USDPATH", "oc get ConsoleCLIDownload", "NAME DISPLAY NAME AGE kn kn - OpenShift Serverless Command Line Interface (CLI) 2022-09-20T08:41:18Z oc-cli-downloads oc - OpenShift Command Line Interface (CLI) 2022-09-20T08:00:20Z", "oc get route -n openshift-serverless", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD kn kn-openshift-serverless.apps.example.com knative-openshift-metrics-3 http-cli edge/Redirect None", "subscription-manager register", "subscription-manager refresh", "subscription-manager attach --pool=<pool_id> 1", "subscription-manager repos --enable=\"openshift-serverless-1-for-rhel-8-x86_64-rpms\"", "subscription-manager repos --enable=\"openshift-serverless-1-for-rhel-8-s390x-rpms\"", "subscription-manager repos --enable=\"openshift-serverless-1-for-rhel-8-ppc64le-rpms\"", "yum install openshift-serverless-clients", "kn: No such file or directory", "tar -xf <filename>", "echo USDPATH", "echo USDPATH", "C:\\> path", "plugins: path-lookup: true 1 directory: ~/.config/kn/plugins 2 eventing: sink-mappings: 3 - prefix: svc 4 group: core 5 version: v1 6 resource: services 7", "kn event build --field <field-name>=<value> --type <type-name> --id <id> --output <format>", "kn event build -o yaml", "data: {} datacontenttype: application/json id: 81a402a2-9c29-4c27-b8ed-246a253c9e58 source: kn-event/v0.4.0 specversion: \"1.0\" time: \"2021-10-15T10:42:57.713226203Z\" type: dev.knative.cli.plugin.event.generic", "kn event build --field operation.type=local-wire-transfer --field operation.amount=2345.40 --field operation.from=87656231 --field operation.to=2344121 --field automated=true --field signature='FGzCPLvYWdEgsdpb3qXkaVp7Da0=' --type org.example.bank.bar --id USD(head -c 10 < /dev/urandom | base64 -w 0) --output json", "{ \"specversion\": \"1.0\", \"id\": \"RjtL8UH66X+UJg==\", \"source\": \"kn-event/v0.4.0\", \"type\": \"org.example.bank.bar\", \"datacontenttype\": \"application/json\", \"time\": \"2021-10-15T10:43:23.113187943Z\", \"data\": { \"automated\": true, \"operation\": { \"amount\": \"2345.40\", \"from\": 87656231, \"to\": 2344121, \"type\": \"local-wire-transfer\" }, \"signature\": \"FGzCPLvYWdEgsdpb3qXkaVp7Da0=\" } }", "kn event send --field <field-name>=<value> --type <type-name> --id <id> --to-url <url> --to <cluster-resource> --namespace <namespace>", "kn event send --field player.id=6354aa60-ddb1-452e-8c13-24893667de20 --field player.game=2345 --field points=456 --type org.example.gaming.foo --to-url http://ce-api.foo.example.com/", "kn event send --type org.example.kn.ping --id USD(uuidgen) --field event.type=test --field event.data=98765 --to Service:serving.knative.dev/v1:event-display", "kn service create <service-name> --image <image> --tag <tag-value>", "kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest", "Creating service 'event-display' in namespace 'default': 0.271s The Route is still working to reflect the latest desired specification. 0.580s Configuration \"event-display\" is waiting for a Revision to become ready. 3.857s 3.861s Ingress has not yet been reconciled. 4.270s Ready to serve. 
Service 'event-display' created with latest revision 'event-display-bxshg-1' and URL: http://event-display-default.apps-crc.testing", "kn service update <service_name> --env <key>=<value>", "kn service update <service_name> --port 80", "kn service update <service_name> --request cpu=500m --limit memory=1024Mi --limit cpu=1000m", "kn service update <service_name> --tag <revision_name>=latest", "kn service update <service_name> --untag testing --tag @latest=staging", "kn service update <service_name> --tag <revision_name>=test --traffic test=10,@latest=90", "kn service apply <service_name> --image <image>", "kn service apply <service_name> --image <image> --env <key>=<value>", "kn service apply <service_name> -f <filename>", "kn service describe --verbose <service_name>", "Name: hello Namespace: default Age: 2m URL: http://hello-default.apps.ocp.example.com Revisions: 100% @latest (hello-00001) [1] (2m) Image: docker.io/openshift/hello-openshift (pinned to aaea76) Conditions: OK TYPE AGE REASON ++ Ready 1m ++ ConfigurationsReady 1m ++ RoutesReady 1m", "Name: hello Namespace: default Annotations: serving.knative.dev/creator=system:admin serving.knative.dev/lastModifier=system:admin Age: 3m URL: http://hello-default.apps.ocp.example.com Cluster: http://hello.default.svc.cluster.local Revisions: 100% @latest (hello-00001) [1] (3m) Image: docker.io/openshift/hello-openshift (pinned to aaea76) Env: RESPONSE=Hello Serverless! Conditions: OK TYPE AGE REASON ++ Ready 3m ++ ConfigurationsReady 3m ++ RoutesReady 3m", "kn service describe <service_name> -o yaml", "kn service describe <service_name> -o json", "kn service describe <service_name> -o url", "kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --target ./ --namespace test", "Service 'event-display' created in namespace 'test'.", "tree ./", "./ └── test └── ksvc └── event-display.yaml 2 directories, 1 file", "cat test/ksvc/event-display.yaml", "apiVersion: serving.knative.dev/v1 kind: Service metadata: creationTimestamp: null name: event-display namespace: test spec: template: metadata: annotations: client.knative.dev/user-image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest creationTimestamp: null spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest name: \"\" resources: {} status: {}", "kn service describe event-display --target ./ --namespace test", "Name: event-display Namespace: test Age: URL: Revisions: Conditions: OK TYPE AGE REASON", "kn service create -f test/ksvc/event-display.yaml", "Creating service 'event-display' in namespace 'test': 0.058s The Route is still working to reflect the latest desired specification. 0.098s 0.168s Configuration \"event-display\" is waiting for a Revision to become ready. 23.377s 23.419s Ingress has not yet been reconciled. 23.534s Waiting for load balancer to be ready 23.723s Ready to serve. 
Service 'event-display' created to latest revision 'event-display-00001' is available at URL: http://event-display-test.apps.example.com", "kn container add <container_name> --image <image_uri>", "kn container add sidecar --image docker.io/example/sidecar", "containers: - image: docker.io/example/sidecar name: sidecar resources: {}", "kn container add <first_container_name> --image <image_uri> | kn container add <second_container_name> --image <image_uri> | kn service create <service_name> --image <image_uri> --extra-containers -", "kn container add sidecar --image docker.io/example/sidecar:first | kn container add second --image docker.io/example/sidecar:second | kn service create my-service --image docker.io/example/my-app:latest --extra-containers -", "kn service create <service_name> --image <image_uri> --extra-containers <filename>", "kn service create my-service --image docker.io/example/my-app:latest --extra-containers my-extra-containers.yaml", "kn domain create <domain_mapping_name> --ref <target_name>", "kn domain create example.com --ref example-service", "kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace>", "kn domain create example.com --ref ksvc:example-service:example-namespace", "kn domain create <domain_mapping_name> --ref <kroute:route_name>", "kn domain create example.com --ref kroute:example-route", "kn domain list -n <domain_mapping_namespace>", "kn domain describe <domain_mapping_name>", "kn domain update --ref <target>", "kn domain delete <domain_mapping_name>", "kn source list-types", "TYPE NAME DESCRIPTION ApiServerSource apiserversources.sources.knative.dev Watch and send Kubernetes API events to a sink PingSource pingsources.sources.knative.dev Periodically send ping events to a sink SinkBinding sinkbindings.sources.knative.dev Binding for connecting a PodSpecable to a sink", "kn source list-types -o yaml", "kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"", "kn source container create <container_source_name> --image <image_uri> --sink <sink>", "kn source container delete <container_source_name>", "kn source container describe <container_source_name>", "kn source container list", "kn source container list -o yaml", "kn source container update <container_source_name> --image <image_uri>", "apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - \"\" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4", "oc apply -f <filename>", "kn source apiserver create <event_source_name> --sink broker:<broker_name> --resource \"event:v1\" --service-account <service_account_name> --mode Resource", "kn service create <service_name> --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest", "kn trigger create <trigger_name> --sink ksvc:<service_name>", "oc create deployment hello-node --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest", "kn source apiserver describe <source_name>", "Name: mysource Namespace: default Annotations: 
sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 3m ServiceAccountName: events-sa Mode: Resource Sink: Name: default Namespace: default Kind: Broker (eventing.knative.dev/v1) Resources: Kind: event (v1) Controller: false Conditions: OK TYPE AGE REASON ++ Ready 3m ++ Deployed 3m ++ SinkProvided 3m ++ SufficientPermissions 3m ++ EventTypesProvided 3m", "oc get pods", "oc logs USD(oc get pod -o name | grep event-display) -c user-container", "☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json Data, { \"apiVersion\": \"v1\", \"involvedObject\": { \"apiVersion\": \"v1\", \"fieldPath\": \"spec.containers{hello-node}\", \"kind\": \"Pod\", \"name\": \"hello-node\", \"namespace\": \"default\", .. }, \"kind\": \"Event\", \"message\": \"Started container\", \"metadata\": { \"name\": \"hello-node.159d7608e3a3572c\", \"namespace\": \"default\", . }, \"reason\": \"Started\", }", "kn trigger delete <trigger_name>", "kn source apiserver delete <source_name>", "oc delete -f authentication.yaml", "kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest", "kn source ping create test-ping-source --schedule \"*/2 * * * *\" --data '{\"message\": \"Hello world!\"}' --sink ksvc:event-display", "kn source ping describe test-ping-source", "Name: test-ping-source Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 15s Schedule: */2 * * * * Data: {\"message\": \"Hello world!\"} Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 8s ++ Deployed 8s ++ SinkProvided 15s ++ ValidSchedule 15s ++ EventTypeProvided 15s ++ ResourcesCorrect 15s", "watch oc get pods", "oc logs USD(oc get pod -o name | grep event-display) -c user-container", "☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 99e4f4f6-08ff-4bff-acf1-47f61ded68c9 time: 2020-04-07T16:16:00.000601161Z datacontenttype: application/json Data, { \"message\": \"Hello world!\" }", "kn delete pingsources.sources.knative.dev <ping_source_name>", "kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display", "kn source kafka create <kafka_source_name> --servers <cluster_kafka_bootstrap>.kafka.svc:9092 --topics <topic_name> --consumergroup my-consumer-group --sink event-display", "kn source kafka describe <kafka_source_name>", "Name: example-kafka-source Namespace: kafka Age: 1h BootstrapServers: example-cluster-kafka-bootstrap.kafka.svc:9092 Topics: example-topic ConsumerGroup: example-consumer-group Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 1h ++ Deployed 1h ++ SinkProvided 1h", "oc -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:latest-kafka-2.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list <cluster_kafka_bootstrap>:9092 --topic my-topic", "oc logs USD(oc get pod -o name | grep event-display) -c user-container", "☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.kafka.event source: /apis/v1/namespaces/default/kafkasources/example-kafka-source#example-topic subject: partition:46#0 id: 
partition:46/offset:0 time: 2021-03-10T11:21:49.4Z Extensions, traceparent: 00-161ff3815727d8755848ec01c866d1cd-7ff3916c44334678-00 Data, Hello!", "kn func create -r <repository> -l <runtime> -t <template> <path>", "kn func create -l typescript -t events examplefunc", "Project path: /home/user/demo/examplefunc Function name: examplefunc Runtime: typescript Template: events Writing events to /home/user/demo/examplefunc", "kn func create -r https://github.com/boson-project/templates/ -l node -t hello-world examplefunc", "Project path: /home/user/demo/examplefunc Function name: examplefunc Runtime: node Template: hello-world Writing events to /home/user/demo/examplefunc", "kn func run", "kn func run --path=<directory_path>", "kn func run --build", "kn func run --build=false", "kn func help run", "kn func build", "kn func build --builder pack", "kn func build", "Building function image Function image has been built, image: registry.redhat.io/example/example-function:latest", "kn func build --registry quay.io/username", "Building function image Function image has been built, image: quay.io/username/example-function:latest", "kn func build --push", "kn func help build", "kn func deploy [-n <namespace> -p <path> -i <image>]", "Function deployed at: http://func.example.com", "kn func list [-n <namespace> -p <path>]", "NAME NAMESPACE RUNTIME URL READY example-function default node http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com True", "kn service list -n <namespace>", "NAME URL LATEST AGE CONDITIONS READY REASON example-function http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com example-function-gzl4c 16m 3 OK / 3 True", "kn func info [-f <format> -n <namespace> -p <path>]", "kn func info -p function/example-function", "Function name: example-function Function is built in image: docker.io/user/example-function:latest Function is deployed as Knative Service: example-function Function is deployed in namespace: default Routes: http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com", "kn func invoke", "kn func invoke --type <event_type> --source <event_source> --data <event_data> --content-type <content_type> --id <event_ID> --format <format> --namespace <namespace>", "kn func invoke --type ping --source example-ping --data \"Hello world!\" --content-type \"text/plain\" --id example-ID --format http --namespace my-ns", "kn func invoke --file <path> --content-type <content-type>", "kn func invoke --file ./test.json --content-type application/json", "kn func invoke --path <path_to_function>", "kn func invoke --path ./example/example-function", "kn func invoke", "kn func invoke --target <target>", "kn func invoke --target remote", "kn func invoke --target \"https://my-event-broker.example.com\"", "kn func invoke --target local", "kn func delete [<function_name> -n <namespace> -p <path>]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/serverless/knative-cli
Chapter 5. BuildRequest [build.openshift.io/v1]
Chapter 5. BuildRequest [build.openshift.io/v1] Description BuildRequest is the resource used to pass parameters to build generator Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources binary object BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. dockerStrategyOptions object DockerStrategyOptions contains extra strategy options for container image builds env array (EnvVar) env contains additional environment variables you want to pass into a builder container. from ObjectReference from is the reference to the ImageStreamTag that triggered the build. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds lastVersion integer lastVersion (optional) is the LastVersion of the BuildConfig that was used to generate the build. If the BuildConfig in the generator doesn't match, a build will not be generated. metadata ObjectMeta revision object SourceRevision is the revision or commit information from the source for the build sourceStrategyOptions object SourceStrategyOptions contains extra strategy options for Source builds triggeredBy array triggeredBy describes which triggers started the most recent update to the build configuration and contains information about those triggers. triggeredBy[] object BuildTriggerCause holds information about a triggered build. It is used for displaying build trigger data for each build and build configuration in oc describe. It is also used to describe which triggers led to the most recent update in the build configuration. triggeredByImage ObjectReference triggeredByImage is the Image that triggered this build. 5.1.1. .binary Description BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. Type object Property Type Description asFile string asFile indicates that the provided binary input should be considered a single file within the build input. For example, specifying "webapp.war" would place the provided binary as /webapp.war for the builder. If left empty, the Docker and Source build strategies assume this file is a zip, tar, or tar.gz file and extract it as the source. The custom strategy receives this binary as standard input. This filename may not contain slashes or be '..' or '.'. 5.1.2. .dockerStrategyOptions Description DockerStrategyOptions contains extra strategy options for container image builds Type object Property Type Description buildArgs array (EnvVar) Args contains any build arguments that are to be passed to Docker. See https://docs.docker.com/engine/reference/builder/#/arg for more details noCache boolean noCache overrides the docker-strategy noCache option in the build config 5.1.3. 
.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 5.1.4. .revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 5.1.5. .revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.6. .revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.7. .sourceStrategyOptions Description SourceStrategyOptions contains extra strategy options for Source builds Type object Property Type Description incremental boolean incremental overrides the source-strategy incremental option in the build config 5.1.8. .triggeredBy Description triggeredBy describes which triggers started the most recent update to the build configuration and contains information about those triggers. Type array 5.1.9. .triggeredBy[] Description BuildTriggerCause holds information about a triggered build. It is used for displaying build trigger data for each build and build configuration in oc describe. It is also used to describe which triggers led to the most recent update in the build configuration. Type object Property Type Description bitbucketWebHook object BitbucketWebHookCause has information about a Bitbucket webhook that triggered a build. genericWebHook object GenericWebHookCause holds information about a generic WebHook that triggered a build. githubWebHook object GitHubWebHookCause has information about a GitHub webhook that triggered a build. gitlabWebHook object GitLabWebHookCause has information about a GitLab webhook that triggered a build. imageChangeBuild object ImageChangeCause contains information about the image that triggered a build message string message is used to store a human readable message for why the build was triggered. E.g.: "Manually triggered by user", "Configuration change",etc. 5.1.10. .triggeredBy[].bitbucketWebHook Description BitbucketWebHookCause has information about a Bitbucket webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string Secret is the obfuscated webhook secret that triggered a build. 5.1.11. .triggeredBy[].bitbucketWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 5.1.12. 
.triggeredBy[].bitbucketWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 5.1.13. .triggeredBy[].bitbucketWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.14. .triggeredBy[].bitbucketWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.15. .triggeredBy[].genericWebHook Description GenericWebHookCause holds information about a generic WebHook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string secret is the obfuscated webhook secret that triggered a build. 5.1.16. .triggeredBy[].genericWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 5.1.17. .triggeredBy[].genericWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 5.1.18. .triggeredBy[].genericWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.19. .triggeredBy[].genericWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.20. .triggeredBy[].githubWebHook Description GitHubWebHookCause has information about a GitHub webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string secret is the obfuscated webhook secret that triggered a build. 5.1.21. .triggeredBy[].githubWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 5.1.22. 
.triggeredBy[].githubWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 5.1.23. .triggeredBy[].githubWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.24. .triggeredBy[].githubWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.25. .triggeredBy[].gitlabWebHook Description GitLabWebHookCause has information about a GitLab webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string Secret is the obfuscated webhook secret that triggered a build. 5.1.26. .triggeredBy[].gitlabWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 5.1.27. .triggeredBy[].gitlabWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 5.1.28. .triggeredBy[].gitlabWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.29. .triggeredBy[].gitlabWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.30. .triggeredBy[].imageChangeBuild Description ImageChangeCause contains information about the image that triggered a build Type object Property Type Description fromRef ObjectReference fromRef contains detailed information about an image that triggered a build. imageID string imageID is the ID of the image that triggered a new build. 5.2. API endpoints The following API endpoints are available: /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/clone POST : create clone of a Build /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/instantiate POST : create instantiate of a BuildConfig 5.2.1. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/clone Table 5.1. 
Global path parameters Parameter Type Description name string name of the BuildRequest namespace string object name and auth scope, such as for teams and projects Table 5.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create clone of a Build Table 5.3. Body parameters Parameter Type Description body BuildRequest schema Table 5.4. HTTP responses HTTP code Reponse body 200 - OK BuildRequest schema 201 - Created BuildRequest schema 202 - Accepted BuildRequest schema 401 - Unauthorized Empty 5.2.2. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/instantiate Table 5.5. Global path parameters Parameter Type Description name string name of the BuildRequest namespace string object name and auth scope, such as for teams and projects Table 5.6. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create instantiate of a BuildConfig Table 5.7. Body parameters Parameter Type Description body BuildRequest schema Table 5.8. HTTP responses HTTP code Response body 200 - OK Build schema 201 - Created Build schema 202 - Accepted Build schema 401 - Unauthorized Empty
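For illustration only, the following is a minimal sketch of calling the instantiate endpoint directly with curl while logged in with the oc CLI; the my-namespace and my-buildconfig names are placeholders, and the BuildRequest body must carry the same name as the BuildConfig being instantiated:
$ curl -k -X POST -H "Authorization: Bearer $(oc whoami -t)" -H "Content-Type: application/json" -d '{"apiVersion":"build.openshift.io/v1","kind":"BuildRequest","metadata":{"name":"my-buildconfig","namespace":"my-namespace"}}' "$(oc whoami --show-server)/apis/build.openshift.io/v1/namespaces/my-namespace/buildconfigs/my-buildconfig/instantiate"
A successful request returns the created Build object; in day-to-day use, oc start-build my-buildconfig issues the same request for you.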
Chapter 4. Installing managed clusters with RHACM and SiteConfig resources
Chapter 4. Installing managed clusters with RHACM and SiteConfig resources You can provision OpenShift Container Platform clusters at scale with Red Hat Advanced Cluster Management (RHACM) using the assisted service and the GitOps plugin policy generator with core-reduction technology enabled. The GitOps Zero Touch Provisioning (ZTP) pipeline performs the cluster installations. GitOps ZTP can be used in a disconnected environment. Important Using PolicyGenTemplate CRs to manage and deploy policies to managed clusters will be deprecated in an upcoming OpenShift Container Platform release. Equivalent and improved functionality is available using Red Hat Advanced Cluster Management (RHACM) and PolicyGenerator CRs. For more information about PolicyGenerator resources, see the RHACM Policy Generator documentation. Additional resources Configuring managed cluster policies by using PolicyGenerator resources Comparing RHACM PolicyGenerator and PolicyGenTemplate resource patching 4.1. GitOps ZTP and Topology Aware Lifecycle Manager GitOps Zero Touch Provisioning (ZTP) generates installation and configuration CRs from manifests stored in Git. These artifacts are applied to a centralized hub cluster where Red Hat Advanced Cluster Management (RHACM), the assisted service, and the Topology Aware Lifecycle Manager (TALM) use the CRs to install and configure the managed cluster. The configuration phase of the GitOps ZTP pipeline uses the TALM to orchestrate the application of the configuration CRs to the cluster. There are several key integration points between GitOps ZTP and the TALM. Inform policies By default, GitOps ZTP creates all policies with a remediation action of inform . These policies cause RHACM to report on the compliance status of clusters relevant to the policies but do not apply the desired configuration. During the GitOps ZTP process, after OpenShift installation, the TALM steps through the created inform policies and enforces them on the target managed clusters. This applies the configuration to the managed cluster. Outside of the GitOps ZTP phase of the cluster lifecycle, this approach allows you to change policies without the risk of immediately rolling those changes out to affected managed clusters. You can control the timing and the set of remediated clusters by using TALM. Automatic creation of ClusterGroupUpgrade CRs To automate the initial configuration of newly deployed clusters, TALM monitors the state of all ManagedCluster CRs on the hub cluster. Any ManagedCluster CR that does not have a ztp-done label applied, including newly created ManagedCluster CRs, causes the TALM to automatically create a ClusterGroupUpgrade CR with the following characteristics: The ClusterGroupUpgrade CR is created and enabled in the ztp-install namespace. The ClusterGroupUpgrade CR has the same name as the ManagedCluster CR. The cluster selector includes only the cluster associated with that ManagedCluster CR. The set of managed policies includes all policies that RHACM has bound to the cluster at the time the ClusterGroupUpgrade is created. Pre-caching is disabled. The timeout is set to 4 hours (240 minutes). The automatic creation of an enabled ClusterGroupUpgrade ensures that initial zero-touch deployment of clusters proceeds without the need for user intervention. Additionally, the automatic creation of a ClusterGroupUpgrade CR for any ManagedCluster without the ztp-done label allows a failed GitOps ZTP installation to be restarted by simply deleting the ClusterGroupUpgrade CR for the cluster, as shown in the example that follows.
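For example, the following is a minimal sketch of restarting a failed GitOps ZTP installation by deleting the automatically created ClusterGroupUpgrade CR. It assumes the default ztp-install namespace described above and a managed cluster named example-sno, which is a placeholder:
$ oc get clustergroupupgrade -n ztp-install example-sno
$ oc delete clustergroupupgrade -n ztp-install example-sno
Because the ManagedCluster CR still does not have the ztp-done label, the TALM automatically creates a new enabled ClusterGroupUpgrade CR and reapplies the bound policies.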
Waves Each policy generated from a PolicyGenerator or PolicyGentemplate CR includes a ztp-deploy-wave annotation. This annotation is based on the same annotation from each CR which is included in that policy. The wave annotation is used to order the policies in the auto-generated ClusterGroupUpgrade CR. The wave annotation is not used other than for the auto-generated ClusterGroupUpgrade CR. Note All CRs in the same policy must have the same setting for the ztp-deploy-wave annotation. The default value of this annotation for each CR can be overridden in the PolicyGenerator or PolicyGentemplate . The wave annotation in the source CR is used for determining and setting the policy wave annotation. This annotation is removed from each built CR which is included in the generated policy at runtime. The TALM applies the configuration policies in the order specified by the wave annotations. The TALM waits for each policy to be compliant before moving to the policy. It is important to ensure that the wave annotation for each CR takes into account any prerequisites for those CRs to be applied to the cluster. For example, an Operator must be installed before or concurrently with the configuration for the Operator. Similarly, the CatalogSource for an Operator must be installed in a wave before or concurrently with the Operator Subscription. The default wave value for each CR takes these prerequisites into account. Multiple CRs and policies can share the same wave number. Having fewer policies can result in faster deployments and lower CPU usage. It is a best practice to group many CRs into relatively few waves. To check the default wave value in each source CR, run the following command against the out/source-crs directory that is extracted from the ztp-site-generate container image: USD grep -r "ztp-deploy-wave" out/source-crs Phase labels The ClusterGroupUpgrade CR is automatically created and includes directives to annotate the ManagedCluster CR with labels at the start and end of the GitOps ZTP process. When GitOps ZTP configuration postinstallation commences, the ManagedCluster has the ztp-running label applied. When all policies are remediated to the cluster and are fully compliant, these directives cause the TALM to remove the ztp-running label and apply the ztp-done label. For deployments that make use of the informDuValidator policy, the ztp-done label is applied when the cluster is fully ready for deployment of applications. This includes all reconciliation and resulting effects of the GitOps ZTP applied configuration CRs. The ztp-done label affects automatic ClusterGroupUpgrade CR creation by TALM. Do not manipulate this label after the initial GitOps ZTP installation of the cluster. Linked CRs The automatically created ClusterGroupUpgrade CR has the owner reference set as the ManagedCluster from which it was derived. This reference ensures that deleting the ManagedCluster CR causes the instance of the ClusterGroupUpgrade to be deleted along with any supporting resources. 4.2. Overview of deploying managed clusters with GitOps ZTP Red Hat Advanced Cluster Management (RHACM) uses GitOps Zero Touch Provisioning (ZTP) to deploy single-node OpenShift Container Platform clusters, three-node clusters, and standard clusters. You manage site configuration data as OpenShift Container Platform custom resources (CRs) in a Git repository. GitOps ZTP uses a declarative GitOps approach for a develop once, deploy anywhere model to deploy the managed clusters. 
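The layout of the Git repository is up to you; the following sketch is only an illustration of one possible structure, using the example file and directory names that appear later in this chapter:
site-configs/
├── siteconfig/
│   ├── kustomization.yaml
│   ├── example-sno.yaml
│   └── sno-extra-manifest/
└── policygentemplates/
    ├── kustomization.yaml
    └── common-ranGen.yaml
Keeping the installation CRs (SiteConfig) and the configuration CRs (PolicyGenerator or PolicyGentemplate) in separate directories, each with its own kustomization.yaml, is one common way to let separate Argo CD applications track them.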
The deployment of the clusters includes: Installing the host operating system (RHCOS) on a blank server Deploying OpenShift Container Platform Creating cluster policies and site subscriptions Making the necessary network configurations to the server operating system Deploying profile Operators and performing any needed software-related configuration, such as performance profile, PTP, and SR-IOV Overview of the managed site installation process After you apply the managed site custom resources (CRs) on the hub cluster, the following actions happen automatically: A Discovery image ISO file is generated and booted on the target host. When the ISO file successfully boots on the target host it reports the host hardware information to RHACM. After all hosts are discovered, OpenShift Container Platform is installed. When OpenShift Container Platform finishes installing, the hub installs the klusterlet service on the target cluster. The requested add-on services are installed on the target cluster. The Discovery image ISO process is complete when the Agent CR for the managed cluster is created on the hub cluster. Important The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended single-node OpenShift cluster configuration for vDU application workloads . 4.3. Creating the managed bare-metal host secrets Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the GitOps Zero Touch Provisioning (ZTP) pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry. Note The secrets are referenced from the SiteConfig CR by name. The namespace must match the SiteConfig namespace. Procedure Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators: Save the following YAML as the file example-sno-secret.yaml : apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson 1 Must match the namespace configured in the related SiteConfig CR 2 Base64-encoded values for password and username 3 Must match the namespace configured in the related SiteConfig CR 4 Base64-encoded pull secret Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster. 4.4. Configuring Discovery ISO kernel arguments for installations using GitOps ZTP The GitOps Zero Touch Provisioning (ZTP) workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation. Note In OpenShift Container Platform 4.17, you can only add kernel arguments. You can not replace or delete kernel arguments. 
Prerequisites You have installed the OpenShift CLI (oc). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Create the InfraEnv CR and edit the spec.kernelArguments specification to configure kernel arguments. Save the following YAML in an InfraEnv-example.yaml file: Note The InfraEnv CR in this example uses template syntax such as {{ .Cluster.ClusterName }} that is populated based on values in the SiteConfig CR. The SiteConfig CR automatically populates values for these templates during deployment. Do not edit the templates manually. apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: annotations: argocd.argoproj.io/sync-wave: "1" name: "{{ .Cluster.ClusterName }}" namespace: "{{ .Cluster.ClusterName }}" spec: clusterRef: name: "{{ .Cluster.ClusterName }}" namespace: "{{ .Cluster.ClusterName }}" kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 sshAuthorizedKey: "{{ .Site.SshPublicKey }}" proxy: "{{ .Cluster.ProxySettings }}" pullSecretRef: name: "{{ .Site.PullSecretRef.Name }}" ignitionConfigOverride: "{{ .Cluster.IgnitionConfigOverride }}" nmStateConfigLabelSelector: matchLabels: nmstate-label: "{{ .Cluster.ClusterName }}" additionalNTPSources: "{{ .Cluster.AdditionalNTPSources }}" 1 Specify the append operation to add a kernel argument. 2 Specify the kernel argument you want to configure. This example configures the audit kernel argument and the trace kernel argument. Commit the InfraEnv-example.yaml CR to the same location in your Git repository that has the SiteConfig CR and push your changes. The following example shows a sample Git repository structure: ~/example-ztp/install └── site-install ├── siteconfig-example.yaml ├── InfraEnv-example.yaml ... Edit the spec.clusters.crTemplates specification in the SiteConfig CR to reference the InfraEnv-example.yaml CR in your Git repository: clusters: crTemplates: InfraEnv: "InfraEnv-example.yaml" When you are ready to deploy your cluster by committing and pushing the SiteConfig CR, the build pipeline uses the custom InfraEnv-example CR in your Git repository to configure the infrastructure environment, including the custom kernel arguments. Verification To verify that the kernel arguments are applied, after the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH to the target host before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file. Begin an SSH session with the target host: USD ssh -i /path/to/privatekey core@<host_name> View the system's kernel arguments by using the following command: USD cat /proc/cmdline 4.5. Deploying a managed cluster with SiteConfig and GitOps ZTP Use the following procedure to create a SiteConfig custom resource (CR) and related files and initiate the GitOps Zero Touch Provisioning (ZTP) cluster deployment. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You configured the hub cluster for generating the required installation and policy CRs. You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and you must configure it as a source repository for the ArgoCD application. See "Preparing the GitOps ZTP site configuration repository" for more information. 
Note When you create the source repository, ensure that you patch the ArgoCD application with the argocd/deployment/argocd-openshift-gitops-patch.json patch-file that you extract from the ztp-site-generate container. See "Configuring the hub cluster with ArgoCD". To be ready for provisioning managed clusters, you require the following for each bare-metal host: Network connectivity Your network requires DNS. Managed cluster hosts should be reachable from the hub cluster. Ensure that Layer 3 connectivity exists between the hub cluster and the managed cluster host. Baseboard Management Controller (BMC) details GitOps ZTP uses BMC username and password details to connect to the BMC during cluster installation. The GitOps ZTP plugin manages the ManagedCluster CRs on the hub cluster based on the SiteConfig CR in your site Git repo. You create individual BMCSecret CRs for each host manually. Procedure Create the required managed cluster secrets on the hub cluster. These resources must be in a namespace with a name matching the cluster name. For example, in out/argocd/example/siteconfig/example-sno.yaml , the cluster name and namespace is example-sno . Export the cluster namespace by running the following command: USD export CLUSTERNS=example-sno Create the namespace: USD oc create namespace USDCLUSTERNS Create pull secret and BMC Secret CRs for the managed cluster. The pull secret must contain all the credentials necessary for installing OpenShift Container Platform and all required Operators. See "Creating the managed bare-metal host secrets" for more information. Note The secrets are referenced from the SiteConfig custom resource (CR) by name. The namespace must match the SiteConfig namespace. Create a SiteConfig CR for your cluster in your local clone of the Git repository: Choose the appropriate example for your CR from the out/argocd/example/siteconfig/ folder. The folder includes example files for single node, three-node, and standard clusters: example-sno.yaml example-3node.yaml example-standard.yaml Change the cluster and host details in the example file to match the type of cluster you want. For example: Example single-node OpenShift SiteConfig CR # example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "example-sno" namespace: "example-sno" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.16" sshPublicKey: "ssh-rsa AAAA..." clusters: - clusterName: "example-sno" networkType: "OVNKubernetes" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all the optional set of components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier # - Ingress is needed for 4.16 and later installConfigOverrides: | { "capabilities": { "baselineCapabilitySet": "None", "additionalEnabledCapabilities": [ "NodeTuning", "OperatorLifecycleManager", "Ingress" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. 
# extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: "latest" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""' group-du-sno: "" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites: "example-sno" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally, this can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" nodes: - hostName: "example-node1.example.com" role: "master" # Optionally, this can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: "example-hw.profile" bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "example-node1-bmh-secret" bootMACAddress: "AA:BB:CC:DD:EE:11" # Use UEFISecureBoot to enable secure boot. bootMode: "UEFISecureBoot" rootDeviceHints: deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62", "partitions": [ { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 } ], "wipeTable": false } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var-lib-containers", "format": "xfs", "mountOptions": [ "defaults", "prjquota" ], "path": "/var/lib/containers", "wipeFilesystem": true } ] }, "systemd": { "units": [ { "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target", "enabled": true, "name": "var-lib-containers.mount" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254 Note For more information about BMC addressing, see the "Additional resources" section.
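If you prefer to create the two secrets referenced by this example from the command line instead of applying YAML, the following sketch is equivalent; the credential values and the pull secret file path are placeholders, and the example-sno namespace matches the example above:
$ oc create secret generic example-node1-bmh-secret -n example-sno --from-literal=username=<bmc_username> --from-literal=password=<bmc_password>
$ oc create secret generic assisted-deployment-pull-secret -n example-sno --type=kubernetes.io/dockerconfigjson --from-file=.dockerconfigjson=<path_to_pull_secret.json>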
The installConfigOverrides and ignitionConfigOverride fields are expanded in the example for ease of readability. You can inspect the default set of extra-manifest MachineConfig CRs in out/argocd/extra-manifest . It is automatically applied to the cluster when it is installed. Optional: To provision additional install-time manifests on the provisioned cluster, create a directory in your Git repository, for example, sno-extra-manifest/ , and add your custom manifest CRs to this directory. If your SiteConfig.yaml refers to this directory in the extraManifestPath field, any CRs in this referenced directory are appended to the default set of extra manifests. Enabling the crun OCI container runtime For optimal cluster performance, enable crun for master and worker nodes in single-node OpenShift, single-node OpenShift with additional worker nodes, three-node OpenShift, and standard clusters. Enable crun in a ContainerRuntimeConfig CR as an additional Day 0 install-time manifest to avoid the cluster having to reboot. The enable-crun-master.yaml and enable-crun-worker.yaml CR files are in the out/source-crs/optional-extra-manifest/ folder that you can extract from the ztp-site-generate container. For more information, see "Customizing extra installation manifests in the GitOps ZTP pipeline". Add the SiteConfig CR to the kustomization.yaml file in the generators section, similar to the example shown in out/argocd/example/siteconfig/kustomization.yaml . Commit the SiteConfig CR and associated kustomization.yaml changes in your Git repository and push the changes. The ArgoCD pipeline detects the changes and begins the managed cluster deployment. Verification Verify that the custom roles and labels are applied after the node is deployed: USD oc describe node example-node.example.com Example output Name: example-node.example.com Roles: control-plane,example-label,master,worker Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux custom-label/parameter1=true kubernetes.io/arch=amd64 kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com kubernetes.io/os=linux node-role.kubernetes.io/control-plane= node-role.kubernetes.io/example-label= 1 node-role.kubernetes.io/master= node-role.kubernetes.io/worker= node.openshift.io/os_id=rhcos 1 The custom label is applied to the node. Additional resources Single-node OpenShift SiteConfig CR installation reference 4.5.1. Accelerated provisioning of GitOps ZTP Important Accelerated provisioning of GitOps ZTP is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can reduce the time taken for cluster installation by using accelerated provisioning of GitOps ZTP for single-node OpenShift. Accelerated ZTP speeds up installation by applying Day 2 manifests derived from policies at an earlier stage. Important Accelerated provisioning of GitOps ZTP is supported only when installing single-node OpenShift with Assisted Installer. Otherwise this installation method will fail. 4.5.1.1. 
Activating accelerated ZTP You can activate accelerated ZTP using the spec.clusters.clusterLabels.accelerated-ztp label, as in the following example: Example Accelerated ZTP SiteConfig CR. apiVersion: ran.openshift.io/v2 kind: SiteConfig metadata: name: "example-sno" namespace: "example-sno" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.10" sshPublicKey: "ssh-rsa AAAA..." clusters: # ... clusterLabels: common: true group-du-sno: "" sites : "example-sno" accelerated-ztp: full You can use accelerated-ztp: full to fully automate the accelerated process. GitOps ZTP updates the AgentClusterInstall resource with a reference to the accelerated GitOps ZTP ConfigMap , and includes resources extracted from policies by TALM, and accelerated ZTP job manifests. If you use accelerated-ztp: partial , GitOps ZTP does not include the accelerated job manifests, but includes policy-derived objects created during the cluster installation of the following kind types: PerformanceProfile.performance.openshift.io Tuned.tuned.openshift.io Namespace CatalogSource.operators.coreos.com ContainerRuntimeConfig.machineconfiguration.openshift.io This partial acceleration can reduce the number of reboots done by the node when applying resources of the kind Performance Profile , Tuned , and ContainerRuntimeConfig . TALM installs the Operator subscriptions derived from policies after RHACM completes the import of the cluster, following the same flow as standard GitOps ZTP. The benefits of accelerated ZTP increase with the scale of your deployment. Using accelerated-ztp: full gives more benefit on a large number of clusters. With a smaller number of clusters, the reduction in installation time is less significant. Full accelerated ZTP leaves behind a namespace and a completed job on the spoke that need to be manually removed. One benefit of using accelerated-ztp: partial is that you can override the functionality of the on-spoke job if something goes wrong with the stock implementation or if you require a custom functionality. 4.5.1.2. The accelerated ZTP process Accelerated ZTP uses an additional ConfigMap to create the resources derived from policies on the spoke cluster. The standard ConfigMap includes manifests that the GitOps ZTP workflow uses to customize cluster installs. TALM detects that the accelerated-ztp label is set and then creates a second ConfigMap . As part of accelerated ZTP, the SiteConfig generator adds a reference to that second ConfigMap using the naming convention <spoke-cluster-name>-aztp . After TALM creates that second ConfigMap , it finds all policies bound to the managed cluster and extracts the GitOps ZTP profile information. TALM adds the GitOps ZTP profile information to the <spoke-cluster-name>-aztp ConfigMap custom resource (CR) and applies the CR to the hub cluster API. 4.5.2. Configuring IPsec encryption for single-node OpenShift clusters using GitOps ZTP and SiteConfig resources You can enable IPsec encryption in managed single-node OpenShift clusters that you install using GitOps ZTP and Red Hat Advanced Cluster Management (RHACM). You can encrypt traffic between the managed cluster and IPsec endpoints external to the managed cluster. All network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode. Important You can also configure IPsec encryption for single-node OpenShift clusters with an additional worker node by following this procedure. 
It is recommended to use the MachineConfig custom resource (CR) to configure IPsec encryption for single-node OpenShift clusters and single-node OpenShift clusters with an additional worker node because of their low resource availability. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have configured RHACM and the hub cluster for generating the required installation and policy custom resources (CRs) for managed clusters. You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. You have installed the butane utility version 0.20.0 or later. You have a PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format. Procedure Extract the latest version of the ztp-site-generate container source and merge it with your repository where you manage your custom site configuration data. Configure optional-extra-manifest/ipsec/ipsec-endpoint-config.yaml with the required values that configure IPsec in the cluster. For example: interfaces: - name: hosta_conn type: ipsec libreswan: left: '%defaultroute' leftid: '%fromcert' leftmodecfgclient: false leftcert: left_server 1 leftrsasigkey: '%cert' right: <external_host> 2 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address> 3 ikev2: insist 4 type: tunnel 1 The value of this field must match with the name of the certificate used on the remote system. 2 Replace <external_host> with the external host IP address or DNS hostname. 3 Replace <external_address> with the IP subnet of the external host on the other side of the IPsec tunnel. 4 Use the IKEv2 VPN encryption protocol only. Do not use IKEv1, which is deprecated. Add the following certificates to the optional-extra-manifest/ipsec folder: left_server.p12 : The certificate bundle for the IPsec endpoints ca.pem : The certificate authority that you signed your certificates with The certificate files are required for the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in later steps. Open a shell prompt at the optional-extra-manifest/ipsec folder of the Git repository where you maintain your custom site configuration data. Run the optional-extra-manifest/ipsec/build.sh script to generate the required Butane and MachineConfig CRs files. If the PKCS#12 certificate is protected with a password, set the -W argument. Example output out └── argocd └── example └── optional-extra-manifest └── ipsec ├── 99-ipsec-master-endpoint-config.bu 1 ├── 99-ipsec-master-endpoint-config.yaml 2 ├── 99-ipsec-worker-endpoint-config.bu 3 ├── 99-ipsec-worker-endpoint-config.yaml 4 ├── build.sh ├── ca.pem 5 ├── left_server.p12 6 ├── enable-ipsec.yaml ├── ipsec-endpoint-config.yml └── README.md 1 2 3 4 The ipsec/build.sh script generates the Butane and endpoint configuration CRs. 5 6 You provide ca.pem and left_server.p12 certificate files that are relevant to your network. Create a custom-manifest/ folder in the repository where you manage your custom site configuration data. Add the enable-ipsec.yaml and 99-ipsec-* YAML files to the directory. 
For example: siteconfig ├── site1-sno-du.yaml ├── extra-manifest/ └── custom-manifest ├── enable-ipsec.yaml ├── 99-ipsec-worker-endpoint-config.yaml └── 99-ipsec-master-endpoint-config.yaml In your SiteConfig CR, add the custom-manifest/ directory to the extraManifests.searchPaths field. For example: clusters: - clusterName: "site1-sno-du" networkType: "OVNKubernetes" extraManifests: searchPaths: - extra-manifest/ - custom-manifest/ Commit the SiteConfig CR changes and updated files in your Git repository and push the changes to provision the managed cluster and configure IPsec encryption. The Argo CD pipeline detects the changes and begins the managed cluster deployment. During cluster provisioning, the GitOps ZTP pipeline appends the CRs in the custom-manifest/ directory to the default set of extra manifests stored in the extra-manifest/ directory. Verification For information about verifying the IPsec encryption, see "Verifying the IPsec encryption". Additional resources Verifying the IPsec encryption Configuring IPsec encryption Encryption protocol and IPsec mode Installing managed clusters with RHACM and SiteConfig resources 4.5.3. Configuring IPsec encryption for multi-node clusters using GitOps ZTP and SiteConfig resources You can enable IPsec encryption in managed multi-node clusters that you install using GitOps ZTP and Red Hat Advanced Cluster Management (RHACM). You can encrypt traffic between the managed cluster and IPsec endpoints external to the managed cluster. All network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have configured RHACM and the hub cluster for generating the required installation and policy custom resources (CRs) for managed clusters. You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. You have installed the butane utility version 0.20.0 or later. You have a PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format. You have installed the NMState Operator. Procedure Extract the latest version of the ztp-site-generate container source and merge it with your repository where you manage your custom site configuration data. Configure the optional-extra-manifest/ipsec/ipsec-config-policy.yaml file with the required values that configure IPsec in the cluster. 
ConfigurationPolicy object for creating an IPsec configuration apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-config spec: namespaceSelector: include: ["default"] exclude: [] matchExpressions: [] matchLabels: {} remediationAction: inform severity: low evaluationInterval: compliant: noncompliant: object-templates-raw: | {{- range (lookup "v1" "Node" "" "").items }} - complianceType: musthave objectDefinition: kind: NodeNetworkConfigurationPolicy apiVersion: nmstate.io/v1 metadata: name: {{ .metadata.name }}-ipsec-policy spec: nodeSelector: kubernetes.io/hostname: {{ .metadata.name }} desiredState: interfaces: - name: hosta_conn type: ipsec libreswan: left: '%defaultroute' leftid: '%fromcert' leftmodecfgclient: false leftcert: left_server 1 leftrsasigkey: '%cert' right: <external_host> 2 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address> 3 ikev2: insist 4 type: tunnel 1 The value of this field must match with the name of the certificate used on the remote system. 2 Replace <external_host> with the external host IP address or DNS hostname. 3 Replace <external_address> with the IP subnet of the external host on the other side of the IPsec tunnel. 4 Use the IKEv2 VPN encryption protocol only. Do not use IKEv1, which is deprecated. Add the following certificates to the optional-extra-manifest/ipsec folder: left_server.p12 : The certificate bundle for the IPsec endpoints ca.pem : The certificate authority that you signed your certificates with The certificate files are required for the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in later steps. Open a shell prompt at the optional-extra-manifest/ipsec folder of the Git repository where you maintain your custom site configuration data. Run the optional-extra-manifest/ipsec/import-certs.sh script to generate the required Butane and MachineConfig CRs to import the external certs. If the PKCS#12 certificate is protected with a password, set the -W argument. Example output out └── argocd └── example └── optional-extra-manifest └── ipsec ├── 99-ipsec-master-import-certs.bu 1 ├── 99-ipsec-master-import-certs.yaml 2 ├── 99-ipsec-worker-import-certs.bu 3 ├── 99-ipsec-worker-import-certs.yaml 4 ├── import-certs.sh ├── ca.pem 5 ├── left_server.p12 6 ├── enable-ipsec.yaml ├── ipsec-config-policy.yaml └── README.md 1 2 3 4 The ipsec/import-certs.sh script generates the Butane and endpoint configuration CRs. 5 6 Add the ca.pem and left_server.p12 certificate files that are relevant to your network. Create a custom-manifest/ folder in the repository where you manage your custom site configuration data and add the enable-ipsec.yaml and 99-ipsec-* YAML files to the directory. Example siteconfig directory siteconfig ├── site1-mno-du.yaml ├── extra-manifest/ └── custom-manifest ├── enable-ipsec.yaml ├── 99-ipsec-master-import-certs.yaml └── 99-ipsec-worker-import-certs.yaml In your SiteConfig CR, add the custom-manifest/ directory to the extraManifests.searchPaths field, as in the following example: clusters: - clusterName: "site1-mno-du" networkType: "OVNKubernetes" extraManifests: searchPaths: - extra-manifest/ - custom-manifest/ Include the ipsec-config-policy.yaml config policy file in the source-crs directory in GitOps and reference the file in one of the PolicyGenerator CRs. 
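For illustration only, the following is a minimal sketch of a PolicyGenerator CR that pulls in the policy file; the generator name, policy name, and namespace are placeholders, and in practice you would add the manifest entry to one of your existing PolicyGenerator CRs with its usual placement and binding settings:
apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: group-du-ipsec
policyDefaults:
  namespace: ztp-group
  remediationAction: inform
policies:
  - name: group-du-ipsec-config-policy
    manifests:
      - path: source-crs/ipsec-config-policy.yaml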
Commit the SiteConfig CR changes and updated files in your Git repository and push the changes to provision the managed cluster and configure IPsec encryption. The Argo CD pipeline detects the changes and begins the managed cluster deployment. During cluster provisioning, the GitOps ZTP pipeline appends the CRs in the custom-manifest/ directory to the default set of extra manifests stored in the extra-manifest/ directory. Verification For information about verifying the IPsec encryption, see "Verifying the IPsec encryption". Additional resources Verifying the IPsec encryption Configuring IPsec encryption Encryption protocol and IPsec mode Installing managed clusters with RHACM and SiteConfig resources 4.5.4. Verifying the IPsec encryption You can verify that the IPsec encryption is successfully applied in a managed OpenShift Container Platform cluster. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have configured the IPsec encryption. Procedure Start a debug pod for the managed cluster by running the following command: USD oc debug node/<node_name> Check that the IPsec policy is applied in the cluster node by running the following command: sh-5.1# ip xfrm policy Example output src 172.16.123.0/24 dst 10.1.232.10/32 dir out priority 1757377 ptype main tmpl src 10.1.28.190 dst 10.1.232.10 proto esp reqid 16393 mode tunnel src 10.1.232.10/32 dst 172.16.123.0/24 dir fwd priority 1757377 ptype main tmpl src 10.1.232.10 dst 10.1.28.190 proto esp reqid 16393 mode tunnel src 10.1.232.10/32 dst 172.16.123.0/24 dir in priority 1757377 ptype main tmpl src 10.1.232.10 dst 10.1.28.190 proto esp reqid 16393 mode tunnel Check that the IPsec tunnel is up and connected by running the following command: sh-5.1# ip xfrm state Example output src 10.1.232.10 dst 10.1.28.190 proto esp spi 0xa62a05aa reqid 16393 mode tunnel replay-window 0 flag af-unspec esn auth-trunc hmac(sha1) 0x8c59f680c8ea1e667b665d8424e2ab749cec12dc 96 enc cbc(aes) 0x2818a489fe84929c8ab72907e9ce2f0eac6f16f2258bd22240f4087e0326badb anti-replay esn context: seq-hi 0x0, seq 0x0, oseq-hi 0x0, oseq 0x0 replay_window 128, bitmap-length 4 00000000 00000000 00000000 00000000 src 10.1.28.190 dst 10.1.232.10 proto esp spi 0x8e96e9f9 reqid 16393 mode tunnel replay-window 0 flag af-unspec esn auth-trunc hmac(sha1) 0xd960ddc0a6baaccb343396a51295e08cfd8aaddd 96 enc cbc(aes) 0x0273c02e05b4216d5e652de3fc9b3528fea94648bc2b88fa01139fdf0beb27ab anti-replay esn context: seq-hi 0x0, seq 0x0, oseq-hi 0x0, oseq 0x0 replay_window 128, bitmap-length 4 00000000 00000000 00000000 00000000 Ping a known IP in the external host subnet by running the following command: For example, ping an IP address in the rightsubnet range that you set in the ipsec/ipsec-endpoint-config.yaml file: sh-5.1# ping 172.16.110.8 Example output PING 172.16.110.8 (172.16.110.8) 56(84) bytes of data. 64 bytes from 172.16.110.8: icmp_seq=1 ttl=64 time=153 ms 64 bytes from 172.16.110.8: icmp_seq=2 ttl=64 time=155 ms 4.5.5. Single-node OpenShift SiteConfig CR installation reference Table 4.1. SiteConfig CR installation options for single-node OpenShift clusters SiteConfig CR field Description spec.cpuPartitioningMode Configure workload partitioning by setting the value for cpuPartitioningMode to AllNodes . To complete the configuration, specify the isolated and reserved CPUs in the PerformanceProfile CR. 
Note Configuring workload partitioning by using the cpuPartitioningMode field in the SiteConfig CR is a Technology Preview feature in OpenShift Container Platform 4.13. metadata.name Set name to assisted-deployment-pull-secret and create the assisted-deployment-pull-secret CR in the same namespace as the SiteConfig CR. spec.clusterImageSetNameRef Configure the image set available on the hub cluster for all the clusters in the site. To see the list of supported versions on your hub cluster, run oc get clusterimagesets . installConfigOverrides Set the installConfigOverrides field to enable or disable optional components prior to cluster installation. Important Use the reference configuration as specified in the example SiteConfig CR. Adding additional components back into the system might require additional reserved CPU capacity. spec.clusters.clusterImageSetNameRef Specifies the cluster image set used to deploy an individual cluster. If defined, it overrides the spec.clusterImageSetNameRef at site level. spec.clusters.clusterLabels Configure cluster labels to correspond to the binding rules in the PolicyGenerator or PolicyGentemplate CRs that you define. PolicyGenerator CRs use the policyDefaults.placement.labelSelector field. PolicyGentemplate CRs use the spec.bindingRules field. For example, acmpolicygenerator/acm-common-ranGen.yaml applies to all clusters with common: true set, acmpolicygenerator/acm-group-du-sno-ranGen.yaml applies to all clusters with group-du-sno: "" set. spec.clusters.crTemplates.KlusterletAddonConfig Optional. Set KlusterletAddonConfig to KlusterletAddonConfigOverride.yaml to override the default KlusterletAddonConfig CR that is created for the cluster. spec.clusters.diskEncryption Configure this field to enable disk encryption with Trusted Platform Module (TPM) and Platform Configuration Registers (PCRs) protection. For more information, see "About disk encryption with TPM and PCR protection". Note Configuring disk encryption by using the diskEncryption field in the SiteConfig CR is a Technology Preview feature in OpenShift Container Platform 4.17. spec.clusters.diskEncryption.type Set the disk encryption type to tpm2 . spec.clusters.diskEncryption.tpm2 Configure the Platform Configuration Registers (PCRs) protection for disk encryption. spec.clusters.diskEncryption.tpm2.pcrList Configure the list of Platform Configuration Registers (PCRs) to be used for disk encryption. You must use PCR registers 1 and 7. spec.clusters.nodes.hostName For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with role: master and two or more hosts defined with role: worker . spec.clusters.nodes.nodeLabels Specify custom roles for your nodes in your managed clusters. These additional roles are not used by any OpenShift Container Platform components, only by the user. When you add a custom role, it can be associated with a custom machine config pool that references a specific configuration for that role. Adding custom labels or roles during installation makes the deployment process more effective and prevents the need for additional reboots after the installation is complete. spec.clusters.nodes.automatedCleaningMode Optional. Uncomment and set the value to metadata to enable the removal of the disk's partitioning table only, without fully wiping the disk. The default value is disabled . spec.clusters.nodes.bmcAddress BMC address that you use to access the host. Applies to all cluster types.
GitOps ZTP supports iPXE and virtual media booting by using Redfish or IPMI protocols. To use iPXE booting, you must use RHACM 2.8 or later. For more information about BMC addressing, see the "Additional resources" section. Note In far edge Telco use cases, only virtual media is supported for use with GitOps ZTP. spec.clusters.nodes.bmcCredentialsName Configure the bmh-secret CR that you separately create with the host BMC credentials. When creating the bmh-secret CR, use the same namespace as the SiteConfig CR that provisions the host. spec.clusters.nodes.bootMode Set the boot mode for the host to UEFI . The default value is UEFI . Use UEFISecureBoot to enable secure boot on the host. spec.clusters.nodes.rootDeviceHints Specifies the device for deployment. Identifiers that are stable across reboots are recommended. For example, wwn: <disk_wwn> or deviceName: /dev/disk/by-path/<device_path> . <by-path> values are preferred. For a detailed list of stable identifiers, see the "About root device hints" section. spec.clusters.nodes.ignitionConfigOverride Optional. Use this field to assign partitions for persistent storage. Adjust disk ID and size to the specific hardware. spec.clusters.nodes.nodeNetwork Configure the network settings for the node. spec.clusters.nodes.nodeNetwork.config.interfaces.ipv6 Configure the IPv6 address for the host. For single-node OpenShift clusters with static IP addresses, the node-specific API and Ingress IPs should be the same. Additional resources About disk encryption with TPM and PCR protection . Customizing extra installation manifests in the GitOps ZTP pipeline Preparing the GitOps ZTP site configuration repository Configuring the hub cluster with ArgoCD Signalling GitOps ZTP cluster deployment completion with validator inform policies Creating the managed bare-metal host secrets BMC addressing About root device hints 4.6. Managing host firmware settings with GitOps ZTP Hosts require the correct firmware configuration to ensure high performance and optimal efficiency. You can deploy custom host firmware configurations for managed clusters with GitOps ZTP. Tune hosts with specific hardware profiles in your lab and ensure they are optimized for your requirements. When you have completed host tuning to your satisfaction, you extract the host profile and save it in your GitOps ZTP repository. Then, you use the host profile to configure firmware settings in the managed cluster hosts that you deploy with GitOps ZTP. You specify the required hardware profiles in SiteConfig custom resources (CRs) that you use to deploy the managed clusters. The GitOps ZTP pipeline generates the required HostFirmwareSettings ( HFS ) and BareMetalHost ( BMH ) CRs that are applied to the hub cluster. Use the following best practices to manage your host firmware profiles. Identify critical firmware settings with hardware vendors Work with hardware vendors to identify and document critical host firmware settings required for optimal performance and compatibility with the deployed host platform. Use common firmware configurations across similar hardware platforms
Use common firmware configurations across similar hardware platforms Where possible, use a standardized host firmware configuration across similar hardware platforms to reduce complexity and potential errors during deployment. Test firmware configurations in a lab environment Test host firmware configurations in a controlled lab environment before deploying in production to ensure that settings are compatible with hardware, firmware, and software. Manage firmware profiles in source control Manage host firmware profiles in Git repositories to track changes, ensure consistency, and facilitate collaboration with vendors. Additional resources Recommended firmware configuration for vDU cluster hosts 4.6.1. Retrieving the host firmware schema for a managed cluster You can discover the host firmware schema for managed clusters. The host firmware schema for bare-metal hosts is populated with information that the Ironic API returns. The API returns information about host firmware interfaces, including firmware setting types, allowable values, ranges, and flags. Prerequisites You have installed the OpenShift CLI ( oc ). You have installed Red Hat Advanced Cluster Management (RHACM) and logged in to the hub cluster as a user with cluster-admin privileges. You have provisioned a cluster that is managed by RHACM. Procedure Discover the host firmware schema for the managed cluster. Run the following command: USD oc get firmwareschema -n <managed_cluster_namespace> -o yaml Example output apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: FirmwareSchema metadata: creationTimestamp: "2024-09-11T10:29:43Z" generation: 1 name: schema-40562318 namespace: compute-1 ownerReferences: - apiVersion: metal3.io/v1alpha1 kind: HostFirmwareSettings name: compute-1.example.com uid: 65d0e89b-1cd8-4317-966d-2fbbbe033fe9 resourceVersion: "280057624" uid: 511ad25d-f1c9-457b-9a96-776605c7b887 spec: schema: AccessControlService: allowable_values: - Enabled - Disabled attribute_type: Enumeration read_only: false # ... 4.6.2. Retrieving the host firmware settings for a managed cluster You can retrieve the host firmware settings for managed clusters. This is useful when you have deployed changes to the host firmware and you want to monitor the changes and ensure that they are applied successfully. Prerequisites You have installed the OpenShift CLI ( oc ). You have installed Red Hat Advanced Cluster Management (RHACM) and logged in to the hub cluster as a user with cluster-admin privileges. You have provisioned a cluster that is managed by RHACM. Procedure Retrieve the host firmware settings for the managed cluster. 
Run the following command: USD oc get hostfirmwaresettings -n <cluster_namespace> <node_name> -o yaml Example output apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: HostFirmwareSettings metadata: creationTimestamp: "2024-09-11T10:29:43Z" generation: 1 name: compute-1.example.com namespace: kni-qe-24 ownerReferences: - apiVersion: metal3.io/v1alpha1 blockOwnerDeletion: true controller: true kind: BareMetalHost name: compute-1.example.com uid: 0baddbb7-bb34-4224-8427-3d01d91c9287 resourceVersion: "280057626" uid: 65d0e89b-1cd8-4317-966d-2fbbbe033fe9 spec: settings: {} status: conditions: - lastTransitionTime: "2024-09-11T10:29:43Z" message: "" observedGeneration: 1 reason: Success status: "True" 1 type: ChangeDetected - lastTransitionTime: "2024-09-11T10:29:43Z" message: Invalid BIOS setting observedGeneration: 1 reason: ConfigurationError status: "False" 2 type: Valid lastUpdated: "2024-09-11T10:29:43Z" schema: name: schema-40562318 namespace: compute-1 settings: 3 AccessControlService: Enabled AcpiHpet: Enabled AcpiRootBridgePxm: Enabled # ... 1 Indicates that a change in the host firmware settings has been detected 2 Indicates that the host has an invalid firmware setting 3 The complete list of configured host firmware settings is returned under the status.settings field Optional: Check the status of the HostFirmwareSettings ( hfs ) custom resource in the cluster: USD oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type=="ChangeDetected")].status}' Example output True Optional: Check for invalid firmware settings in the cluster host. Run the following command: USD oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type=="Valid")].status}' Example output False 4.6.3. Deploying user-defined firmware to cluster hosts with GitOps ZTP You can deploy user-defined firmware settings to cluster hosts by configuring the SiteConfig custom resource (CR) to include a hardware profile that you want to apply during cluster host provisioning. You can configure hardware profiles to apply to hosts in the following scenarios: All hosts site-wide Only cluster hosts that meet certain criteria Individual cluster hosts Important You can configure host hardware profiles to be applied in a hierarchy. Cluster-level settings override site-wide settings. Node level profiles override cluster and site-wide settings. Prerequisites You have installed the OpenShift CLI ( oc ). You have installed Red Hat Advanced Cluster Management (RHACM) and logged in to the hub cluster as a user with cluster-admin privileges. You have provisioned a cluster that is managed by RHACM. You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application. Procedure Create the host firmware profile that contain the firmware settings you want to apply. For example, create the following YAML file: host-firmware.profile BootMode: Uefi LogicalProc: Enabled ProcVirtualization: Enabled Save the hardware profile YAML file relative to the kustomization.yaml file that you use to define how to provision the cluster, for example: example-ztp/install └── site-install ├── siteconfig-example.yaml ├── kustomization.yaml └── host-firmware.profile Edit the SiteConfig CR to include the firmware profile that you want to apply in the cluster. 
For example: apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "site-plan-cluster" namespace: "example-cluster-namespace" spec: baseDomain: "example.com" # ... biosConfigRef: filePath: "./host-firmware.profile" 1 1 Applies the hardware profile to all cluster hosts site-wide Note Where possible, use a single SiteConfig CR per cluster. Optional. To apply a hardware profile to hosts in a specific cluster, update clusters.biosConfigRef.filePath with the hardware profile that you want to apply. For example: clusters: - clusterName: "cluster-1" # ... biosConfigRef: filePath: "./host-firmware.profile" 1 1 Applies to all hosts in the cluster-1 cluster Optional. To apply a hardware profile to a specific host in the cluster, update clusters.nodes.biosConfigRef.filePath with the hardware profile that you want to apply. For example: clusters: - clusterName: "cluster-1" # ... nodes: - hostName: "compute-1.example.com" # ... bootMode: "UEFI" biosConfigRef: filePath: "./host-firmware.profile" 1 1 Applies the firmware profile to the compute-1.example.com host in the cluster Commit the SiteConfig CR and associated kustomization.yaml changes in your Git repository and push the changes. The ArgoCD pipeline detects the changes and begins the managed cluster deployment. Note Cluster deployment proceeds even if an invalid firmware setting is detected. To apply a correction using GitOps ZTP, re-deploy the cluster with the corrected hardware profile. Verification Check that the firmware settings have been applied in the managed cluster host. For example, run the following command: USD oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type=="Valid")].status}' Example output True 4.7. Monitoring managed cluster installation progress The ArgoCD pipeline uses the SiteConfig CR to generate the cluster configuration CRs and syncs it with the hub cluster. You can monitor the progress of the synchronization in the ArgoCD dashboard. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure When the synchronization is complete, the installation generally proceeds as follows: The Assisted Service Operator installs OpenShift Container Platform on the cluster. You can monitor the progress of cluster installation from the RHACM dashboard or from the command line by running the following commands: Export the cluster name: USD export CLUSTER=<clusterName> Query the AgentClusterInstall CR for the managed cluster: USD oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type=="Completed")]}' | jq Get the installation events for the cluster: USD curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]' 4.8. Troubleshooting GitOps ZTP by validating the installation CRs The ArgoCD pipeline uses the SiteConfig and PolicyGenerator or PolicyGentemplate custom resources (CRs) to generate the cluster configuration CRs and Red Hat Advanced Cluster Management (RHACM) policies. Use the following steps to troubleshoot issues that might occur during this process. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. 
Procedure Check that the installation CRs were created by using the following command: USD oc get AgentClusterInstall -n <cluster_name> If no object is returned, use the following steps to troubleshoot the ArgoCD pipeline flow from SiteConfig files to the installation CRs. Verify that the ManagedCluster CR was generated using the SiteConfig CR on the hub cluster: USD oc get managedcluster If the ManagedCluster is missing, check if the clusters application failed to synchronize the files from the Git repository to the hub cluster: USD oc get applications.argoproj.io -n openshift-gitops clusters -o yaml To identify error logs for the managed cluster, inspect the status.operationState.syncResult.resources field. For example, if an invalid value is assigned to the extraManifestPath in the SiteConfig CR, an error similar to the following is generated: syncResult: resources: - group: ran.openshift.io kind: SiteConfig message: The Kubernetes API could not find ran.openshift.io/SiteConfig for requested resource spoke-sno/spoke-sno. Make sure the "SiteConfig" CRD is installed on the destination cluster To see a more detailed SiteConfig error, complete the following steps: In the Argo CD dashboard, click the SiteConfig resource that Argo CD is trying to sync. Check the DESIRED MANIFEST tab to find the siteConfigError field. siteConfigError: >- Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-1081291903: stat sno-extra-manifest: no such file or directory Check the Status.Sync field. If there are log errors, the Status.Sync field could indicate an Unknown error: Status: Sync: Compared To: Destination: Namespace: clusters-sub Server: https://kubernetes.default.svc Source: Path: sites-config Repo URL: https://git.com/ran-sites/siteconfigs/.git Target Revision: master Status: Unknown 4.9. Troubleshooting GitOps ZTP virtual media booting on SuperMicro servers SuperMicro X11 servers do not support virtual media installations when the image is served using the https protocol. As a result, single-node OpenShift deployments for this environment fail to boot on the target node. To avoid this issue, log in to the hub cluster and disable Transport Layer Security (TLS) in the Provisioning resource. This ensures the image is not served with TLS even though the image address uses the https scheme. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Disable TLS in the Provisioning resource by running the following command: USD oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"disableVirtualMediaTLS": true}}' Continue the steps to deploy your single-node OpenShift cluster. 4.10. Removing a managed cluster site from the GitOps ZTP pipeline You can remove a managed site and the associated installation and configuration policy CRs from the GitOps Zero Touch Provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Remove a site and the associated CRs by removing the associated SiteConfig and PolicyGenerator or PolicyGentemplate files from the kustomization.yaml file. Add the following syncOptions field to your SiteConfig application. kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background When you run the GitOps ZTP pipeline again, the generated CRs are removed. 
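For illustration only, removing a site's files from the kustomization.yaml file (the first step above) might look like the following minimal sketch. The repository layout, the site file names, and the generators key are assumptions based on a typical GitOps ZTP site repository, not values taken from this procedure:
# siteconfig/kustomization.yaml (hypothetical layout and site names)
generators:
  - site1-sno-du.yaml
# The entry for the removed site, for example site2-sno-du.yaml, is deleted
# from this list, and the corresponding SiteConfig and policy files are
# removed from the repository if the removal is permanent.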
Optional: If you want to permanently remove a site, you should also remove the SiteConfig and site-specific PolicyGenerator or PolicyGentemplate files from the Git repository. Optional: If you want to remove a site temporarily, for example when redeploying a site, you can leave the SiteConfig and site-specific PolicyGenerator or PolicyGentemplate CRs in the Git repository. Additional resources For information about removing a cluster, see Removing a cluster from management . 4.11. Removing obsolete content from the GitOps ZTP pipeline If a change to the PolicyGenerator or PolicyGentemplate configuration results in obsolete policies, for example, if you rename policies, use the following procedure to remove the obsolete policies. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Remove the affected PolicyGenerator or PolicyGentemplate files from the Git repository, commit and push to the remote repository. Wait for the changes to synchronize through the application and the affected policies to be removed from the hub cluster. Add the updated PolicyGenerator or PolicyGentemplate files back to the Git repository, and then commit and push to the remote repository. Note Removing GitOps Zero Touch Provisioning (ZTP) policies from the Git repository, and as a result also removing them from the hub cluster, does not affect the configuration of the managed cluster. The policy and CRs managed by that policy remain in place on the managed cluster. Optional: As an alternative, after making changes to PolicyGenerator or PolicyGentemplate CRs that result in obsolete policies, you can remove these policies from the hub cluster manually. You can delete policies from the RHACM console using the Governance tab or by running the following command: USD oc delete policy -n <namespace> <policy_name> 4.12. Tearing down the GitOps ZTP pipeline You can remove the ArgoCD pipeline and all generated GitOps Zero Touch Provisioning (ZTP) artifacts. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Detach all clusters from Red Hat Advanced Cluster Management (RHACM) on the hub cluster. Delete the kustomization.yaml file in the deployment directory using the following command: USD oc delete -k out/argocd/deployment Commit and push your changes to the site repository.
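For reference, the final commit-and-push step is a standard Git workflow. A minimal sketch, assuming the site repository is already cloned locally and uses a branch named main (both assumptions, not part of the documented procedure):
# Run from a local clone of the site repository
git add -A                                        # stage the removed ZTP artifacts
git commit -m "Tear down GitOps ZTP pipeline"
git push origin main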
[ "grep -r \"ztp-deploy-wave\" out/source-crs", "apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: annotations: argocd.argoproj.io/sync-wave: \"1\" name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" spec: clusterRef: name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 sshAuthorizedKey: \"{{ .Site.SshPublicKey }}\" proxy: \"{{ .Cluster.ProxySettings }}\" pullSecretRef: name: \"{{ .Site.PullSecretRef.Name }}\" ignitionConfigOverride: \"{{ .Cluster.IgnitionConfigOverride }}\" nmStateConfigLabelSelector: matchLabels: nmstate-label: \"{{ .Cluster.ClusterName }}\" additionalNTPSources: \"{{ .Cluster.AdditionalNTPSources }}\"", "~/example-ztp/install └── site-install ├── siteconfig-example.yaml ├── InfraEnv-example.yaml", "clusters: crTemplates: InfraEnv: \"InfraEnv-example.yaml\"", "ssh -i /path/to/privatekey core@<host_name>", "cat /proc/cmdline", "export CLUSTERNS=example-sno", "oc create namespace USDCLUSTERNS", "example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.16\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all the optional set of components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier # - Ingress is needed for 4.16 and later installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"NodeTuning\", \"OperatorLifecycleManager\", \"Ingress\" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. 
# extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: \"latest\" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: \"\"' group-du-sno: \"\" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: \"example-sno\"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites: \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: \"example-hw.profile\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" # Use UEFISecureBoot to enable secure boot. bootMode: \"UEFISecureBoot\" rootDeviceHints: deviceName: \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. 
See DiskPartitionContainer.md for more details ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "oc describe node example-node.example.com", "Name: example-node.example.com Roles: control-plane,example-label,master,worker Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux custom-label/parameter1=true kubernetes.io/arch=amd64 kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com kubernetes.io/os=linux node-role.kubernetes.io/control-plane= node-role.kubernetes.io/example-label= 1 node-role.kubernetes.io/master= node-role.kubernetes.io/worker= node.openshift.io/os_id=rhcos", "apiVersion: ran.openshift.io/v2 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.10\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: # clusterLabels: common: true group-du-sno: \"\" sites : \"example-sno\" accelerated-ztp: full", "interfaces: - name: hosta_conn type: ipsec libreswan: left: '%defaultroute' leftid: '%fromcert' leftmodecfgclient: false leftcert: left_server 1 leftrsasigkey: '%cert' right: <external_host> 2 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address> 3 ikev2: insist 4 type: tunnel", "out └── argocd └── example └── optional-extra-manifest └── ipsec ├── 99-ipsec-master-endpoint-config.bu 1 ├── 99-ipsec-master-endpoint-config.yaml 2 ├── 99-ipsec-worker-endpoint-config.bu 3 ├── 99-ipsec-worker-endpoint-config.yaml 4 ├── build.sh ├── ca.pem 5 ├── left_server.p12 6 ├── enable-ipsec.yaml ├── ipsec-endpoint-config.yml └── README.md", "siteconfig ├── site1-sno-du.yaml ├── extra-manifest/ └── custom-manifest ├── enable-ipsec.yaml ├── 99-ipsec-worker-endpoint-config.yaml └── 99-ipsec-master-endpoint-config.yaml", "clusters: - clusterName: \"site1-sno-du\" networkType: \"OVNKubernetes\" extraManifests: searchPaths: - extra-manifest/ - custom-manifest/", "apiVersion: policy.open-cluster-management.io/v1 kind: 
ConfigurationPolicy metadata: name: policy-config spec: namespaceSelector: include: [\"default\"] exclude: [] matchExpressions: [] matchLabels: {} remediationAction: inform severity: low evaluationInterval: compliant: noncompliant: object-templates-raw: | {{- range (lookup \"v1\" \"Node\" \"\" \"\").items }} - complianceType: musthave objectDefinition: kind: NodeNetworkConfigurationPolicy apiVersion: nmstate.io/v1 metadata: name: {{ .metadata.name }}-ipsec-policy spec: nodeSelector: kubernetes.io/hostname: {{ .metadata.name }} desiredState: interfaces: - name: hosta_conn type: ipsec libreswan: left: '%defaultroute' leftid: '%fromcert' leftmodecfgclient: false leftcert: left_server 1 leftrsasigkey: '%cert' right: <external_host> 2 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address> 3 ikev2: insist 4 type: tunnel", "out └── argocd └── example └── optional-extra-manifest └── ipsec ├── 99-ipsec-master-import-certs.bu 1 ├── 99-ipsec-master-import-certs.yaml 2 ├── 99-ipsec-worker-import-certs.bu 3 ├── 99-ipsec-worker-import-certs.yaml 4 ├── import-certs.sh ├── ca.pem 5 ├── left_server.p12 6 ├── enable-ipsec.yaml ├── ipsec-config-policy.yaml └── README.md", "siteconfig ├── site1-mno-du.yaml ├── extra-manifest/ └── custom-manifest ├── enable-ipsec.yaml ├── 99-ipsec-master-import-certs.yaml └── 99-ipsec-worker-import-certs.yaml", "clusters: - clusterName: \"site1-mno-du\" networkType: \"OVNKubernetes\" extraManifests: searchPaths: - extra-manifest/ - custom-manifest/", "oc debug node/<node_name>", "sh-5.1# ip xfrm policy", "src 172.16.123.0/24 dst 10.1.232.10/32 dir out priority 1757377 ptype main tmpl src 10.1.28.190 dst 10.1.232.10 proto esp reqid 16393 mode tunnel src 10.1.232.10/32 dst 172.16.123.0/24 dir fwd priority 1757377 ptype main tmpl src 10.1.232.10 dst 10.1.28.190 proto esp reqid 16393 mode tunnel src 10.1.232.10/32 dst 172.16.123.0/24 dir in priority 1757377 ptype main tmpl src 10.1.232.10 dst 10.1.28.190 proto esp reqid 16393 mode tunnel", "sh-5.1# ip xfrm state", "src 10.1.232.10 dst 10.1.28.190 proto esp spi 0xa62a05aa reqid 16393 mode tunnel replay-window 0 flag af-unspec esn auth-trunc hmac(sha1) 0x8c59f680c8ea1e667b665d8424e2ab749cec12dc 96 enc cbc(aes) 0x2818a489fe84929c8ab72907e9ce2f0eac6f16f2258bd22240f4087e0326badb anti-replay esn context: seq-hi 0x0, seq 0x0, oseq-hi 0x0, oseq 0x0 replay_window 128, bitmap-length 4 00000000 00000000 00000000 00000000 src 10.1.28.190 dst 10.1.232.10 proto esp spi 0x8e96e9f9 reqid 16393 mode tunnel replay-window 0 flag af-unspec esn auth-trunc hmac(sha1) 0xd960ddc0a6baaccb343396a51295e08cfd8aaddd 96 enc cbc(aes) 0x0273c02e05b4216d5e652de3fc9b3528fea94648bc2b88fa01139fdf0beb27ab anti-replay esn context: seq-hi 0x0, seq 0x0, oseq-hi 0x0, oseq 0x0 replay_window 128, bitmap-length 4 00000000 00000000 00000000 00000000", "sh-5.1# ping 172.16.110.8", "PING 172.16.110.8 (172.16.110.8) 56(84) bytes of data. 
64 bytes from 172.16.110.8: icmp_seq=1 ttl=64 time=153 ms 64 bytes from 172.16.110.8: icmp_seq=2 ttl=64 time=155 ms", "oc get firmwareschema -n <managed_cluster_namespace> -o yaml", "apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: FirmwareSchema metadata: creationTimestamp: \"2024-09-11T10:29:43Z\" generation: 1 name: schema-40562318 namespace: compute-1 ownerReferences: - apiVersion: metal3.io/v1alpha1 kind: HostFirmwareSettings name: compute-1.example.com uid: 65d0e89b-1cd8-4317-966d-2fbbbe033fe9 resourceVersion: \"280057624\" uid: 511ad25d-f1c9-457b-9a96-776605c7b887 spec: schema: AccessControlService: allowable_values: - Enabled - Disabled attribute_type: Enumeration read_only: false #", "oc get hostfirmwaresettings -n <cluster_namespace> <node_name> -o yaml", "apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: HostFirmwareSettings metadata: creationTimestamp: \"2024-09-11T10:29:43Z\" generation: 1 name: compute-1.example.com namespace: kni-qe-24 ownerReferences: - apiVersion: metal3.io/v1alpha1 blockOwnerDeletion: true controller: true kind: BareMetalHost name: compute-1.example.com uid: 0baddbb7-bb34-4224-8427-3d01d91c9287 resourceVersion: \"280057626\" uid: 65d0e89b-1cd8-4317-966d-2fbbbe033fe9 spec: settings: {} status: conditions: - lastTransitionTime: \"2024-09-11T10:29:43Z\" message: \"\" observedGeneration: 1 reason: Success status: \"True\" 1 type: ChangeDetected - lastTransitionTime: \"2024-09-11T10:29:43Z\" message: Invalid BIOS setting observedGeneration: 1 reason: ConfigurationError status: \"False\" 2 type: Valid lastUpdated: \"2024-09-11T10:29:43Z\" schema: name: schema-40562318 namespace: compute-1 settings: 3 AccessControlService: Enabled AcpiHpet: Enabled AcpiRootBridgePxm: Enabled #", "oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type==\"ChangeDetected\")].status}'", "True", "oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type==\"Valid\")].status}'", "False", "BootMode: Uefi LogicalProc: Enabled ProcVirtualization: Enabled", "example-ztp/install └── site-install ├── siteconfig-example.yaml ├── kustomization.yaml └── host-firmware.profile", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"site-plan-cluster\" namespace: \"example-cluster-namespace\" spec: baseDomain: \"example.com\" # biosConfigRef: filePath: \"./host-firmware.profile\" 1", "clusters: - clusterName: \"cluster-1\" # biosConfigRef: filePath: \"./host-firmware.profile\" 1", "clusters: - clusterName: \"cluster-1\" # nodes: - hostName: \"compute-1.example.com\" # bootMode: \"UEFI\" biosConfigRef: filePath: \"./host-firmware.profile\" 1", "oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type==\"Valid\")].status}'", "True", "export CLUSTER=<clusterName>", "oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Completed\")]}' | jq", "curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'", "oc get AgentClusterInstall -n <cluster_name>", "oc get managedcluster", "oc get applications.argoproj.io -n openshift-gitops clusters -o yaml", "syncResult: resources: - group: ran.openshift.io kind: SiteConfig message: The Kubernetes API could not find ran.openshift.io/SiteConfig for requested resource spoke-sno/spoke-sno. 
Make sure the \"SiteConfig\" CRD is installed on the destination cluster", "siteConfigError: >- Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-1081291903: stat sno-extra-manifest: no such file or directory", "Status: Sync: Compared To: Destination: Namespace: clusters-sub Server: https://kubernetes.default.svc Source: Path: sites-config Repo URL: https://git.com/ran-sites/siteconfigs/.git Target Revision: master Status: Unknown", "oc patch provisioning provisioning-configuration --type merge -p '{\"spec\":{\"disableVirtualMediaTLS\": true}}'", "kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background", "oc delete policy -n <namespace> <policy_name>", "oc delete -k out/argocd/deployment" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/edge_computing/ztp-deploying-far-edge-sites
Chapter 20. Multiple networks
Chapter 20. Multiple networks 20.1. Understanding multiple networks In Kubernetes, container networking is delegated to networking plugins that implement the Container Network Interface (CNI). OpenShift Container Platform uses the Multus CNI plugin to allow chaining of CNI plugins. During cluster installation, you configure your default pod network. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plugins and attach one or more of these networks to your pods. You can define more than one additional network for your cluster, depending on your needs. This gives you flexibility when you configure pods that deliver network functionality, such as switching or routing. 20.1.1. Usage scenarios for an additional network You can use an additional network in situations where network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for the following performance and security reasons: Performance You can send traffic on two different planes to manage how much traffic is along each plane. Security You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers. All of the pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every pod has an eth0 interface that is attached to the cluster-wide pod network. You can view the interfaces for a pod by using the oc exec -it <pod_name> -- ip a command. If you add additional network interfaces that use Multus CNI, they are named net1 , net2 , ... , netN . To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A CNI configuration inside each of these CRs defines how that interface is created. 20.1.2. Additional networks in OpenShift Container Platform OpenShift Container Platform provides the following CNI plugins for creating additional networks in your cluster: bridge : Configure a bridge-based additional network to allow pods on the same host to communicate with each other and the host. host-device : Configure a host-device additional network to allow pods access to a physical Ethernet network device on the host system. ipvlan : Configure an ipvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts, similar to a macvlan-based additional network. Unlike a macvlan-based additional network, each pod shares the same MAC address as the parent physical network interface. macvlan : Configure a macvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts by using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. SR-IOV : Configure an SR-IOV based additional network to allow pods to attach to a virtual function (VF) interface on SR-IOV capable hardware on the host system. 20.2. Configuring an additional network As a cluster administrator, you can configure an additional network for your cluster. The following network types are supported: Bridge Host device IPVLAN MACVLAN 20.2.1. Approaches to managing an additional network You can manage the life cycle of an additional network by two approaches. 
Each approach is mutually exclusive and you can only use one approach for managing an additional network at a time. For either approach, the additional network is managed by a Container Network Interface (CNI) plugin that you configure. For an additional network, IP addresses are provisioned through an IP Address Management (IPAM) CNI plugin that you configure as part of the additional network. The IPAM plugin supports a variety of IP address assignment approaches including DHCP and static assignment. Modify the Cluster Network Operator (CNO) configuration: The CNO automatically creates and manages the NetworkAttachmentDefinition object. In addition to managing the object lifecycle the CNO ensures a DHCP is available for an additional network that uses a DHCP assigned IP address. Applying a YAML manifest: You can manage the additional network directly by creating an NetworkAttachmentDefinition object. This approach allows for the chaining of CNI plugins. 20.2.2. Configuration for an additional network attachment An additional network is configured via the NetworkAttachmentDefinition API in the k8s.cni.cncf.io API group. Important Do not store any sensitive information or a secret in the NetworkAttachmentDefinition object because this information is accessible by the project administration user. The configuration for the API is described in the following table: Table 20.1. NetworkAttachmentDefinition API fields Field Type Description metadata.name string The name for the additional network. metadata.namespace string The namespace that the object is associated with. spec.config string The CNI plugin configuration in JSON format. 20.2.2.1. Configuration of an additional network through the Cluster Network Operator The configuration for an additional network attachment is specified as part of the Cluster Network Operator (CNO) configuration. The following YAML describes the configuration parameters for managing an additional network with the CNO: Cluster Network Operator configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # ... additionalNetworks: 1 - name: <name> 2 namespace: <namespace> 3 rawCNIConfig: |- 4 { ... } type: Raw 1 An array of one or more additional network configurations. 2 The name for the additional network attachment that you are creating. The name must be unique within the specified namespace . 3 The namespace to create the network attachment in. If you do not specify a value, then the default namespace is used. 4 A CNI plugin configuration in JSON format. 20.2.2.2. Configuration of an additional network from a YAML manifest The configuration for an additional network is specified from a YAML configuration file, such as in the following example: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name> 1 spec: config: |- 2 { ... } 1 The name for the additional network attachment that you are creating. 2 A CNI plugin configuration in JSON format. 20.2.3. Configurations for additional network types The specific configuration fields for additional networks is described in the following sections. 20.2.3.1. Configuration for a bridge additional network The following object describes the configuration parameters for the bridge CNI plugin: Table 20.2. Bridge CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. 
type string The name of the CNI plugin to configure: bridge . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. bridge string Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is cni0 . ipMasq boolean Optional: Set to true to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge's IP address. If the bridge does not have an IP address, this setting has no effect. The default value is false . isGateway boolean Optional: Set to true to assign an IP address to the bridge. The default value is false . isDefaultGateway boolean Optional: Set to true to configure the bridge as the default gateway for the virtual network. The default value is false . If isDefaultGateway is set to true , then isGateway is also set to true automatically. forceAddress boolean Optional: Set to true to allow assignment of a previously assigned IP address to the virtual bridge. When set to false , if an IPv4 address or an IPv6 address from overlapping subsets is assigned to the virtual bridge, an error occurs. The default value is false . hairpinMode boolean Optional: Set to true to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay . The default value is false . promiscMode boolean Optional: Set to true to enable promiscuous mode on the bridge. The default value is false . vlan string Optional: Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned. preserveDefaultVlan string Optional: Indicates whether the default vlan must be preserved on the veth end connected to the bridge. Defaults to true. vlanTrunk list Optional: Assign a VLAN trunk tag. The default value is none . mtu string Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. enabledad boolean Optional: Enables duplicate address detection for the container side veth . The default value is false . macspoofchk boolean Optional: Enables mac spoof check, limiting the traffic originating from the container to the mac address of the interface. The default value is false . Note The VLAN parameter configures the VLAN tag on the host end of the veth and also enables the vlan_filtering feature on the bridge interface. Note To configure uplink for a L2 network you need to allow the vlan on the uplink interface by using the following command: USD bridge vlan add vid VLAN_ID dev DEV 20.2.3.1.1. bridge configuration example The following example configures an additional network named bridge-net : { "cniVersion": "0.3.1", "name": "bridge-net", "type": "bridge", "isGateway": true, "vlan": 2, "ipam": { "type": "dhcp" } } 20.2.3.2. Configuration for a host device additional network Note Specify your network device by setting only one of the following parameters: device , hwaddr , kernelpath , or pciBusID . The following object describes the configuration parameters for the host-device CNI plugin: Table 20.3. Host device CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: host-device . 
device string Optional: The name of the device, such as eth0 . hwaddr string Optional: The device hardware MAC address. kernelpath string Optional: The Linux kernel device path, such as /sys/devices/pci0000:00/0000:00:1f.6 . pciBusID string Optional: The PCI address of the network device, such as 0000:00:1f.6 . 20.2.3.2.1. host-device configuration example The following example configures an additional network named hostdev-net : { "cniVersion": "0.3.1", "name": "hostdev-net", "type": "host-device", "device": "eth1" } 20.2.3.3. Configuration for an IPVLAN additional network The following object describes the configuration parameters for the IPVLAN CNI plugin: Table 20.4. IPVLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: ipvlan . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. This is required unless the plugin is chained. mode string Optional: The operating mode for the virtual network. The value must be l2 , l3 , or l3s . The default value is l2 . master string Optional: The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used. mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. Note The ipvlan object does not allow virtual interfaces to communicate with the master interface. Therefore the container will not be able to reach the host by using the ipvlan interface. Be sure that the container joins a network that provides connectivity to the host, such as a network supporting the Precision Time Protocol ( PTP ). A single master interface cannot simultaneously be configured to use both macvlan and ipvlan . For IP allocation schemes that cannot be interface agnostic, the ipvlan plugin can be chained with an earlier plugin that handles this logic. If the master is omitted, then the result must contain a single interface name for the ipvlan plugin to enslave. If ipam is omitted, then the result is used to configure the ipvlan interface. 20.2.3.3.1. ipvlan configuration example The following example configures an additional network named ipvlan-net : { "cniVersion": "0.3.1", "name": "ipvlan-net", "type": "ipvlan", "master": "eth1", "mode": "l3", "ipam": { "type": "static", "addresses": [ { "address": "192.168.10.10/24" } ] } } 20.2.3.4. Configuration for a MACVLAN additional network The following object describes the configuration parameters for the macvlan CNI plugin: Table 20.5. MACVLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: macvlan . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. mode string Optional: Configures traffic visibility on the virtual network. Must be either bridge , passthru , private , or vepa . If a value is not provided, the default value is bridge . 
master string Optional: The host network interface to associate with the newly created macvlan interface. If a value is not specified, then the default route interface is used. mtu string Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. Note If you specify the master key for the plugin configuration, use a different physical network interface than the one that is associated with your primary network plugin to avoid possible conflicts. 20.2.3.4.1. macvlan configuration example The following example configures an additional network named macvlan-net : { "cniVersion": "0.3.1", "name": "macvlan-net", "type": "macvlan", "master": "eth1", "mode": "bridge", "ipam": { "type": "dhcp" } } 20.2.4. Configuration of IP address assignment for an additional network The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plugin. 20.2.4.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 20.6. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns array Optional: An array of objects specifying the DNS configuration. The addresses array requires objects with the following fields: Table 20.7. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 20.8. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 20.9. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses to send DNS queries to. domain array The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } } 20.2.4.2. Dynamic IP address (DHCP) assignment configuration The following JSON describes the configuration for dynamic IP address assignment with DHCP. Renewal of DHCP leases A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. 
To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... Table 20.10. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 20.2.4.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The following table describes the configuration for dynamic IP address assignment with Whereabouts: Table 20.11. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. Dynamic IP address assignment configuration example that uses Whereabouts { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 20.2.4.4. Creating a Whereabouts reconciler daemon set The Whereabouts reconciler is responsible for managing dynamic IP address assignments for the pods within a cluster using the Whereabouts IP Address Management (IPAM) solution. It ensures that each pod gets a unique IP address from the specified IP address range. It also handles IP address releases when pods are deleted or scaled down. Note You can also use a NetworkAttachmentDefinition custom resource for dynamic IP address assignment. The Whereabouts reconciler daemon set is automatically created when you configure an additional network through the Cluster Network Operator. It is not automatically created when you configure an additional network from a YAML manifest. To trigger the deployment of the Whereabouts reconciler daemonset, you must manually create a whereabouts-shim network attachment by editing the Cluster Network Operator custom resource file. Use the following procedure to deploy the Whereabouts reconciler daemonset. Procedure Edit the Network.operator.openshift.io custom resource (CR) by running the following command: USD oc edit network.operator.openshift.io cluster Modify the additionalNetworks parameter in the CR to add the whereabouts-shim network attachment definition. For example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default rawCNIConfig: |- { "name": "whereabouts-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "whereabouts" } } type: Raw Save the file and exit the text editor. 
Verify that the whereabouts-reconciler daemon set deployed successfully by running the following command: USD oc get all -n openshift-multus | grep whereabouts-reconciler Example output pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s 20.2.5. Creating an additional network attachment with the Cluster Network Operator The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition object automatically. Important Do not edit the NetworkAttachmentDefinition objects that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Optional: Create the namespace for the additional networks: USD oc create namespace <namespace_name> To edit the CNO configuration, enter the following command: USD oc edit networks.operator.openshift.io cluster Modify the CR that you are creating by adding the configuration for the additional network that you are creating, as in the following example CR. apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # ... additionalNetworks: - name: tertiary-net namespace: namespace2 type: Raw rawCNIConfig: |- { "cniVersion": "0.3.1", "name": "tertiary-net", "type": "ipvlan", "master": "eth1", "mode": "l2", "ipam": { "type": "static", "addresses": [ { "address": "192.168.1.23/24" } ] } } Save your changes and quit the text editor to commit your changes. Verification Confirm that the CNO created the NetworkAttachmentDefinition object by running the following command. There might be a delay before the CNO creates the object. USD oc get network-attachment-definitions -n <namespace> where: <namespace> Specifies the namespace for the network attachment that you added to the CNO configuration. Example output NAME AGE test-network-1 14m 20.2.6. Creating an additional network attachment by applying a YAML manifest Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a YAML file with your additional network configuration, such as in the following example: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: -net spec: config: |- { "cniVersion": "0.3.1", "name": "work-network", "type": "host-device", "device": "eth1", "ipam": { "type": "dhcp" } } To create the additional network, enter the following command: USD oc apply -f <file>.yaml where: <file> Specifies the name of the file containing the YAML manifest. 20.3. About virtual routing and forwarding 20.3.1. About virtual routing and forwarding Virtual routing and forwarding (VRF) devices combined with IP rules provide the ability to create virtual routing and forwarding domains. VRF reduces the number of permissions needed by CNF, and provides increased visibility of the network topology of secondary networks. VRF is used to provide multi-tenancy functionality, for example, where each tenant has its own unique routing tables and requires different default gateways. Processes can bind a socket to the VRF device. 
Packets sent through the bound socket use the routing table associated with the VRF device. An important feature of VRF is that it impacts only OSI model layer 3 traffic and above so L2 tools, such as LLDP, are not affected. This allows higher priority IP rules such as policy based routing to take precedence over the VRF device rules directing specific traffic. 20.3.1.1. Benefits of secondary networks for pods for telecommunications operators In telecommunications use cases, each CNF can potentially be connected to multiple different networks sharing the same address space. These secondary networks can potentially conflict with the cluster's main network CIDR. Using the CNI VRF plugin, network functions can be connected to different customers' infrastructure using the same IP address, keeping different customers isolated. IP addresses can overlap with the OpenShift Container Platform IP space. The CNI VRF plugin also reduces the number of permissions needed by CNF and increases the visibility of network topologies of secondary networks. 20.4. Configuring multi-network policy As a cluster administrator, you can configure network policy for additional networks. Note You can specify multi-network policy for only macvlan additional networks. Other types of additional networks, such as ipvlan, are not supported. 20.4.1. Differences between multi-network policy and network policy Although the MultiNetworkPolicy API implements the NetworkPolicy API, there are several important differences: You must use the MultiNetworkPolicy API: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy You must use the multi-networkpolicy resource name when using the CLI to interact with multi-network policies. For example, you can view a multi-network policy object with the oc get multi-networkpolicy <name> command where <name> is the name of a multi-network policy. You must specify an annotation with the name of the network attachment definition that defines the macvlan additional network: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> where: <network_name> Specifies the name of a network attachment definition. 20.4.2. Enabling multi-network policy for the cluster As a cluster administrator, you can enable multi-network policy support on your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure Create the multinetwork-enable-patch.yaml file with the following YAML: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: useMultiNetworkPolicy: true Configure the cluster to enable multi-network policy: USD oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml Example output network.operator.openshift.io/cluster patched 20.4.3. Working with multi-network policy As a cluster administrator, you can create, edit, view, and delete multi-network policies. 20.4.3.1. Prerequisites You have enabled multi-network policy support for your cluster. 20.4.3.2. Creating a multi-network policy using the CLI To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a multi-network policy. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. 
You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the multi-network policy file name. Define a multi-network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: [] where <network_name> Specifies the name of a network attachment definition. Allow ingress from all pods in the same namespace apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: - from: - podSelector: {} where <network_name> Specifies the name of a network attachment definition. To create the multi-network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the multi-network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output multinetworkpolicy.k8s.cni.cncf.io/default-deny created Note If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 20.4.3.3. Editing a multi-network policy You can edit a multi-network policy in a namespace. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. Procedure Optional: To list the multi-network policy objects in a namespace, enter the following command: USD oc get multi-networkpolicy where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Edit the multi-network policy object. If you saved the multi-network policy definition in a file, edit the file and make any necessary changes, and then enter the following command. USD oc apply -n <namespace> -f <policy_file>.yaml where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. <policy_file> Specifies the name of the file containing the network policy. If you need to update the multi-network policy object directly, enter the following command: USD oc edit multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Confirm that the multi-network policy object is updated. USD oc describe multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. 
Note If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 20.4.3.4. Viewing multi-network policies using the CLI You can examine the multi-network policies in a namespace. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. Procedure List multi-network policies in a namespace: To view multi-network policy objects defined in a namespace, enter the following command: USD oc get multi-networkpolicy Optional: To examine a specific multi-network policy, enter the following command: USD oc describe multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy to inspect. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Note If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 20.4.3.5. Deleting a multi-network policy using the CLI You can delete a multi-network policy in a namespace. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. Procedure To delete a multi-network policy object, enter the following command: USD oc delete multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted Note If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 20.4.4. Additional resources About network policy Understanding multiple networks Configuring a macvlan network 20.5. Attaching a pod to an additional network As a cluster user you can attach a pod to an additional network. 20.5.1. Adding a pod to an additional network You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network. When a pod is created additional networks are attached to it. However, if a pod already exists, you cannot attach additional networks to it. The pod must be in the same namespace as the additional network. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure Add an annotation to the Pod object. Only one of the following annotation formats can be used: To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the pod: metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 
1 1 To specify more than one additional network, separate each network with a comma. Do not include whitespace after the comma. If you specify the same additional network multiple times, that pod will have multiple network interfaces attached to that network. To attach an additional network with customizations, add an annotation with the following format: metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "<network>", 1 "namespace": "<namespace>", 2 "default-route": ["<default-route>"] 3 } ] 1 Specify the name of the additional network defined by a NetworkAttachmentDefinition object. 2 Specify the namespace where the NetworkAttachmentDefinition object is defined. 3 Optional: Specify an override for the default route, such as 192.168.17.1 . To create the pod, enter the following command. Replace <name> with the name of the pod. USD oc create -f <name>.yaml Optional: To confirm that the annotation exists in the Pod CR, enter the following command, replacing <name> with the name of the pod. USD oc get pod <name> -o yaml In the following example, the example-pod pod is attached to the net1 additional network: USD oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/networks-status: |- 1 [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.128.2.14" ], "default": true, "dns": {} },{ "name": "macvlan-bridge", "interface": "net1", "ips": [ "20.2.2.100" ], "mac": "22:2f:60:a5:f8:00", "dns": {} }] name: example-pod namespace: default spec: ... status: ... 1 The k8s.v1.cni.cncf.io/networks-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value. 20.5.1.1. Specifying pod-specific addressing and routing options When attaching a pod to an additional network, you may want to specify further properties about that network in a particular pod. This allows you to change some aspects of routing, as well as specify static IP addresses and MAC addresses. To accomplish this, you can use the JSON formatted annotations. Prerequisites The pod must be in the same namespace as the additional network. Install the OpenShift CLI ( oc ). You must log in to the cluster. Procedure To add a pod to an additional network while specifying addressing and/or routing options, complete the following steps: Edit the Pod resource definition. If you are editing an existing Pod resource, run the following command to edit its definition in the default editor. Replace <name> with the name of the Pod resource to edit. USD oc edit pod <name> In the Pod resource definition, add the k8s.v1.cni.cncf.io/networks parameter to the pod metadata mapping. The k8s.v1.cni.cncf.io/networks accepts a JSON string of a list of objects that reference the name of NetworkAttachmentDefinition custom resource (CR) names in addition to specifying additional properties. metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1 1 Replace <network> with a JSON object as shown in the following examples. The single quotes are required. In the following example the annotation specifies which network attachment will have the default route, using the default-route parameter. 
apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "net1" }, { "name": "net2", 1 "default-route": ["192.0.2.1"] 2 }]' spec: containers: - name: example-pod command: ["/bin/bash", "-c", "sleep 2000000000000"] image: centos/tools 1 The name key is the name of the additional network to associate with the pod. 2 The default-route key specifies a value of a gateway for traffic to be routed over if no other routing entry is present in the routing table. If more than one default-route key is specified, this will cause the pod to fail to become active. The default route will cause any traffic that is not specified in other routes to be routed to the gateway. Important Setting the default route to an interface other than the default network interface for OpenShift Container Platform may cause traffic that is anticipated for pod-to-pod traffic to be routed over another interface. To verify the routing properties of a pod, the oc command may be used to execute the ip command within a pod. USD oc exec -it <pod_name> -- ip route Note You may also reference the pod's k8s.v1.cni.cncf.io/networks-status to see which additional network has been assigned the default route, by the presence of the default-route key in the JSON-formatted list of objects. To set a static IP address or MAC address for a pod you can use the JSON formatted annotations. This requires you create networks that specifically allow for this functionality. This can be specified in a rawCNIConfig for the CNO. Edit the CNO CR by running the following command: USD oc edit networks.operator.openshift.io cluster The following YAML describes the configuration parameters for the CNO: Cluster Network Operator YAML configuration name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 ... }' type: Raw 1 Specify a name for the additional network attachment that you are creating. The name must be unique within the specified namespace . 2 Specify the namespace to create the network attachment in. If you do not specify a value, then the default namespace is used. 3 Specify the CNI plugin configuration in JSON format, which is based on the following template. The following object describes the configuration parameters for utilizing static MAC address and IP address using the macvlan CNI plugin: macvlan CNI plugin JSON configuration object using static IP and MAC address { "cniVersion": "0.3.1", "name": "<name>", 1 "plugins": [{ 2 "type": "macvlan", "capabilities": { "ips": true }, 3 "master": "eth0", 4 "mode": "bridge", "ipam": { "type": "static" } }, { "capabilities": { "mac": true }, 5 "type": "tuning" }] } 1 Specifies the name for the additional network attachment to create. The name must be unique within the specified namespace . 2 Specifies an array of CNI plugin configurations. The first object specifies a macvlan plugin configuration and the second object specifies a tuning plugin configuration. 3 Specifies that a request is made to enable the static IP address functionality of the CNI plugin runtime configuration capabilities. 4 Specifies the interface that the macvlan plugin uses. 5 Specifies that a request is made to enable the static MAC address functionality of a CNI plugin. The above network attachment can be referenced in a JSON formatted annotation, along with keys to specify which static IP and MAC address will be assigned to a given pod. 
Edit the pod with: USD oc edit pod <name> macvlan CNI plugin JSON configuration object using static IP and MAC address apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "<name>", 1 "ips": [ "192.0.2.205/24" ], 2 "mac": "CA:FE:C0:FF:EE:00" 3 } ]' 1 Use the <name> as provided when creating the rawCNIConfig above. 2 Provide an IP address including the subnet mask. 3 Provide the MAC address. Note Static IP addresses and MAC addresses do not have to be used at the same time, you may use them individually, or together. To verify the IP address and MAC properties of a pod with additional networks, use the oc command to execute the ip command within a pod. USD oc exec -it <pod_name> -- ip a 20.6. Removing a pod from an additional network As a cluster user you can remove a pod from an additional network. 20.6.1. Removing a pod from an additional network You can remove a pod from an additional network only by deleting the pod. Prerequisites An additional network is attached to the pod. Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure To delete the pod, enter the following command: USD oc delete pod <name> -n <namespace> <name> is the name of the pod. <namespace> is the namespace that contains the pod. 20.7. Editing an additional network As a cluster administrator you can modify the configuration for an existing additional network. 20.7.1. Modifying an additional network attachment definition As a cluster administrator, you can make changes to an existing additional network. Any existing pods attached to the additional network will not be updated. Prerequisites You have configured an additional network for your cluster. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure To edit an additional network for your cluster, complete the following steps: Run the following command to edit the Cluster Network Operator (CNO) CR in your default text editor: USD oc edit networks.operator.openshift.io cluster In the additionalNetworks collection, update the additional network with your changes. Save your changes and quit the text editor to commit your changes. Optional: Confirm that the CNO updated the NetworkAttachmentDefinition object by running the following command. Replace <network-name> with the name of the additional network to display. There might be a delay before the CNO updates the NetworkAttachmentDefinition object to reflect your changes. USD oc get network-attachment-definitions <network-name> -o yaml For example, the following console output displays a NetworkAttachmentDefinition object that is named net1 : USD oc get network-attachment-definitions net1 -o go-template='{{printf "%s\n" .spec.config}}' { "cniVersion": "0.3.1", "type": "macvlan", "master": "ens5", "mode": "bridge", "ipam": {"type":"static","routes":[{"dst":"0.0.0.0/0","gw":"10.128.2.1"}],"addresses":[{"address":"10.128.2.100/23","gateway":"10.128.2.1"}],"dns":{"nameservers":["172.30.0.10"],"domain":"us-west-2.compute.internal","search":["us-west-2.compute.internal"]}} } 20.8. Removing an additional network As a cluster administrator you can remove an additional network attachment. 20.8.1. Removing an additional network attachment definition As a cluster administrator, you can remove an additional network from your OpenShift Container Platform cluster. The additional network is not removed from any pods it is attached to. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure To remove an additional network from your cluster, complete the following steps: Edit the Cluster Network Operator (CNO) in your default text editor by running the following command: USD oc edit networks.operator.openshift.io cluster Modify the CR by removing the configuration from the additionalNetworks collection for the network attachment definition you are removing. apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1 1 If you are removing the configuration mapping for the only additional network attachment definition in the additionalNetworks collection, you must specify an empty collection. Save your changes and quit the text editor to commit your changes. Optional: Confirm that the additional network CR was deleted by running the following command: USD oc get network-attachment-definition --all-namespaces 20.9. Assigning a secondary network to a VRF 20.9.1. Assigning a secondary network to a VRF As a cluster administrator, you can configure an additional network for your VRF domain by using the CNI VRF plugin. The virtual network created by this plugin is associated with a physical interface that you specify. Note Applications that use VRFs need to bind to a specific device. The common usage is to use the SO_BINDTODEVICE option for a socket. SO_BINDTODEVICE binds the socket to a device that is specified in the passed interface name, for example, eth1 . To use SO_BINDTODEVICE , the application must have CAP_NET_RAW capabilities. Using a VRF through the ip vrf exec command is not supported in OpenShift Container Platform pods. To use VRF, bind applications directly to the VRF interface. 20.9.1.1. Creating an additional network attachment with the CNI VRF plugin The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network. To create an additional network attachment with the CNI VRF plugin, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift cluster as a user with cluster-admin privileges. Procedure Create the Network custom resource (CR) for the additional network attachment and insert the rawCNIConfig configuration for the additional network, as in the following example CR. Save the YAML as the file additional-network-attachment.yaml . apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "macvlan-vrf", "plugins": [ 1 { "type": "macvlan", 2 "master": "eth1", "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.23/24" } ] } }, { "type": "vrf", "vrfname": "example-vrf-name", 3 "table": 1001 4 }] }' 1 plugins must be a list. The first item in the list must be the secondary network underpinning the VRF network. The second item in the list is the VRF plugin configuration. 2 type must be set to vrf . 3 vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created. 4 Optional. table is the routing table ID. By default, the tableid parameter is used. If it is not specified, the CNI assigns a free routing table ID to the VRF. 
Note VRF functions correctly only when the resource is of type netdevice . Create the Network resource: USD oc create -f additional-network-attachment.yaml Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-network-1 . USD oc get network-attachment-definitions -n <namespace> Example output NAME AGE additional-network-1 14m Note There might be a delay before the CNO creates the CR. Verifying that the additional VRF network attachment is successful To verify that the VRF CNI is correctly configured and the additional network is attached, do the following: Create a network that uses the VRF CNI. Assign the network to a pod. Verify that the pod network attachment is connected to the VRF additional network. Remote shell into the pod and run the following command: USD ip vrf show Example output Name Table ----------------------- red 10 Confirm that the VRF interface is the master of the secondary interface: USD ip link Example output 5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode
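As an additional, optional check, you can list the routes that were placed in the VRF routing table from inside the pod. This is a sketch only: the VRF name red is taken from the example output above, so substitute the name reported by ip vrf show on your system.

$ ip route show vrf red

Only the routes scoped to that VRF are printed, which confirms that traffic leaving the secondary interface does not use the main routing table.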
[ "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: 1 - name: <name> 2 namespace: <namespace> 3 rawCNIConfig: |- 4 { } type: Raw", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name> 1 spec: config: |- 2 { }", "bridge vlan add vid VLAN_ID dev DEV", "{ \"cniVersion\": \"0.3.1\", \"name\": \"bridge-net\", \"type\": \"bridge\", \"isGateway\": true, \"vlan\": 2, \"ipam\": { \"type\": \"dhcp\" } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"hostdev-net\", \"type\": \"host-device\", \"device\": \"eth1\" }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"ipvlan-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l3\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.10.10/24\" } ] } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-net\", \"type\": \"macvlan\", \"master\": \"eth1\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #", "{ \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }", "oc edit network.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default rawCNIConfig: |- { \"name\": \"whereabouts-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\" } } type: Raw", "oc get all -n openshift-multus | grep whereabouts-reconciler", "pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s", "oc create namespace <namespace_name>", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: - name: tertiary-net namespace: namespace2 type: Raw rawCNIConfig: |- { \"cniVersion\": \"0.3.1\", \"name\": \"tertiary-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l2\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.1.23/24\" } ] } }", "oc get network-attachment-definitions -n <namespace>", "NAME AGE test-network-1 14m", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: next-net spec: config: |- { \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"host-device\", \"device\": \"eth1\", \"ipam\": { \"type\": \"dhcp\" } }", "oc apply -f <file>.yaml", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: annotations: k8s.v1.cni.cncf.io/policy-for: <network_name>", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: useMultiNetworkPolicy: true", "oc patch network.operator.openshift.io cluster --type=merge 
--patch-file=multinetwork-enable-patch.yaml", "network.operator.openshift.io/cluster patched", "touch <policy_name>.yaml", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: []", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: - from: - podSelector: {}", "oc apply -f <policy_name>.yaml -n <namespace>", "multinetworkpolicy.k8s.cni.cncf.io/default-deny created", "oc get multi-networkpolicy", "oc apply -n <namespace> -f <policy_file>.yaml", "oc edit multi-networkpolicy <policy_name> -n <namespace>", "oc describe multi-networkpolicy <policy_name> -n <namespace>", "oc get multi-networkpolicy", "oc describe multi-networkpolicy <policy_name> -n <namespace>", "oc delete multi-networkpolicy <policy_name> -n <namespace>", "multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted", "metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1", "metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]", "oc create -f <name>.yaml", "oc get pod <name> -o yaml", "oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/networks-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:", "oc edit pod <name>", "metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1", "apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"net1\" }, { \"name\": \"net2\", 1 \"default-route\": [\"192.0.2.1\"] 2 }]' spec: containers: - name: example-pod command: [\"/bin/bash\", \"-c\", \"sleep 2000000000000\"] image: centos/tools", "oc exec -it <pod_name> -- ip route", "oc edit networks.operator.openshift.io cluster", "name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 }' type: Raw", "{ \"cniVersion\": \"0.3.1\", \"name\": \"<name>\", 1 \"plugins\": [{ 2 \"type\": \"macvlan\", \"capabilities\": { \"ips\": true }, 3 \"master\": \"eth0\", 4 \"mode\": \"bridge\", \"ipam\": { \"type\": \"static\" } }, { \"capabilities\": { \"mac\": true }, 5 \"type\": \"tuning\" }] }", "oc edit pod <name>", "apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"<name>\", 1 \"ips\": [ \"192.0.2.205/24\" ], 2 \"mac\": \"CA:FE:C0:FF:EE:00\" 3 } ]'", "oc exec -it <pod_name> -- ip a", "oc delete pod <name> -n <namespace>", "oc edit networks.operator.openshift.io cluster", "oc get network-attachment-definitions <network-name> -o yaml", "oc get network-attachment-definitions net1 -o go-template='{{printf \"%s\\n\" .spec.config}}' { \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens5\", \"mode\": \"bridge\", \"ipam\": 
{\"type\":\"static\",\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.128.2.1\"}],\"addresses\":[{\"address\":\"10.128.2.100/23\",\"gateway\":\"10.128.2.1\"}],\"dns\":{\"nameservers\":[\"172.30.0.10\"],\"domain\":\"us-west-2.compute.internal\",\"search\":[\"us-west-2.compute.internal\"]}} }", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1", "oc get network-attachment-definition --all-namespaces", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-vrf\", \"plugins\": [ 1 { \"type\": \"macvlan\", 2 \"master\": \"eth1\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.23/24\" } ] } }, { \"type\": \"vrf\", \"vrfname\": \"example-vrf-name\", 3 \"table\": 1001 4 }] }'", "oc create -f additional-network-attachment.yaml", "oc get network-attachment-definitions -n <namespace>", "NAME AGE additional-network-1 14m", "ip vrf show", "Name Table ----------------------- red 10", "ip link", "5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/multiple-networks
Chapter 59. Replace Field Action
Chapter 59. Replace Field Action Replace field with a different key in the message in transit. The required parameter 'renames' is a comma-separated list of colon-delimited renaming pairs like for example 'foo:bar,abc:xyz' and it represents the field rename mappings. The optional parameter 'enabled' represents the fields to include. If specified, only the named fields will be included in the resulting message. The optional parameter 'disabled' represents the fields to exclude. If specified, the listed fields will be excluded from the resulting message. This takes precedence over the 'enabled' parameter. The default value of 'enabled' parameter is 'all', so all the fields of the payload will be included. The default value of 'disabled' parameter is 'none', so no fields of the payload will be excluded. 59.1. Configuration Options The following table summarizes the configuration options available for the replace-field-action Kamelet: Property Name Description Type Default Example renames * Renames Comma separated list of field with new value to be renamed string "foo:bar,c1:c2" disabled Disabled Comma separated list of fields to be disabled string "none" enabled Enabled Comma separated list of fields to be enabled string "all" Note Fields marked with an asterisk (*) are mandatory. 59.2. Dependencies At runtime, the replace-field-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:core camel:jackson camel:kamelet 59.3. Usage This section describes how you can use the replace-field-action . 59.3.1. Knative Action You can use the replace-field-action Kamelet as an intermediate step in a Knative binding. replace-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: replace-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: replace-field-action properties: renames: "foo:bar,c1:c2" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 59.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 59.3.1.2. Procedure for using the cluster CLI Save the replace-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f replace-field-action-binding.yaml 59.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step replace-field-action -p "step-0.renames=foo:bar,c1:c2" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 59.3.2. Kafka Action You can use the replace-field-action Kamelet as an intermediate step in a Kafka binding. replace-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: replace-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: replace-field-action properties: renames: "foo:bar,c1:c2" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 59.3.2.1. 
Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 59.3.2.2. Procedure for using the cluster CLI Save the replace-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f replace-field-action-binding.yaml 59.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step replace-field-action -p "step-0.renames=foo:bar,c1:c2" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 59.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/replace-field-action.kamelet.yaml
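To make the effect of the renames parameter used in the bindings above concrete, consider a hedged illustration; the payload below is hypothetical and not part of the Kamelet documentation. With renames set to "foo:bar,c1:c2", a message body such as

{"foo": 1, "c1": "x", "other": true}

is emitted by the step as

{"bar": 1, "c2": "x", "other": true}

The optional enabled and disabled parameters would then further restrict which of those fields remain in the resulting message.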
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: replace-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: replace-field-action properties: renames: \"foo:bar,c1:c2\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f replace-field-action-binding.yaml", "kamel bind timer-source?message=Hello --step replace-field-action -p \"step-0.renames=foo:bar,c1:c2\" channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: replace-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: replace-field-action properties: renames: \"foo:bar,c1:c2\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f replace-field-action-binding.yaml", "kamel bind timer-source?message=Hello --step replace-field-action -p \"step-0.renames=foo:bar,c1:c2\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/replace-field-action
20.18. Converting QEMU Arguments to Domain XML
20.18. Converting QEMU Arguments to Domain XML The virsh domxml-from-native command provides a way to convert an existing set of QEMU arguments into a Domain XML configuration file that can then be used by libvirt. Note that this command is intended to be used only to convert existing QEMU guests previously started from the command line, in order to enable them to be managed through libvirt. Therefore, the method described here should not be used to create new guests from scratch. New guests should be created using either virsh, virt-install , or virt-manager . Additional information can be found on the libvirt upstream website . Procedure 20.3. How to convert a QEMU guest to libvirt Start with a QEMU guest with an arguments file (file type *.args ), named demo.args in this example: To convert this file into a domain XML file so that the guest can be managed by libvirt, enter the following command. The first argument, qemu-argv , specifies the native configuration format; replace demo.args with the filename of your QEMU args file. # virsh domxml-from-native qemu-argv demo.args This command turns the demo.args file into the following domain XML file: <domain type='qemu'> <uuid>00000000-0000-0000-0000-000000000000</uuid> <memory>219136</memory> <currentMemory>219136</currentMemory> <vcpu>1</vcpu> <os> <type arch='i686' machine='pc'>hvm</type> <boot dev='hd'/> </os> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <devices> <emulator>/usr/bin/qemu</emulator> <disk type='block' device='disk'> <source dev='/dev/HostVG/QEMUGuest1'/> <target dev='hda' bus='ide'/> </disk> </devices> </domain> Figure 20.1. Guest virtual machine new configuration file
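As a follow-up sketch (not part of the original procedure), you can capture the generated XML in a file and register it with libvirt. This assumes that you add a <name> element, and usually a unique <uuid>, to the generated XML before defining it, because the converted output shown above does not contain a guest name:

# virsh domxml-from-native qemu-argv demo.args > qemu-guest1.xml
# vi qemu-guest1.xml
# virsh define qemu-guest1.xml
# virsh start qemu-guest1

After the define step, the guest appears in virsh list --all and can be managed like any other libvirt domain.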
[ "cat demo.args LC_ALL=C PATH=/bin HOME=/home/test USER=test LOGNAME=test /usr/bin/qemu -S -M pc -m 214 -smp 1 -nographic -monitor pty -no-acpi -boot c -hda /dev/HostVG/QEMUGuest1 -net none -serial none -parallel none -usb", "<domain type='qemu'> <uuid>00000000-0000-0000-0000-000000000000</uuid> <memory>219136</memory> <currentMemory>219136</currentMemory> <vcpu>1</vcpu> <os> <type arch='i686' machine='pc'>hvm</type> <boot dev='hd'/> </os> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <devices> <emulator>/usr/bin/qemu</emulator> <disk type='block' device='disk'> <source dev='/dev/HostVG/QEMUGuest1'/> <target dev='hda' bus='ide'/> </disk> </devices> </domain>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-domain_commands-converting_qemu_arguments_to_domain_xml
Preface
Preface The Red Hat build of Cryostat is a container-native implementation of JDK Flight Recorder (JFR) that you can use to securely monitor the Java Virtual Machine (JVM) performance in workloads that run on an OpenShift Container Platform cluster. You can use Cryostat 3.0 to start, stop, retrieve, archive, import, and export JFR data for JVMs inside your containerized applications by using a web console or an HTTP API. Depending on your use case, you can store and analyze your recordings directly on your Red Hat OpenShift cluster by using the built-in tools that Cryostat provides or you can export recordings to an external monitoring application to perform a more in-depth analysis of your recorded data. Important Red Hat build of Cryostat is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/using_the_cryostat_dashboard/preface-cryostat
C.6. Glock tracepoints
C.6. Glock tracepoints The tracepoints are also designed to confirm the correctness of the cache control when they are combined with the blktrace output and with knowledge of the on-disk layout. It is then possible to check that any given I/O has been issued and completed under the correct lock, and that no races are present. The gfs2_glock_state_change tracepoint is the most important one to understand. It tracks every state change of the glock from initial creation right through to the final demotion which ends with gfs2_glock_put and the final NL to unlocked transition. The l (locked) glock flag is always set before a state change occurs and will not be cleared until after it has finished. There are never any granted holders (the H glock holder flag) during a state change. If there are any queued holders, they will always be in the W (waiting) state. When the state change is complete, the holders may be granted, which is the final operation before the l glock flag is cleared. The gfs2_demote_rq tracepoint keeps track of demote requests, both local and remote. Assuming that there is enough memory on the node, the local demote requests will rarely be seen, and most often they will be created by umount or by occasional memory reclaim. The number of remote demote requests is a measure of the contention between nodes for a particular inode or resource group. When a holder is granted a lock, gfs2_promote is called; this occurs during the final stages of a state change or when a lock is requested that can be granted immediately because the glock state already caches a lock of a suitable mode. If the holder is the first one to be granted for this glock, then the f (first) flag is set on that holder. This is currently used only by resource groups.
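As a minimal sketch of how these tracepoints can be observed (assuming debugfs is mounted at its usual location, /sys/kernel/debug, and that the gfs2 module is loaded), you can enable a single event and watch it through the standard ftrace interface:

# echo 1 > /sys/kernel/debug/tracing/events/gfs2/gfs2_glock_state_change/enable
# cat /sys/kernel/debug/tracing/trace_pipe

Writing 1 to events/gfs2/enable instead turns on the whole gfs2 event group, including gfs2_demote_rq and gfs2_promote, which is usually more convenient when correlating the trace output with blktrace.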
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/ap-glock-tracepoints-gfs2
Chapter 21. System Monitoring Tools
Chapter 21. System Monitoring Tools In order to configure the system, system administrators often need to determine the amount of free memory, how much free disk space is available, how the hard drive is partitioned, or what processes are running. 21.1. Viewing System Processes 21.1.1. Using the ps Command The ps command allows you to display information about running processes. It produces a static list, that is, a snapshot of what is running when you execute the command. If you want a constantly updated list of running processes, use the top command or the System Monitor application instead. To list all processes that are currently running on the system including processes owned by other users, type the following at a shell prompt: For each listed process, the ps ax command displays the process ID ( PID ), the terminal that is associated with it ( TTY ), the current status ( STAT ), the cumulated CPU time ( TIME ), and the name of the executable file ( COMMAND ). For example: To display the owner alongside each process, use the following command: Apart from the information provided by the ps ax command, ps aux displays the effective user name of the process owner ( USER ), the percentage of the CPU ( %CPU ) and memory ( %MEM ) usage, the virtual memory size in kilobytes ( VSZ ), the non-swapped physical memory size in kilobytes ( RSS ), and the time or date the process was started. For example: You can also use the ps command in a combination with grep to see if a particular process is running. For example, to determine if Emacs is running, type: For a complete list of available command line options, see the ps (1) manual page. 21.1.2. Using the top Command The top command displays a real-time list of processes that are running on the system. It also displays additional information about the system uptime, current CPU and memory usage, or total number of running processes, and allows you to perform actions such as sorting the list or killing a process. To run the top command, type the following at a shell prompt: For each listed process, the top command displays the process ID ( PID ), the effective user name of the process owner ( USER ), the priority ( PR ), the nice value ( NI ), the amount of virtual memory the process uses ( VIRT ), the amount of non-swapped physical memory the process uses ( RES ), the amount of shared memory the process uses ( SHR ), the process status field S ), the percentage of the CPU ( %CPU ) and memory ( %MEM ) usage, the cumulated CPU time ( TIME+ ), and the name of the executable file ( COMMAND ). For example: Table 21.1, "Interactive top commands" contains useful interactive commands that you can use with top . For more information, see the top (1) manual page. Table 21.1. Interactive top commands Command Description Enter , Space Immediately refreshes the display. h Displays a help screen for interactive commands. h , ? Displays a help screen for windows and field groups. k Kills a process. You are prompted for the process ID and the signal to send to it. n Changes the number of displayed processes. You are prompted to enter the number. u Sorts the list by user. M Sorts the list by memory usage. P Sorts the list by CPU usage. q Terminates the utility and returns to the shell prompt. 21.1.3. Using the System Monitor Tool The Processes tab of the System Monitor tool allows you to view, search for, change the priority of, and kill processes from the graphical user interface. 
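As a brief recap of the command-line utilities described in the two preceding sections (the emacs pattern is only an illustration, and the output will vary from system to system):

$ ps ax
$ ps aux
$ ps ax | grep emacs
$ top

The first two commands print a one-time snapshot of all running processes, the grep variant filters that snapshot for a particular program, and top keeps the list continuously updated until you quit it by pressing q.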
To start the System Monitor tool from the command line, type gnome-system-monitor at a shell prompt. The System Monitor tool appears. Alternatively, if using the GNOME desktop, press the Super key to enter the Activities Overview, type System Monitor and then press Enter . The System Monitor tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar . Click the Processes tab to view the list of running processes. Figure 21.1. System Monitor - Processes For each listed process, the System Monitor tool displays its name ( Process Name ), current status ( Status ), percentage of the CPU usage ( % CPU ), nice value ( Nice ), process ID ( ID ), memory usage ( Memory ), the channel the process is waiting in ( Waiting Channel ), and additional details about the session ( Session ). To sort the information by a specific column in ascending order, click the name of that column. Click the name of the column again to toggle the sort between ascending and descending order. By default, the System Monitor tool displays a list of processes that are owned by the current user. Selecting various options from the View menu allows you to: view only active processes, view all processes, view your processes, view process dependencies, Additionally, two buttons enable you to: refresh the list of processes, end a process by selecting it from the list and then clicking the End Process button. 21.2. Viewing Memory Usage 21.2.1. Using the free Command The free command allows you to display the amount of free and used memory on the system. To do so, type the following at a shell prompt: The free command provides information about both the physical memory ( Mem ) and swap space ( Swap ). It displays the total amount of memory ( total ), as well as the amount of memory that is in use ( used ), free ( free ), shared ( shared ), sum of buffers and cached ( buff/cache ), and available ( available ). For example: By default, free displays the values in kilobytes. To display the values in megabytes, supply the -m command line option: For instance: For a complete list of available command line options, see the free (1) manual page. 21.2.2. Using the System Monitor Tool The Resources tab of the System Monitor tool allows you to view the amount of free and used memory on the system. To start the System Monitor tool from the command line, type gnome-system-monitor at a shell prompt. The System Monitor tool appears. Alternatively, if using the GNOME desktop, press the Super key to enter the Activities Overview, type System Monitor and then press Enter . The System Monitor tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar . Click the Resources tab to view the system's memory usage. Figure 21.2. System Monitor - Resources In the Memory and Swap History section, the System Monitor tool displays a graphical representation of the memory and swap usage history, as well as the total amount of the physical memory ( Memory ) and swap space ( Swap ) and how much of it is in use. 21.3. Viewing CPU Usage 21.3.1. Using the System Monitor Tool The Resources tab of the System Monitor tool allows you to view the current CPU usage on the system. To start the System Monitor tool from the command line, type gnome-system-monitor at a shell prompt. 
The System Monitor tool appears. Alternatively, if using the GNOME desktop, press the Super key to enter the Activities Overview, type System Monitor and then press Enter . The System Monitor tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar . Click the Resources tab to view the system's CPU usage. In the CPU History section, the System Monitor tool displays a graphical representation of the CPU usage history and shows the percentage of how much CPU is currently in use. 21.4. Viewing Block Devices and File Systems 21.4.1. Using the lsblk Command The lsblk command allows you to display a list of available block devices. It provides more information and better control on output formatting than the blkid command. It reads information from udev , therefore it is usable by non- root users. To display a list of block devices, type the following at a shell prompt: For each listed block device, the lsblk command displays the device name ( NAME ), major and minor device number ( MAJ:MIN ), if the device is removable ( RM ), its size ( SIZE ), if the device is read-only ( RO ), what type it is ( TYPE ), and where the device is mounted ( MOUNTPOINT ). For example: By default, lsblk lists block devices in a tree-like format. To display the information as an ordinary list, add the -l command line option: For instance: For a complete list of available command line options, see the lsblk (8) manual page. 21.4.2. Using the blkid Command The blkid command allows you to display low-level information about available block devices. It requires root privileges, therefore non- root users should use the lsblk command. To do so, type the following at a shell prompt as root : For each listed block device, the blkid command displays available attributes such as its universally unique identifier ( UUID ), file system type ( TYPE ), or volume label ( LABEL ). For example: By default, the blkid command lists all available block devices. To display information about a particular device only, specify the device name on the command line: For instance, to display information about /dev/vda1 , type as root : You can also use the above command with the -p and -o udev command line options to obtain more detailed information. Note that root privileges are required to run this command: For example: For a complete list of available command line options, see the blkid (8) manual page. 21.4.3. Using the findmnt Command The findmnt command allows you to display a list of currently mounted file systems. To do so, type the following at a shell prompt: For each listed file system, the findmnt command displays the target mount point ( TARGET ), source device ( SOURCE ), file system type ( FSTYPE ), and relevant mount options ( OPTIONS ). For example: By default, findmnt lists file systems in a tree-like format. To display the information as an ordinary list, add the -l command line option: For instance: You can also choose to list only file systems of a particular type. To do so, add the -t command line option followed by a file system type: For example, to all list xfs file systems, type: For a complete list of available command line options, see the findmnt (8) manual page. 21.4.4. Using the df Command The df command allows you to display a detailed report on the system's disk space usage. 
To do so, type the following at a shell prompt: For each listed file system, the df command displays its name ( Filesystem ), size ( 1K-blocks or Size ), how much space is used ( Used ), how much space is still available ( Available ), the percentage of space usage ( Use% ), and where is the file system mounted ( Mounted on ). For example: By default, the df command shows the partition size in 1 kilobyte blocks and the amount of used and available disk space in kilobytes. To view the information in megabytes and gigabytes, supply the -h command line option, which causes df to display the values in a human-readable format: For instance: For a complete list of available command line options, see the df (1) manual page. 21.4.5. Using the du Command The du command allows you to displays the amount of space that is being used by files in a directory. To display the disk usage for each of the subdirectories in the current working directory, run the command with no additional command line options: For example: By default, the du command displays the disk usage in kilobytes. To view the information in megabytes and gigabytes, supply the -h command line option, which causes the utility to display the values in a human-readable format: For instance: At the end of the list, the du command always shows the grand total for the current directory. To display only this information, supply the -s command line option: For example: For a complete list of available command line options, see the du (1) manual page. 21.4.6. Using the System Monitor Tool The File Systems tab of the System Monitor tool allows you to view file systems and disk space usage in the graphical user interface. To start the System Monitor tool from the command line, type gnome-system-monitor at a shell prompt. The System Monitor tool appears. Alternatively, if using the GNOME desktop, press the Super key to enter the Activities Overview, type System Monitor and then press Enter . The System Monitor tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar . Click the File Systems tab to view a list of file systems. Figure 21.3. System Monitor - File Systems For each listed file system, the System Monitor tool displays the source device ( Device ), target mount point ( Directory ), and file system type ( Type ), as well as its size ( Total ), and how much space is available ( Available ), and used ( Used ). 21.5. Viewing Hardware Information 21.5.1. Using the lspci Command The lspci command allows you to display information about PCI buses and devices that are attached to them. To list all PCI devices that are in the system, type the following at a shell prompt: This displays a simple list of devices, for example: You can also use the -v command line option to display more verbose output, or -vv for very verbose output: For instance, to determine the manufacturer, model, and memory size of a system's video card, type: For a complete list of available command line options, see the lspci (8) manual page. 21.5.2. Using the lsusb Command The lsusb command allows you to display information about USB buses and devices that are attached to them. 
To list all USB devices that are in the system, type the following at a shell prompt: This displays a simple list of devices, for example: You can also use the -v command line option to display more verbose output: For instance: For a complete list of available command line options, see the lsusb (8) manual page. 21.5.3. Using the lscpu Command The lscpu command allows you to list information about CPUs that are present in the system, including the number of CPUs, their architecture, vendor, family, model, CPU caches, etc. To do so, type the following at a shell prompt: For example: For a complete list of available command line options, see the lscpu (1) manual page. 21.6. Checking for Hardware Errors Red Hat Enterprise Linux 7 introduced the new hardware event report mechanism ( HERM .) This mechanism gathers system-reported memory errors as well as errors reported by the error detection and correction ( EDAC ) mechanism for dual in-line memory modules ( DIMM s) and reports them to user space. The user-space daemon rasdaemon , catches and handles all reliability, availability, and serviceability ( RAS ) error events that come from the kernel tracing mechanism, and logs them. The functions previously provided by edac-utils are now replaced by rasdaemon . To install rasdaemon , enter the following command as root : Start the service as follows: To make the service run at system start, enter the following command: The ras-mc-ctl utility provides a means to work with EDAC drivers. Enter the following command to see a list of command options: To view a summary of memory controller events, run as root : To view a list of errors reported by the memory controller, run as root : These commands are also described in the ras-mc-ctl(8) manual page. 21.7. Monitoring Performance with Net-SNMP Red Hat Enterprise Linux 7 includes the Net-SNMP software suite, which includes a flexible and extensible simple network management protocol ( SNMP ) agent. This agent and its associated utilities can be used to provide performance data from a large number of systems to a variety of tools which support polling over the SNMP protocol. This section provides information on configuring the Net-SNMP agent to securely provide performance data over the network, retrieving the data using the SNMP protocol, and extending the SNMP agent to provide custom performance metrics. 21.7.1. Installing Net-SNMP The Net-SNMP software suite is available as a set of RPM packages in the Red Hat Enterprise Linux software distribution. Table 21.2, "Available Net-SNMP packages" summarizes each of the packages and their contents. Table 21.2. Available Net-SNMP packages Package Provides net-snmp The SNMP Agent Daemon and documentation. This package is required for exporting performance data. net-snmp-libs The netsnmp library and the bundled management information bases ( MIB s). This package is required for exporting performance data. net-snmp-utils SNMP clients such as snmpget and snmpwalk . This package is required in order to query a system's performance data over SNMP. net-snmp-perl The mib2c utility and the NetSNMP Perl module. Note that this package is provided by the Optional channel. See Section 9.5.7, "Adding the Optional and Supplementary Repositories" for more information on Red Hat additional channels. net-snmp-python An SNMP client library for Python. Note that this package is provided by the Optional channel. 
See Section 9.5.7, "Adding the Optional and Supplementary Repositories" for more information on Red Hat additional channels. To install any of these packages, use the yum command in the following form: For example, to install the SNMP Agent Daemon and SNMP clients used in the rest of this section, type the following at a shell prompt as root : For more information on how to install new packages in Red Hat Enterprise Linux, see Section 9.2.4, "Installing Packages" . 21.7.2. Running the Net-SNMP Daemon The net-snmp package contains snmpd , the SNMP Agent Daemon. This section provides information on how to start, stop, and restart the snmpd service. For more information on managing system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd . 21.7.2.1. Starting the Service To run the snmpd service in the current session, type the following at a shell prompt as root : To configure the service to be automatically started at boot time, use the following command: 21.7.2.2. Stopping the Service To stop the running snmpd service, type the following at a shell prompt as root : To disable starting the service at boot time, use the following command: 21.7.2.3. Restarting the Service To restart the running snmpd service, type the following at a shell prompt: This command stops the service and starts it again in quick succession. To only reload the configuration without stopping the service, run the following command instead: This causes the running snmpd service to reload its configuration. 21.7.3. Configuring Net-SNMP To change the Net-SNMP Agent Daemon configuration, edit the /etc/snmp/snmpd.conf configuration file. The default snmpd.conf file included with Red Hat Enterprise Linux 7 is heavily commented and serves as a good starting point for agent configuration. This section focuses on two common tasks: setting system information and configuring authentication. For more information about available configuration directives, see the snmpd.conf (5) manual page. Additionally, there is a utility in the net-snmp package named snmpconf which can be used to interactively generate a valid agent configuration. Note that the net-snmp-utils package must be installed in order to use the snmpwalk utility described in this section. Note For any changes to the configuration file to take effect, force the snmpd service to re-read the configuration by running the following command as root : 21.7.3.1. Setting System Information Net-SNMP provides some rudimentary system information via the system tree. For example, the following snmpwalk command shows the system tree with a default agent configuration. By default, the sysName object is set to the host name. The sysLocation and sysContact objects can be configured in the /etc/snmp/snmpd.conf file by changing the value of the syslocation and syscontact directives, for example: After making changes to the configuration file, reload the configuration and test it by running the snmpwalk command again: 21.7.3.2. Configuring Authentication The Net-SNMP Agent Daemon supports all three versions of the SNMP protocol. The first two versions (1 and 2c) provide for simple authentication using a community string . This string is a shared secret between the agent and any client utilities. The string is passed in clear text over the network however and is not considered secure. Version 3 of the SNMP protocol supports user authentication and message encryption using a variety of protocols. 
The Net-SNMP agent also supports tunneling over SSH, and TLS authentication with X.509 certificates. Configuring SNMP Version 2c Community To configure an SNMP version 2c community , use either the rocommunity or rwcommunity directive in the /etc/snmp/snmpd.conf configuration file. The format of the directives is as follows: ... where community is the community string to use, source is an IP address or subnet, and OID is the SNMP tree to provide access to. For example, the following directive provides read-only access to the system tree to a client using the community string "redhat" on the local machine: To test the configuration, use the snmpwalk command with the -v and -c options. Configuring SNMP Version 3 User To configure an SNMP version 3 user , use the net-snmp-create-v3-user command. This command adds entries to the /var/lib/net-snmp/snmpd.conf and /etc/snmp/snmpd.conf files which create the user and grant access to the user. Note that the net-snmp-create-v3-user command may only be run when the agent is not running. The following example creates the "admin" user with the password "redhatsnmp": The rwuser directive (or rouser when the -ro command line option is supplied) that net-snmp-create-v3-user adds to /etc/snmp/snmpd.conf has a similar format to the rwcommunity and rocommunity directives: ... where user is a user name and OID is the SNMP tree to provide access to. By default, the Net-SNMP Agent Daemon allows only authenticated requests (the auth option). The noauth option allows you to permit unauthenticated requests, and the priv option enforces the use of encryption. The authpriv option specifies that requests must be authenticated and replies should be encrypted. For example, the following line grants the user "admin" read-write access to the entire tree: To test the configuration, create a .snmp/ directory in your user's home directory and a configuration file named snmp.conf in that directory ( ~/.snmp/snmp.conf ) with the following lines: The snmpwalk command will now use these authentication settings when querying the agent: 21.7.4. Retrieving Performance Data over SNMP The Net-SNMP Agent in Red Hat Enterprise Linux provides a wide variety of performance information over the SNMP protocol. In addition, the agent can be queried for a listing of the installed RPM packages on the system, a listing of currently running processes on the system, or the network configuration of the system. This section provides an overview of OIDs related to performance tuning available over SNMP. It assumes that the net-snmp-utils package is installed and that the user is granted access to the SNMP tree as described in Section 21.7.3.2, "Configuring Authentication" . 21.7.4.1. Hardware Configuration The Host Resources MIB included with Net-SNMP presents information about the current hardware and software configuration of a host to a client utility. Table 21.3, "Available OIDs" summarizes the different OIDs available under that MIB. Table 21.3. Available OIDs OID Description HOST-RESOURCES-MIB::hrSystem Contains general system information such as uptime, number of users, and number of running processes. HOST-RESOURCES-MIB::hrStorage Contains data on memory and file system usage. HOST-RESOURCES-MIB::hrDevices Contains a listing of all processors, network devices, and file systems. HOST-RESOURCES-MIB::hrSWRun Contains a listing of all running processes. HOST-RESOURCES-MIB::hrSWRunPerf Contains memory and CPU statistics on the process table from HOST-RESOURCES-MIB::hrSWRun. 
HOST-RESOURCES-MIB::hrSWInstalled Contains a listing of the RPM database. There are also a number of SNMP tables available in the Host Resources MIB which can be used to retrieve a summary of the available information. The following example displays HOST-RESOURCES-MIB::hrFSTable : For more information about HOST-RESOURCES-MIB , see the /usr/share/snmp/mibs/HOST-RESOURCES-MIB.txt file. 21.7.4.2. CPU and Memory Information Most system performance data is available in the UCD SNMP MIB . The systemStats OID provides a number of counters around processor usage: In particular, the ssCpuRawUser , ssCpuRawSystem , ssCpuRawWait , and ssCpuRawIdle OIDs provide counters which are helpful when determining whether a system is spending most of its processor time in kernel space, user space, or I/O. ssRawSwapIn and ssRawSwapOut can be helpful when determining whether a system is suffering from memory exhaustion. More memory information is available under the UCD-SNMP-MIB::memory OID, which provides similar data to the free command: Load averages are also available in the UCD SNMP MIB . The SNMP table UCD-SNMP-MIB::laTable has a listing of the 1, 5, and 15 minute load averages: 21.7.4.3. File System and Disk Information The Host Resources MIB provides information on file system size and usage. Each file system (and also each memory pool) has an entry in the HOST-RESOURCES-MIB::hrStorageTable table: The OIDs under HOST-RESOURCES-MIB::hrStorageSize and HOST-RESOURCES-MIB::hrStorageUsed can be used to calculate the remaining capacity of each mounted file system. I/O data is available both in UCD-SNMP-MIB::systemStats ( ssIORawSent.0 and ssIORawRecieved.0 ) and in UCD-DISKIO-MIB::diskIOTable . The latter provides much more granular data. Under this table are OIDs for diskIONReadX and diskIONWrittenX , which provide counters for the number of bytes read from and written to the block device in question since the system boot: 21.7.4.4. Network Information The Interfaces MIB provides information on network devices. IF-MIB::ifTable provides an SNMP table with an entry for each interface on the system, the configuration of the interface, and various packet counters for the interface. The following example shows the first few columns of ifTable on a system with two physical network interfaces: Network traffic is available under the OIDs IF-MIB::ifOutOctets and IF-MIB::ifInOctets . The following SNMP queries will retrieve network traffic for each of the interfaces on this system: 21.7.5. Extending Net-SNMP The Net-SNMP Agent can be extended to provide application metrics in addition to raw system metrics. This allows for capacity planning as well as performance issue troubleshooting. For example, it may be helpful to know that an email system had a 5-minute load average of 15 while being tested, but it is more helpful to know that the email system has a load average of 15 while processing 80,000 messages a second. When application metrics are available via the same interface as the system metrics, this also allows for the visualization of the impact of different load scenarios on system performance (for example, each additional 10,000 messages increases the load average linearly until 100,000). A number of the applications included in Red Hat Enterprise Linux extend the Net-SNMP Agent to provide application metrics over SNMP. There are several ways to extend the agent for custom applications as well. This section describes extending the agent with shell scripts and the Perl plug-ins from the Optional channel. 
It assumes that the net-snmp-utils and net-snmp-perl packages are installed, and that the user is granted access to the SNMP tree as described in Section 21.7.3.2, "Configuring Authentication" . 21.7.5.1. Extending Net-SNMP with Shell Scripts The Net-SNMP Agent provides an extension MIB ( NET-SNMP-EXTEND-MIB ) that can be used to query arbitrary shell scripts. To specify the shell script to run, use the extend directive in the /etc/snmp/snmpd.conf file. Once defined, the Agent will provide the exit code and any output of the command over SNMP. The example below demonstrates this mechanism with a script which determines the number of httpd processes in the process table. Note The Net-SNMP Agent also provides a built-in mechanism for checking the process table via the proc directive. See the snmpd.conf (5) manual page for more information. The exit code of the following shell script is the number of httpd processes running on the system at a given point in time: To make this script available over SNMP, copy the script to a location on the system path, set the executable bit, and add an extend directive to the /etc/snmp/snmpd.conf file. The format of the extend directive is the following: ... where name is an identifying string for the extension, prog is the program to run, and args are the arguments to give the program. For instance, if the above shell script is copied to /usr/local/bin/check_apache.sh , the following directive will add the script to the SNMP tree: The script can then be queried at NET-SNMP-EXTEND-MIB::nsExtendObjects : Note that the exit code ("8" in this example) is provided as an INTEGER type and any output is provided as a STRING type. To expose multiple metrics as integers, supply different arguments to the script using the extend directive. For example, the following shell script can be used to determine the number of processes matching an arbitrary string, and will also output a text string giving the number of processes: The following /etc/snmp/snmpd.conf directives will give both the number of httpd PIDs as well as the number of snmpd PIDs when the above script is copied to /usr/local/bin/check_proc.sh : The following example shows the output of an snmpwalk of the nsExtendObjects OID: Warning Integer exit codes are limited to a range of 0-255. For values that are likely to exceed 256, either use the standard output of the script (which will be typed as a string) or a different method of extending the agent. This last example shows a query for the free memory of the system and the number of httpd processes. This query could be used during a performance test to determine the impact of the number of processes on memory pressure: 21.7.5.2. Extending Net-SNMP with Perl Executing shell scripts using the extend directive is a fairly limited method for exposing custom application metrics over SNMP. The Net-SNMP Agent also provides an embedded Perl interface for exposing custom objects. The net-snmp-perl package in the Optional channel provides the NetSNMP::agent Perl module that is used to write embedded Perl plug-ins on Red Hat Enterprise Linux. Note Before subscribing to the Optional and Supplementary channels see the Scope of Coverage Details . If you decide to install packages from these channels, follow the steps documented in the article called How to access Optional and Supplementary channels, and -devel packages using Red Hat Subscription Manager (RHSM)? on the Red Hat Customer Portal. 
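The shell-script approach can be pulled together into one self-contained sketch before the Perl example is developed in detail. This is a hedged illustration only: the script name, install path, and extend directive reuse the earlier check_apache.sh example, and the command substitution around pgrep is what carries the process count into the exit code.
#!/bin/sh
# check_apache.sh: report the number of running httpd processes as the exit code,
# which the agent exposes as NET-SNMP-EXTEND-MIB::nsExtendResult.
NUMPIDS=$(pgrep httpd | wc -l)
exit $NUMPIDS

# Register the script with the agent and query it (run as root):
install -m 0755 check_apache.sh /usr/local/bin/check_apache.sh
echo 'extend httpd_pids /bin/sh /usr/local/bin/check_apache.sh' >> /etc/snmp/snmpd.conf
systemctl reload snmpd.service
snmpwalk localhost NET-SNMP-EXTEND-MIB::nsExtendObjects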
The NetSNMP::agent Perl module provides an agent object which is used to handle requests for a part of the agent's OID tree. The agent object's constructor has options for running the agent as a sub-agent of snmpd or a standalone agent. No arguments are necessary to create an embedded agent: The agent object has a register method which is used to register a callback function with a particular OID. The register function takes a name, OID, and pointer to the callback function. The following example will register a callback function named hello_handler with the SNMP Agent which will handle requests under the OID .1.3.6.1.4.1.8072.9999.9999 : Note The OID .1.3.6.1.4.1.8072.9999.9999 ( NET-SNMP-MIB::netSnmpPlaypen ) is typically used for demonstration purposes only. If your organization does not already have a root OID, you can obtain one by contacting an ISO Name Registration Authority (ANSI in the United States). The handler function will be called with four parameters, HANDLER , REGISTRATION_INFO , REQUEST_INFO , and REQUESTS . The REQUESTS parameter contains a list of requests in the current call and should be iterated over and populated with data. The request objects in the list have get and set methods which allow for manipulating the OID and value of the request. For example, the following call will set the value of a request object to the string "hello world": The handler function should respond to two types of SNMP requests: the GET request and the GETNEXT request. The type of request is determined by calling the getMode method on the request_info object passed as the third parameter to the handler function. If the request is a GET request, the caller expects the handler to set the value of the request object, depending on the OID of the request. If the request is a GETNEXT request, the caller also expects the handler to set the OID of the request to the next available OID in the tree. This is illustrated in the following code example: When getMode returns MODE_GET , the handler analyzes the value of the getOID call on the request object. The value of the request is set to either string_value if the OID ends in ".1.0", or set to integer_value if the OID ends in ".1.1". If the getMode returns MODE_GETNEXT , the handler determines whether the OID of the request is ".1.0", and then sets the OID and value for ".1.1". If the request is higher on the tree than ".1.0", the OID and value for ".1.0" are set. This in effect returns the next value in the tree so that a program like snmpwalk can traverse the tree without prior knowledge of the structure. The type of the variable is set using constants from NetSNMP::ASN . See the perldoc for NetSNMP::ASN for a full list of available constants. The entire code listing for this example Perl plug-in is as follows: To test the plug-in, copy the above program to /usr/share/snmp/hello_world.pl and add the following line to the /etc/snmp/snmpd.conf configuration file: The SNMP Agent Daemon will need to be restarted to load the new Perl plug-in. Once it has been restarted, an snmpwalk should return the new data: The snmpget command should also be used to exercise the other mode of the handler: 21.8. Additional Resources To learn more about gathering system information, see the following resources. 21.8.1. Installed Documentation lscpu (1) - The manual page for the lscpu command. lsusb (8) - The manual page for the lsusb command. findmnt (8) - The manual page for the findmnt command. blkid (8) - The manual page for the blkid command.
lsblk (8) - The manual page for the lsblk command. ps (1) - The manual page for the ps command. top (1) - The manual page for the top command. free (1) - The manual page for the free command. df (1) - The manual page for the df command. du (1) - The manual page for the du command. lspci (8) - The manual page for the lspci command. snmpd (8) - The manual page for the snmpd service. snmpd.conf (5) - The manual page for the /etc/snmp/snmpd.conf file containing full documentation of available configuration directives.
[ "ps ax", "~]USD ps ax PID TTY STAT TIME COMMAND 1 ? Ss 0:01 /usr/lib/systemd/systemd --switched-root --system --deserialize 23 2 ? S 0:00 [kthreadd] 3 ? S 0:00 [ksoftirqd/0] 5 ? S> 0:00 [kworker/0:0H] [output truncated]", "ps aux", "~]USD ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.3 0.3 134776 6840 ? Ss 09:28 0:01 /usr/lib/systemd/systemd --switched-root --system --d root 2 0.0 0.0 0 0 ? S 09:28 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? S 09:28 0:00 [ksoftirqd/0] root 5 0.0 0.0 0 0 ? S> 09:28 0:00 [kworker/0:0H] [output truncated]", "~]USD ps ax | grep emacs 12056 pts/3 S+ 0:00 emacs 12060 pts/2 S+ 0:00 grep --color=auto emacs", "top", "~]USD top top - 16:42:12 up 13 min, 2 users, load average: 0.67, 0.31, 0.19 Tasks: 165 total, 2 running, 163 sleeping, 0 stopped, 0 zombie %Cpu(s): 37.5 us, 3.0 sy, 0.0 ni, 59.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st KiB Mem : 1016800 total, 77368 free, 728936 used, 210496 buff/cache KiB Swap: 839676 total, 776796 free, 62880 used. 122628 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3168 sjw 20 0 1454628 143240 15016 S 20.3 14.1 0:22.53 gnome-shell 4006 sjw 20 0 1367832 298876 27856 S 13.0 29.4 0:15.58 firefox 1683 root 20 0 242204 50464 4268 S 6.0 5.0 0:07.76 Xorg 4125 sjw 20 0 555148 19820 12644 S 1.3 1.9 0:00.48 gnome-terminal- 10 root 20 0 0 0 0 S 0.3 0.0 0:00.39 rcu_sched 3091 sjw 20 0 37000 1468 904 S 0.3 0.1 0:00.31 dbus-daemon 3096 sjw 20 0 129688 2164 1492 S 0.3 0.2 0:00.14 at-spi2-registr 3925 root 20 0 0 0 0 S 0.3 0.0 0:00.05 kworker/0:0 1 root 20 0 126568 3884 1052 S 0.0 0.4 0:01.61 systemd 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd 3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 6 root 20 0 0 0 0 S 0.0 0.0 0:00.07 kworker/u2:0 [output truncated]", "free", "~]USD free total used free shared buff/cache available Mem: 1016800 727300 84684 3500 204816 124068 Swap: 839676 66920 772756", "free -m", "~]USD free -m total used free shared buff/cache available Mem: 992 711 81 3 200 120 Swap: 819 65 754", "lsblk", "~]USD lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sr0 11:0 1 1024M 0 rom vda 252:0 0 20G 0 rom |-vda1 252:1 0 500M 0 part /boot `-vda2 252:2 0 19.5G 0 part |-vg_kvm-lv_root (dm-0) 253:0 0 18G 0 lvm / `-vg_kvm-lv_swap (dm-1) 253:1 0 1.5G 0 lvm [SWAP]", "lsblk -l", "~]USD lsblk -l NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sr0 11:0 1 1024M 0 rom vda 252:0 0 20G 0 rom vda1 252:1 0 500M 0 part /boot vda2 252:2 0 19.5G 0 part vg_kvm-lv_root (dm-0) 253:0 0 18G 0 lvm / vg_kvm-lv_swap (dm-1) 253:1 0 1.5G 0 lvm [SWAP]", "blkid", "~]# blkid /dev/vda1: UUID=\"7fa9c421-0054-4555-b0ca-b470a97a3d84\" TYPE=\"ext4\" /dev/vda2: UUID=\"7IvYzk-TnnK-oPjf-ipdD-cofz-DXaJ-gPdgBW\" TYPE=\"LVM2_member\" /dev/mapper/vg_kvm-lv_root: UUID=\"a07b967c-71a0-4925-ab02-aebcad2ae824\" TYPE=\"ext4\" /dev/mapper/vg_kvm-lv_swap: UUID=\"d7ef54ca-9c41-4de4-ac1b-4193b0c1ddb6\" TYPE=\"swap\"", "blkid device_name", "~]# blkid /dev/vda1 /dev/vda1: UUID=\"7fa9c421-0054-4555-b0ca-b470a97a3d84\" TYPE=\"ext4\"", "blkid -po udev device_name", "~]# blkid -po udev /dev/vda1 ID_FS_UUID=7fa9c421-0054-4555-b0ca-b470a97a3d84 ID_FS_UUID_ENC=7fa9c421-0054-4555-b0ca-b470a97a3d84 ID_FS_VERSION=1.0 ID_FS_TYPE=ext4 ID_FS_USAGE=filesystem", "findmnt", "~]USD findmnt TARGET SOURCE FSTYPE OPTIONS / /dev/mapper/rhel-root xfs rw,relatime,seclabel,attr2,inode64,noquota ├─/proc proc proc rw,nosuid,nodev,noexec,relatime │ ├─/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=32,pgrp=1,timeout=300,minproto=5,maxproto=5,direct │ └─/proc/fs/nfsd sunrpc nfsd rw,relatime 
├─/sys sysfs sysfs rw,nosuid,nodev,noexec,relatime,seclabel │ ├─/sys/kernel/security securityfs securityfs rw,nosuid,nodev,noexec,relatime │ ├─/sys/fs/cgroup tmpfs tmpfs rw,nosuid,nodev,noexec,seclabel,mode=755 [output truncated]", "findmnt -l", "~]USD findmnt -l TARGET SOURCE FSTYPE OPTIONS /proc proc proc rw,nosuid,nodev,noexec,relatime /sys sysfs sysfs rw,nosuid,nodev,noexec,relatime,seclabel /dev devtmpfs devtmpfs rw,nosuid,seclabel,size=933372k,nr_inodes=233343,mode=755 /sys/kernel/security securityfs securityfs rw,nosuid,nodev,noexec,relatime /dev/shm tmpfs tmpfs rw,nosuid,nodev,seclabel /dev/pts devpts devpts rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000 /run tmpfs tmpfs rw,nosuid,nodev,seclabel,mode=755 /sys/fs/cgroup tmpfs tmpfs rw,nosuid,nodev,noexec,seclabel,mode=755 [output truncated]", "findmnt -t type", "~]USD findmnt -t xfs TARGET SOURCE FSTYPE OPTIONS / /dev/mapper/rhel-root xfs rw,relatime,seclabel,attr2,inode64,noquota └─/boot /dev/vda1 xfs rw,relatime,seclabel,attr2,inode64,noquota", "df", "~]USD df Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/vg_kvm-lv_root 18618236 4357360 13315112 25% / tmpfs 380376 288 380088 1% /dev/shm /dev/vda1 495844 77029 393215 17% /boot", "df -h", "~]USD df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_kvm-lv_root 18G 4.2G 13G 25% / tmpfs 372M 288K 372M 1% /dev/shm /dev/vda1 485M 76M 384M 17% /boot", "du", "~]USD du 14972 ./Downloads 4 ./.mozilla/extensions 4 ./.mozilla/plugins 12 ./.mozilla 15004 .", "du -h", "~]USD du -h 15M ./Downloads 4.0K ./.mozilla/extensions 4.0K ./.mozilla/plugins 12K ./.mozilla 15M .", "du -sh", "~]USD du -sh 15M .", "lspci", "~]USD lspci 00:00.0 Host bridge: Intel Corporation 82X38/X48 Express DRAM Controller 00:01.0 PCI bridge: Intel Corporation 82X38/X48 Express Host-Primary PCI Express Bridge 00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02) 00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02) 00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02) [output truncated]", "lspci -v | -vv", "~]USD lspci -v [output truncated] 01:00.0 VGA compatible controller: nVidia Corporation G84 [Quadro FX 370] (rev a1) (prog-if 00 [VGA controller]) Subsystem: nVidia Corporation Device 0491 Physical Slot: 2 Flags: bus master, fast devsel, latency 0, IRQ 16 Memory at f2000000 (32-bit, non-prefetchable) [size=16M] Memory at e0000000 (64-bit, prefetchable) [size=256M] Memory at f0000000 (64-bit, non-prefetchable) [size=32M] I/O ports at 1100 [size=128] Expansion ROM at <unassigned> [disabled] Capabilities: <access denied> Kernel driver in use: nouveau Kernel modules: nouveau, nvidiafb [output truncated]", "lsusb", "~]USD lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub [output truncated] Bus 001 Device 002: ID 0bda:0151 Realtek Semiconductor Corp. 
Mass Storage Device (Multicard Reader) Bus 008 Device 002: ID 03f0:2c24 Hewlett-Packard Logitech M-UAL-96 Mouse Bus 008 Device 003: ID 04b3:3025 IBM Corp.", "lsusb -v", "~]USD lsusb -v [output truncated] Bus 008 Device 002: ID 03f0:2c24 Hewlett-Packard Logitech M-UAL-96 Mouse Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 8 idVendor 0x03f0 Hewlett-Packard idProduct 0x2c24 Logitech M-UAL-96 Mouse bcdDevice 31.00 iManufacturer 1 iProduct 2 iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 [output truncated]", "lscpu", "~]USD lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 4 On-line CPU(s) list: 0-3 Thread(s) per core: 1 Core(s) per socket: 4 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 23 Stepping: 7 CPU MHz: 1998.000 BogoMIPS: 4999.98 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 3072K NUMA node0 CPU(s): 0-3", "~]# yum install rasdaemon", "~]# systemctl start rasdaemon", "~]# systemctl enable rasdaemon", "~]USD ras-mc-ctl --help Usage: ras-mc-ctl [OPTIONS...] --quiet Quiet operation. --mainboard Print mainboard vendor and model for this hardware. --status Print status of EDAC drivers. output truncated", "~]# ras-mc-ctl --summary Memory controller events summary: Corrected on DIMM Label(s): 'CPU_SrcID#0_Ha#0_Chan#0_DIMM#0' location: 0:0:0:-1 errors: 1 No PCIe AER errors. No Extlog errors. MCE records summary: 1 MEMORY CONTROLLER RD_CHANNEL0_ERR Transaction: Memory read error errors 2 No Error errors", "~]# ras-mc-ctl --errors Memory controller events: 1 3172-02-17 00:47:01 -0500 1 Corrected error(s): memory read error at CPU_SrcID#0_Ha#0_Chan#0_DIMM#0 location: 0:0:0:-1, addr 65928, grain 7, syndrome 0 area:DRAM err_code:0001:0090 socket:0 ha:0 channel_mask:1 rank:0 No PCIe AER errors. No Extlog errors. 
MCE events: 1 3171-11-09 06:20:21 -0500 error: MEMORY CONTROLLER RD_CHANNEL0_ERR Transaction: Memory read error, mcg mcgstatus=0, mci Corrected_error, n_errors=1, mcgcap=0x01000c16, status=0x8c00004000010090, addr=0x1018893000, misc=0x15020a086, walltime=0x57e96780, cpuid=0x00050663, bank=0x00000007 2 3205-06-22 00:13:41 -0400 error: No Error, mcg mcgstatus=0, mci Corrected_error Error_enabled, mcgcap=0x01000c16, status=0x9400000000000000, addr=0x0000abcd, walltime=0x57e967ea, cpuid=0x00050663, bank=0x00000001 3 3205-06-22 00:13:41 -0400 error: No Error, mcg mcgstatus=0, mci Corrected_error Error_enabled, mcgcap=0x01000c16, status=0x9400000000000000, addr=0x00001234, walltime=0x57e967ea, cpu=0x00000001, cpuid=0x00050663, apicid=0x00000002, bank=0x00000002", "install package &hellip;", "~]# yum install net-snmp net-snmp-libs net-snmp-utils", "systemctl start snmpd.service", "systemctl enable snmpd.service", "systemctl stop snmpd.service", "systemctl disable snmpd.service", "systemctl restart snmpd.service", "systemctl reload snmpd.service", "systemctl reload snmpd.service", "~]# snmpwalk -v2c -c public localhost system SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT 2014 x86_64 SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (464) 0:00:04.64 SNMPv2-MIB::sysContact.0 = STRING: Root <root@localhost> (configure /etc/snmp/snmp.local.conf) [output truncated]", "syslocation Datacenter, Row 4, Rack 3 syscontact UNIX Admin <[email protected]>", "~]# systemctl reload snmp.service ~]# snmpwalk -v2c -c public localhost system SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT 2014 x86_64 SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (35424) 0:05:54.24 SNMPv2-MIB::sysContact.0 = STRING: UNIX Admin < [email protected] > SNMPv2-MIB::sysName.0 = STRING: localhost.localdomain SNMPv2-MIB::sysLocation.0 = STRING: Datacenter, Row 4, Rack 3 [output truncated]", "directive community source OID", "rocommunity redhat 127.0.0.1 .1.3.6.1.2.1.1", "~]# snmpwalk -v2c -c redhat localhost system SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT 2014 x86_64 SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (101376) 0:16:53.76 SNMPv2-MIB::sysContact.0 = STRING: UNIX Admin <[email protected]> SNMPv2-MIB::sysName.0 = STRING: localhost.localdomain SNMPv2-MIB::sysLocation.0 = STRING: Datacenter, Row 4, Rack 3 [output truncated]", "~]# systemctl stop snmpd.service ~]# net-snmp-create-v3-user Enter a SNMPv3 user name to create: admin Enter authentication pass-phrase: redhatsnmp Enter encryption pass-phrase: [press return to reuse the authentication pass-phrase] adding the following line to /var/lib/net-snmp/snmpd.conf: createUser admin MD5 \"redhatsnmp\" DES adding the following line to /etc/snmp/snmpd.conf: rwuser admin ~]# systemctl start snmpd.service", "directive user noauth | auth | priv OID", "rwuser admin authpriv .1", "defVersion 3 defSecurityLevel authPriv defSecurityName admin defPassphrase redhatsnmp", "~]USD snmpwalk -v3 localhost system SNMPv2-MIB::sysDescr.0 = STRING: Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT 2014 x86_64 [output truncated]", "~]USD snmptable -Cb localhost 
HOST-RESOURCES-MIB::hrFSTable SNMP table: HOST-RESOURCES-MIB::hrFSTable Index MountPoint RemoteMountPoint Type Access Bootable StorageIndex LastFullBackupDate LastPartialBackupDate 1 \"/\" \"\" HOST-RESOURCES-TYPES::hrFSLinuxExt2 readWrite true 31 0-1-1,0:0:0.0 0-1-1,0:0:0.0 5 \"/dev/shm\" \"\" HOST-RESOURCES-TYPES::hrFSOther readWrite false 35 0-1-1,0:0:0.0 0-1-1,0:0:0.0 6 \"/boot\" \"\" HOST-RESOURCES-TYPES::hrFSLinuxExt2 readWrite false 36 0-1-1,0:0:0.0 0-1-1,0:0:0.0", "~]USD snmpwalk localhost UCD-SNMP-MIB::systemStats UCD-SNMP-MIB::ssIndex.0 = INTEGER: 1 UCD-SNMP-MIB::ssErrorName.0 = STRING: systemStats UCD-SNMP-MIB::ssSwapIn.0 = INTEGER: 0 kB UCD-SNMP-MIB::ssSwapOut.0 = INTEGER: 0 kB UCD-SNMP-MIB::ssIOSent.0 = INTEGER: 0 blocks/s UCD-SNMP-MIB::ssIOReceive.0 = INTEGER: 0 blocks/s UCD-SNMP-MIB::ssSysInterrupts.0 = INTEGER: 29 interrupts/s UCD-SNMP-MIB::ssSysContext.0 = INTEGER: 18 switches/s UCD-SNMP-MIB::ssCpuUser.0 = INTEGER: 0 UCD-SNMP-MIB::ssCpuSystem.0 = INTEGER: 0 UCD-SNMP-MIB::ssCpuIdle.0 = INTEGER: 99 UCD-SNMP-MIB::ssCpuRawUser.0 = Counter32: 2278 UCD-SNMP-MIB::ssCpuRawNice.0 = Counter32: 1395 UCD-SNMP-MIB::ssCpuRawSystem.0 = Counter32: 6826 UCD-SNMP-MIB::ssCpuRawIdle.0 = Counter32: 3383736 UCD-SNMP-MIB::ssCpuRawWait.0 = Counter32: 7629 UCD-SNMP-MIB::ssCpuRawKernel.0 = Counter32: 0 UCD-SNMP-MIB::ssCpuRawInterrupt.0 = Counter32: 434 UCD-SNMP-MIB::ssIORawSent.0 = Counter32: 266770 UCD-SNMP-MIB::ssIORawReceived.0 = Counter32: 427302 UCD-SNMP-MIB::ssRawInterrupts.0 = Counter32: 743442 UCD-SNMP-MIB::ssRawContexts.0 = Counter32: 718557 UCD-SNMP-MIB::ssCpuRawSoftIRQ.0 = Counter32: 128 UCD-SNMP-MIB::ssRawSwapIn.0 = Counter32: 0 UCD-SNMP-MIB::ssRawSwapOut.0 = Counter32: 0", "~]USD snmpwalk localhost UCD-SNMP-MIB::memory UCD-SNMP-MIB::memIndex.0 = INTEGER: 0 UCD-SNMP-MIB::memErrorName.0 = STRING: swap UCD-SNMP-MIB::memTotalSwap.0 = INTEGER: 1023992 kB UCD-SNMP-MIB::memAvailSwap.0 = INTEGER: 1023992 kB UCD-SNMP-MIB::memTotalReal.0 = INTEGER: 1021588 kB UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 634260 kB UCD-SNMP-MIB::memTotalFree.0 = INTEGER: 1658252 kB UCD-SNMP-MIB::memMinimumSwap.0 = INTEGER: 16000 kB UCD-SNMP-MIB::memBuffer.0 = INTEGER: 30760 kB UCD-SNMP-MIB::memCached.0 = INTEGER: 216200 kB UCD-SNMP-MIB::memSwapError.0 = INTEGER: noError(0) UCD-SNMP-MIB::memSwapErrorMsg.0 = STRING:", "~]USD snmptable localhost UCD-SNMP-MIB::laTable SNMP table: UCD-SNMP-MIB::laTable laIndex laNames laLoad laConfig laLoadInt laLoadFloat laErrorFlag laErrMessage 1 Load-1 0.00 12.00 0 0.000000 noError 2 Load-5 0.00 12.00 0 0.000000 noError 3 Load-15 0.00 12.00 0 0.000000 noError", "~]USD snmptable -Cb localhost HOST-RESOURCES-MIB::hrStorageTable SNMP table: HOST-RESOURCES-MIB::hrStorageTable Index Type Descr AllocationUnits Size Used AllocationFailures 1 HOST-RESOURCES-TYPES::hrStorageRam Physical memory 1024 Bytes 1021588 388064 ? 3 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Virtual memory 1024 Bytes 2045580 388064 ? 6 HOST-RESOURCES-TYPES::hrStorageOther Memory buffers 1024 Bytes 1021588 31048 ? 7 HOST-RESOURCES-TYPES::hrStorageOther Cached memory 1024 Bytes 216604 216604 ? 10 HOST-RESOURCES-TYPES::hrStorageVirtualMemory Swap space 1024 Bytes 1023992 0 ? 31 HOST-RESOURCES-TYPES::hrStorageFixedDisk / 4096 Bytes 2277614 250391 ? 35 HOST-RESOURCES-TYPES::hrStorageFixedDisk /dev/shm 4096 Bytes 127698 0 ? 
36 HOST-RESOURCES-TYPES::hrStorageFixedDisk /boot 1024 Bytes 198337 26694 ?", "~]USD snmptable -Cb localhost UCD-DISKIO-MIB::diskIOTable SNMP table: UCD-DISKIO-MIB::diskIOTable Index Device NRead NWritten Reads Writes LA1 LA5 LA15 NReadX NWrittenX 25 sda 216886272 139109376 16409 4894 ? ? ? 216886272 139109376 26 sda1 2455552 5120 613 2 ? ? ? 2455552 5120 27 sda2 1486848 0 332 0 ? ? ? 1486848 0 28 sda3 212321280 139104256 15312 4871 ? ? ? 212321280 139104256", "~]USD snmptable -Cb localhost IF-MIB::ifTable SNMP table: IF-MIB::ifTable Index Descr Type Mtu Speed PhysAddress AdminStatus 1 lo softwareLoopback 16436 10000000 up 2 eth0 ethernetCsmacd 1500 0 52:54:0:c7:69:58 up 3 eth1 ethernetCsmacd 1500 0 52:54:0:a7:a3:24 down", "~]USD snmpwalk localhost IF-MIB::ifDescr IF-MIB::ifDescr.1 = STRING: lo IF-MIB::ifDescr.2 = STRING: eth0 IF-MIB::ifDescr.3 = STRING: eth1 ~]USD snmpwalk localhost IF-MIB::ifOutOctets IF-MIB::ifOutOctets.1 = Counter32: 10060699 IF-MIB::ifOutOctets.2 = Counter32: 650 IF-MIB::ifOutOctets.3 = Counter32: 0 ~]USD snmpwalk localhost IF-MIB::ifInOctets IF-MIB::ifInOctets.1 = Counter32: 10060699 IF-MIB::ifInOctets.2 = Counter32: 78650 IF-MIB::ifInOctets.3 = Counter32: 0", "#!/bin/sh NUMPIDS= pgrep httpd | wc -l exit USDNUMPIDS", "extend name prog args", "extend httpd_pids /bin/sh /usr/local/bin/check_apache.sh", "~]USD snmpwalk localhost NET-SNMP-EXTEND-MIB::nsExtendObjects NET-SNMP-EXTEND-MIB::nsExtendNumEntries.0 = INTEGER: 1 NET-SNMP-EXTEND-MIB::nsExtendCommand.\"httpd_pids\" = STRING: /bin/sh NET-SNMP-EXTEND-MIB::nsExtendArgs.\"httpd_pids\" = STRING: /usr/local/bin/check_apache.sh NET-SNMP-EXTEND-MIB::nsExtendInput.\"httpd_pids\" = STRING: NET-SNMP-EXTEND-MIB::nsExtendCacheTime.\"httpd_pids\" = INTEGER: 5 NET-SNMP-EXTEND-MIB::nsExtendExecType.\"httpd_pids\" = INTEGER: exec(1) NET-SNMP-EXTEND-MIB::nsExtendRunType.\"httpd_pids\" = INTEGER: run-on-read(1) NET-SNMP-EXTEND-MIB::nsExtendStorage.\"httpd_pids\" = INTEGER: permanent(4) NET-SNMP-EXTEND-MIB::nsExtendStatus.\"httpd_pids\" = INTEGER: active(1) NET-SNMP-EXTEND-MIB::nsExtendOutput1Line.\"httpd_pids\" = STRING: NET-SNMP-EXTEND-MIB::nsExtendOutputFull.\"httpd_pids\" = STRING: NET-SNMP-EXTEND-MIB::nsExtendOutNumLines.\"httpd_pids\" = INTEGER: 1 NET-SNMP-EXTEND-MIB::nsExtendResult.\"httpd_pids\" = INTEGER: 8 NET-SNMP-EXTEND-MIB::nsExtendOutLine.\"httpd_pids\".1 = STRING:", "#!/bin/sh PATTERN=USD1 NUMPIDS= pgrep USDPATTERN | wc -l echo \"There are USDNUMPIDS USDPATTERN processes.\" exit USDNUMPIDS", "extend httpd_pids /bin/sh /usr/local/bin/check_proc.sh httpd extend snmpd_pids /bin/sh /usr/local/bin/check_proc.sh snmpd", "~]USD snmpwalk localhost NET-SNMP-EXTEND-MIB::nsExtendObjects NET-SNMP-EXTEND-MIB::nsExtendNumEntries.0 = INTEGER: 2 NET-SNMP-EXTEND-MIB::nsExtendCommand.\"httpd_pids\" = STRING: /bin/sh NET-SNMP-EXTEND-MIB::nsExtendCommand.\"snmpd_pids\" = STRING: /bin/sh NET-SNMP-EXTEND-MIB::nsExtendArgs.\"httpd_pids\" = STRING: /usr/local/bin/check_proc.sh httpd NET-SNMP-EXTEND-MIB::nsExtendArgs.\"snmpd_pids\" = STRING: /usr/local/bin/check_proc.sh snmpd NET-SNMP-EXTEND-MIB::nsExtendInput.\"httpd_pids\" = STRING: NET-SNMP-EXTEND-MIB::nsExtendInput.\"snmpd_pids\" = STRING: NET-SNMP-EXTEND-MIB::nsExtendResult.\"httpd_pids\" = INTEGER: 8 NET-SNMP-EXTEND-MIB::nsExtendResult.\"snmpd_pids\" = INTEGER: 1 NET-SNMP-EXTEND-MIB::nsExtendOutLine.\"httpd_pids\".1 = STRING: There are 8 httpd processes. 
NET-SNMP-EXTEND-MIB::nsExtendOutLine.\"snmpd_pids\".1 = STRING: There are 1 snmpd processes.", "~]USD snmpget localhost 'NET-SNMP-EXTEND-MIB::nsExtendResult.\"httpd_pids\"' UCD-SNMP-MIB::memAvailReal.0 NET-SNMP-EXTEND-MIB::nsExtendResult.\"httpd_pids\" = INTEGER: 8 UCD-SNMP-MIB::memAvailReal.0 = INTEGER: 799664 kB", "use NetSNMP::agent (':all'); my USDagent = new NetSNMP::agent();", "USDagent->register(\"hello_world\", \".1.3.6.1.4.1.8072.9999.9999\", \\&hello_handler);", "USDrequest->setValue(ASN_OCTET_STR, \"hello world\");", "my USDrequest; my USDstring_value = \"hello world\"; my USDinteger_value = \"8675309\"; for(USDrequest = USDrequests; USDrequest; USDrequest = USDrequest->next()) { my USDoid = USDrequest->getOID(); if (USDrequest_info->getMode() == MODE_GET) { if (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setValue(ASN_OCTET_STR, USDstring_value); } elsif (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.1\")) { USDrequest->setValue(ASN_INTEGER, USDinteger_value); } } elsif (USDrequest_info->getMode() == MODE_GETNEXT) { if (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setOID(\".1.3.6.1.4.1.8072.9999.9999.1.1\"); USDrequest->setValue(ASN_INTEGER, USDinteger_value); } elsif (USDoid < new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setOID(\".1.3.6.1.4.1.8072.9999.9999.1.0\"); USDrequest->setValue(ASN_OCTET_STR, USDstring_value); } } }", "#!/usr/bin/perl use NetSNMP::agent (':all'); use NetSNMP::ASN qw(ASN_OCTET_STR ASN_INTEGER); sub hello_handler { my (USDhandler, USDregistration_info, USDrequest_info, USDrequests) = @_; my USDrequest; my USDstring_value = \"hello world\"; my USDinteger_value = \"8675309\"; for(USDrequest = USDrequests; USDrequest; USDrequest = USDrequest->next()) { my USDoid = USDrequest->getOID(); if (USDrequest_info->getMode() == MODE_GET) { if (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setValue(ASN_OCTET_STR, USDstring_value); } elsif (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.1\")) { USDrequest->setValue(ASN_INTEGER, USDinteger_value); } } elsif (USDrequest_info->getMode() == MODE_GETNEXT) { if (USDoid == new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setOID(\".1.3.6.1.4.1.8072.9999.9999.1.1\"); USDrequest->setValue(ASN_INTEGER, USDinteger_value); } elsif (USDoid < new NetSNMP::OID(\".1.3.6.1.4.1.8072.9999.9999.1.0\")) { USDrequest->setOID(\".1.3.6.1.4.1.8072.9999.9999.1.0\"); USDrequest->setValue(ASN_OCTET_STR, USDstring_value); } } } } my USDagent = new NetSNMP::agent(); USDagent->register(\"hello_world\", \".1.3.6.1.4.1.8072.9999.9999\", \\&hello_handler);", "perl do \"/usr/share/snmp/hello_world.pl\"", "~]USD snmpwalk localhost NET-SNMP-MIB::netSnmpPlaypen NET-SNMP-MIB::netSnmpPlaypen.1.0 = STRING: \"hello world\" NET-SNMP-MIB::netSnmpPlaypen.1.1 = INTEGER: 8675309", "~]USD snmpget localhost NET-SNMP-MIB::netSnmpPlaypen.1.0 NET-SNMP-MIB::netSnmpPlaypen.1.1 NET-SNMP-MIB::netSnmpPlaypen.1.0 = STRING: \"hello world\" NET-SNMP-MIB::netSnmpPlaypen.1.1 = INTEGER: 8675309" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-system_monitoring_tools
Providing feedback on Red Hat JBoss Web Server documentation
Providing feedback on Red Hat JBoss Web Server documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_3_release_notes/providing-direct-documentation-feedback_6.0.3_rn
Chapter 5. KVM Paravirtualized (virtio) Drivers
Chapter 5. KVM Paravirtualized (virtio) Drivers Paravirtualized drivers enhance the performance of guests, decreasing guest I/O latency and increasing throughput almost to bare-metal levels. It is recommended to use the paravirtualized drivers for fully virtualized guests running I/O-heavy tasks and applications. Virtio drivers are KVM's paravirtualized device drivers, available for guest virtual machines running on KVM hosts. These drivers are included in the virtio package. The virtio package supports block (storage) devices and network interface controllers. Note PCI devices are limited by the virtualized system architecture. See Chapter 16, Guest Virtual Machine Device Configuration for additional limitations when using assigned devices. 5.1. Using KVM virtio Drivers for Existing Storage Devices You can modify an existing hard disk device attached to the guest to use the virtio driver instead of the virtualized IDE driver. The example shown in this section edits libvirt configuration files. Note that the guest virtual machine does not need to be shut down to perform these steps; however, the change will not be applied until the guest is completely shut down and rebooted. Procedure 5.1. Using KVM virtio drivers for existing devices Ensure that you have installed the appropriate driver ( viostor ) before continuing with this procedure. Run the virsh edit guestname command as root to edit the XML configuration file for your device. For example, virsh edit guest1 . The configuration files are located in the /etc/libvirt/qemu/ directory. Below is a file-based block device using the virtualized IDE driver. This is a typical entry for a virtual machine not using the virtio drivers. Change the entry to use the virtio device by modifying the bus= entry to virtio . Note that if the disk was previously IDE, it has a target similar to hda , hdb , or hdc . When changing to bus=virtio the target needs to be changed to vda , vdb , or vdc accordingly. Remove the address tag inside the disk tags. This must be done for this procedure to work. Libvirt will regenerate the address tag appropriately the next time the virtual machine is started. Alternatively, virt-manager , virsh attach-disk , or virsh attach-interface can add a new device using the virtio drivers. See the libvirt website for more details on using Virtio: http://www.linux-kvm.org/page/Virtio
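For the attach-disk alternative mentioned above, adding a brand-new disk on the virtio bus from the command line can be sketched as follows. This is a hedged example, not a required procedure: guest1 reuses the name from the earlier virsh edit example, the image path and size are arbitrary, and the vdb target assumes that name is not already in use inside the guest.
# Create a new raw disk image and attach it to the guest on the virtio bus.
# --persistent writes the device into the guest XML so it survives a restart.
qemu-img create -f raw /var/lib/libvirt/images/disk2.img 10G
virsh attach-disk guest1 /var/lib/libvirt/images/disk2.img vdb --targetbus virtio --persistent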
[ "<disk type='file' device='disk'> <source file='/var/lib/libvirt/images/disk1.img'/> <target dev='hda' bus='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </disk>", "<disk type='file' device='disk'> <source file='/var/lib/libvirt/images/disk1.img'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </disk>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-KVM_Para_virtualized_virtio_Drivers
Chapter 14. Synchronizing template repositories
Chapter 14. Synchronizing template repositories In Satellite, you can synchronize repositories of job templates, provisioning templates, report templates, and partition table templates between Satellite Server and a version control system or local directory. In this chapter, a Git repository is used for demonstration purposes. This section details the workflow for installing and configuring the Template Sync plugin and performing exporting and importing tasks. 14.1. Enabling the Template Sync plugin Procedure To enable the plugin on your Satellite Server, enter the following command: To verify that the plugin is installed correctly, ensure Administer > Settings includes the Template Sync menu. Optional: In the Satellite web UI, navigate to Administer > Settings > Template Sync to configure the plugin. For more information, see Template sync settings in Administering Red Hat Satellite . 14.2. Using repository sources You can use existing repositories or local directories to synchronize templates with your Satellite Server. 14.2.1. Synchronizing templates with an existing repository Use this procedure to synchronize templates between your Satellite Server and an existing repository. Procedure If you want to use HTTPS to connect to the repository and you use a self-signed certificate authority (CA) on your Git server: Create a new directory under the /usr/share/foreman/ directory to store the Git configuration for the certificate: Create a file named config in the new directory: Allow the foreman user access to the .config directory: Update the Git global configuration for the foreman user with the path to your self-signed CA certificate: If you want to use SSH to connect to the repository: Create an SSH key pair if you do not already have it. Do not specify a passphrase. Configure your version control server with the public key from your Satellite, which resides in /usr/share/foreman/.ssh/id_rsa.pub . Accept the Git SSH host key as the foreman user: Configure the Template Sync plugin settings on a Template Sync tab. Change the Branch setting to match the target branch on a Git server. Change the Repo setting to match the Git repository. For example, for the repository located in [email protected]/templates.git set the setting into [email protected]/templates.git . 14.2.2. Synchronizing templates with a local directory Synchronizing templates with a local directory is useful if you have configured a version control repository in the local directory. That way, you can edit templates and track the history of edits in the directory. You can also synchronize changes to Satellite Server after editing the templates. Prerequisites Each template must contain the location and organization that the template belongs to. This applies to all template types. Before you import a template, ensure that you add the following section to the template: Procedure In /var/lib/foreman , create a directory for storing templates: Note You can place your templates to a custom directory outside /var/lib/foreman , but you have to ensure that the Foreman service can read its contents. The directory must have the correct file permissions and the foreman_lib_t SELinux label. Change the owner of the new templates directory to the foreman user: Change the Repo setting on the Template Sync tab to match the /var/lib/foreman/ My_Templates_Dir / directory. 14.3. Importing and exporting templates You can import and export templates using the Satellite web UI, Hammer CLI, or Satellite API. 
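For the custom template directory described in Section 14.2.2, one hedged way to give it the required ownership and the foreman_lib_t SELinux label is sketched below; the /srv/templates path is only an example, and semanage is assumed to be available from the policycoreutils-python-utils package.
# Create the custom directory, hand it to the foreman user, and label it so the
# Foreman service is permitted to read the templates stored there.
mkdir /srv/templates
chown foreman /srv/templates
semanage fcontext -a -t foreman_lib_t "/srv/templates(/.*)?"
restorecon -Rv /srv/templates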
Satellite API calls use the role-based access control system, which enables the tasks to be executed as any user. You can synchronize templates with a version control system, such as Git, or a local directory. 14.3.1. Importing templates You can import templates from a repository of your choice. You can use different protocols to point to your repository, for example /tmp/dir , git://example.com , https://example.com , and ssh://example.com . Note The templates provided by Satellite are locked and you cannot import them by default. To override this behavior, change the Force import setting in the Template Sync menu to yes or add the force parameter -d '{ "force": "true" }' to the import command. Prerequisites Each template must contain the location and organization that the template belongs to. This applies to all template types. Before you import a template, ensure that you add the following section to the template: To use the CLI instead of the Satellite web UI, see the CLI procedure below. To use the API, see the API procedure below. Procedure In the Satellite web UI, navigate to Hosts > Templates > Sync Templates . Click Import . Each field is populated with values configured in Administer > Settings > Template Sync . Change the values as required for the templates you want to import. For more information about each field, see Template sync settings in Administering Red Hat Satellite . Click Submit . The Satellite web UI displays the status of the import. The status is not persistent; if you leave the status page, you cannot return to it. CLI procedure To import a template from a repository, enter the following command: For better indexing and management of your templates, use --prefix to set a category for your templates. To select certain templates from a large repository, use --filter to define the title of the templates that you want to import. For example, --filter '.*Ansible Default$' imports various Ansible Default templates. API procedure Send a POST request to api/v2/templates/import : If the import is successful, you receive {"message":"Success"} . 14.3.2. Exporting templates Use this procedure to export templates to a Git repository. To use the CLI instead of the Satellite web UI, see the CLI procedure below. To use the API, see the API procedure below. Procedure In the Satellite web UI, navigate to Hosts > Templates > Sync Templates . Click Export . Each field is populated with values configured in Administer > Settings > Template Sync . Change the values as required for the templates you want to export. For more information about each field, see Template sync settings in Administering Red Hat Satellite . Click Submit . The Satellite web UI displays the status of the export. The status is not persistent; if you leave the status page, you cannot return to it. CLI procedure To export the templates to a repository, enter the following command: Note This command clones the repository, makes changes in a commit, and pushes back to the repository. You can use the --branch " My_Branch " option to export the templates to a specific branch. API procedure Send a POST request to api/v2/templates/export : If the export is successful, you receive {"message":"Success"} . Note You can override default API settings by specifying them in the request with the -d parameter. The following example exports templates to the git.example.com/templates repository:
[ "satellite-installer --enable-foreman-plugin-templates", "mkdir --parents /usr/share/foreman/.config/git", "touch /usr/share/foreman/.config/git/config", "chown --recursive foreman /usr/share/foreman/.config", "sudo --user foreman git config --global http.sslCAPath Path_To_CA_Certificate", "sudo --user foreman ssh-keygen", "sudo --user foreman ssh git.example.com", "<%# kind: provision name: My_Provisioning_Template oses: - My_first_OS - My_second_OS locations: - My_first_Location - My_second_Location organizations: - My_first_Organization - My_second_Organization %>", "mkdir /var/lib/foreman/ My_Templates_Dir", "chown foreman /var/lib/foreman/ My_Templates_Dir", "<%# kind: provision name: My_Provisioning_Template oses: - My_first_OS - My_second_OS locations: - My_first_Location - My_second_Location organizations: - My_first_Organization - My_second_Organization %>", "hammer import-templates --branch \" My_Branch \" --filter '.* Template NameUSD ' --organization \" My_Organization \" --prefix \"[ Custom Index ] \" --repo \" https://git.example.com/path/to/repository \"", "curl -H \"Accept:application/json\" -H \"Content-Type:application/json\" -u login : password -k https:// satellite.example.com /api/v2/templates/import -X POST", "hammer export-templates --organization \" My_Organization \" --repo \" https://git.example.com/path/to/repository \"", "curl -H \"Accept:application/json\" -H \"Content-Type:application/json\" -u login : password -k https:// satellite.example.com /api/v2/templates/export -X POST", "curl -H \"Accept:application/json\" -H \"Content-Type:application/json\" -u login:password -k https:// satellite.example.com /api/v2/templates/export -X POST -d \"{\\\"repo\\\":\\\"git.example.com/templates\\\"}\"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/synchronizing_templates_repositories_managing-hosts
Chapter 9. Scaling storage of IBM Power OpenShift Data Foundation cluster
Chapter 9. Scaling storage of IBM Power OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on IBM Power cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 9.1. Scaling up storage by adding capacity to your OpenShift Data Foundation nodes on IBM Power infrastructure using local storage devices In order to scale up an OpenShift Data Foundation cluster which was created using local storage devices, a new disk needs to be added to the storage node. It is recommended to have the new disks of the same size as used earlier during the deployment as OpenShift Data Foundation does not support heterogeneous disks/OSD's. You can add storage capacity (additional storage devices) to your configured local storage based OpenShift Data Foundation worker nodes on IBM Power infrastructures. Note Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. Prerequisites You must be logged into the OpenShift Container Platform cluster. You must have installed the local storage operator. Use the following procedure: Installing Local Storage Operator on IBM Power You must have three OpenShift Container Platform worker nodes with the same storage type and size attached to each node (for example, 0.5TB SSD) as the original OpenShift Data Foundation StorageCluster was created with. Procedure To add storage capacity to OpenShift Container Platform nodes with OpenShift Data Foundation installed, you need to Find the available devices that you want to add, that is, a minimum of one device per worker node. You can follow the procedure for finding available storage devices in the respective deployment guide. Note Make sure you perform this process for all the existing nodes (minimum of 3) for which you want to add storage. Add the additional disks to the LocalVolume custom resource (CR). Example output: Make sure to save the changes after editing the CR. Example output: You can see in this CR that new devices are added. sdx Display the newly created Persistent Volumes (PVs) with the storageclass name used in the localVolume CR. Example output: Navigate to the OpenShift Web Console. Click Operators on the left navigation bar. Select Installed Operators . In the window, click OpenShift Data Foundation Operator. In the top navigation bar, scroll right and click Storage System tab. Click the Action menu (...) to the visible list to extend the options menu. Select Add Capacity from the options menu. From this dialog box, set the Storage Class name to the name used in the localVolume CR. Available Capacity displayed is based on the local disks available in storage class. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the available Capacity. In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . Navigate to Overview Block and File tab, then check the Raw Capacity card. Note that the capacity increases based on your selections. 
Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 9.2. Scaling out storage capacity on a IBM Power cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. Practically there is no limit on the number of nodes which can be added but from the support perspective 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps: Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 9.2.1. Adding a node using a local storage device on IBM Power You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in the multiple of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3 nodes, you have the flexibility to add one node at a time in flexible scaling deployment. See Knowledgebase article Verify if flexible scaling is enabled Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during initial OpenShift Data Foundation deployment. Prerequisites You must be logged into the OpenShift Container Platform cluster. You must have three OpenShift Container Platform worker nodes with the same storage type and size attached to each node (for example, 2TB SSD drive) as the original OpenShift Data Foundation StorageCluster was created with. Procedure Get a new IBM Power machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform node using the new IBM Power machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . 
Add cluster.ocs.openshift.io/openshift-storage and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume tab. Beside the LocalVolume , click Action menu (...) Edit Local Volume . In the YAML, add the hostname of the new node in the values field under the node selector . Figure 9.1. YAML showing the addition of new hostnames Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 9.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster .
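As a command-line complement to the console-based verification steps above, the following sketch can confirm that the new OSD pods are running and that their backing PVCs are Bound. The app=rook-ceph-osd label is the one Rook normally applies to OSD pods, and ocs-deviceset is the usual PVC name prefix; adjust these if your deployment differs.
# List the OSD pods and the nodes they run on; the count should have grown
# by the number of devices that were added.
oc get pods -n openshift-storage -l app=rook-ceph-osd -o wide
# List the device-set PVCs that back the OSDs; each should report Bound.
oc get pvc -n openshift-storage | grep ocs-deviceset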
[ "oc edit -n openshift-local-storage localvolume localblock", "spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda - /dev/sdx # newly added device storageClassName: localblock volumeMode: Block", "localvolume.local.storage.openshift.io/localblock edited", "oc get pv | grep localblock | grep Available", "local-pv-a04ffd8 500Gi RWO Delete Available localblock 24s local-pv-a0ca996b 500Gi RWO Delete Available localblock 23s local-pv-c171754a 500Gi RWO Delete Available localblock 23s", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/scaling_storage/scaling_storage_of_ibm_power_openshift_data_foundation_cluster
Chapter 18. Configuring ingress cluster traffic
Chapter 18. Configuring ingress cluster traffic 18.1. Configuring ingress cluster traffic overview OpenShift Container Platform provides the following methods for communicating from outside the cluster with services running in the cluster. The methods are recommended, in order of preference: If you have HTTP/HTTPS, use an Ingress Controller. If you have a TLS-encrypted protocol other than HTTPS (for example, TLS with the SNI header), use an Ingress Controller. Otherwise, use a Load Balancer, an External IP, or a NodePort . Method Purpose Use an Ingress Controller Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS (for example, TLS with the SNI header). Automatically assign an external IP using a load balancer service Allows traffic to non-standard ports through an IP address assigned from a pool. Most cloud platforms offer a method to start a service with a load-balancer IP address. About MetalLB and the MetalLB Operator Allows traffic to a specific IP address or an address from a pool on the machine network. For bare-metal installations or platforms that are like bare metal, MetalLB provides a way to start a service with a load-balancer IP address. Manually assign an external IP to a service Allows traffic to non-standard ports through a specific IP address. Configure a NodePort Expose a service on all nodes in the cluster. 18.1.1. Comparison: Fault tolerant access to external IP addresses For the communication methods that provide access to an external IP address, fault tolerant access to the IP address is another consideration. The following features provide fault tolerant access to an external IP address. IP failover IP failover manages a pool of virtual IP addresses for a set of nodes. It is implemented with Keepalived and Virtual Router Redundancy Protocol (VRRP). IP failover is a layer 2 mechanism only and relies on multicast. Multicast can have disadvantages for some networks. MetalLB MetalLB has a layer 2 mode, but it does not use multicast. Layer 2 mode has a disadvantage that it transfers all traffic for an external IP address through one node. Manually assigning external IP addresses You can configure your cluster with an IP address block that is used to assign external IP addresses to services. By default, this feature is disabled. This feature is flexible, but places the largest burden on the cluster or network administrator. The cluster is prepared to receive traffic that is destined for the external IP, but each customer has to decide how they want to route traffic to nodes.
You must configure your networking infrastructure to ensure that the external IP address blocks that you define are routed to the cluster. OpenShift Container Platform extends the ExternalIP functionality in Kubernetes by adding the following capabilities: Restrictions on the use of external IP addresses by users through a configurable policy Allocation of an external IP address automatically to a service upon request Warning Disabled by default, use of ExternalIP functionality can be a security risk, because in-cluster traffic to an external IP address is directed to that service. This could allow cluster users to intercept sensitive traffic destined for external resources. Important This feature is supported only in non-cloud deployments. For cloud deployments, use the load balancer services for automatic deployment of a cloud load balancer to target the endpoints of a service. You can assign an external IP address in the following ways: Automatic assignment of an external IP OpenShift Container Platform automatically assigns an IP address from the autoAssignCIDRs CIDR block to the spec.externalIPs[] array when you create a Service object with spec.type=LoadBalancer set. In this case, OpenShift Container Platform implements a non-cloud version of the load balancer service type and assigns IP addresses to the services. Automatic assignment is disabled by default and must be configured by a cluster administrator as described in the following section. Manual assignment of an external IP OpenShift Container Platform uses the IP addresses assigned to the spec.externalIPs[] array when you create a Service object. You cannot specify an IP address that is already in use by another service. 18.2.2.1. Configuration for ExternalIP Use of an external IP address in OpenShift Container Platform is governed by the following fields in the Network.config.openshift.io CR named cluster : spec.externalIP.autoAssignCIDRs defines an IP address block used by the load balancer when choosing an external IP address for the service. OpenShift Container Platform supports only a single IP address block for automatic assignment. This can be simpler than having to manage the port space of a limited number of shared IP addresses when manually assigning ExternalIPs to services. If automatic assignment is enabled, a Service object with spec.type=LoadBalancer is allocated an external IP address. spec.externalIP.policy defines the permissible IP address blocks when manually specifying an IP address. OpenShift Container Platform does not apply policy rules to IP address blocks defined by spec.externalIP.autoAssignCIDRs . If routed correctly, external traffic from the configured external IP address block can reach service endpoints through any TCP or UDP port that the service exposes. Important As a cluster administrator, you must configure routing to externalIPs on both OpenShiftSDN and OVN-Kubernetes network types. You must also ensure that the IP address block you assign terminates at one or more nodes in your cluster. For more information, see Kubernetes External IPs . OpenShift Container Platform supports both the automatic and manual assignment of IP addresses, and each address is guaranteed to be assigned to a maximum of one service. This ensures that each service can expose its chosen ports regardless of the ports exposed by other services. Note To use IP address blocks defined by autoAssignCIDRs in OpenShift Container Platform, you must configure the necessary IP address assignment and routing for your host network. 
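If automatic assignment is wanted, one minimal way to set the autoAssignCIDRs block from the command line is to patch the cluster Network resource; the 192.168.132.254/29 range here is only an illustration and must be a block that your infrastructure actually routes to the cluster. Editing the resource with oc edit, shown later in this section, works equally well.
# Merge an automatic-assignment CIDR into the cluster-wide Network configuration.
oc patch networks.config cluster --type=merge \
  -p '{"spec":{"externalIP":{"autoAssignCIDRs":["192.168.132.254/29"]}}}'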
The following YAML describes a service with an external IP address configured: Example Service object with spec.externalIPs[] set apiVersion: v1 kind: Service metadata: name: http-service spec: clusterIP: 172.30.163.110 externalIPs: - 192.168.132.253 externalTrafficPolicy: Cluster ports: - name: highport nodePort: 31903 port: 30102 protocol: TCP targetPort: 30102 selector: app: web sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 192.168.132.253 18.2.2.2. Restrictions on the assignment of an external IP address As a cluster administrator, you can specify IP address blocks to allow and to reject. Restrictions apply only to users without cluster-admin privileges. A cluster administrator can always set the service spec.externalIPs[] field to any IP address. You configure IP address policy with a policy object defined by specifying the spec.ExternalIP.policy field. The policy object has the following shape: { "policy": { "allowedCIDRs": [], "rejectedCIDRs": [] } } When configuring policy restrictions, the following rules apply: If policy={} is set, then creating a Service object with spec.ExternalIPs[] set will fail. This is the default for OpenShift Container Platform. The behavior when policy=null is set is identical. If policy is set and either policy.allowedCIDRs[] or policy.rejectedCIDRs[] is set, the following rules apply: If allowedCIDRs[] and rejectedCIDRs[] are both set, then rejectedCIDRs[] has precedence over allowedCIDRs[] . If allowedCIDRs[] is set, creating a Service object with spec.ExternalIPs[] will succeed only if the specified IP addresses are allowed. If rejectedCIDRs[] is set, creating a Service object with spec.ExternalIPs[] will succeed only if the specified IP addresses are not rejected. 18.2.2.3. Example policy objects The examples that follow demonstrate several different policy configurations. In the following example, the policy prevents OpenShift Container Platform from creating any service with an external IP address specified: Example policy to reject any value specified for Service object spec.externalIPs[] apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: {} ... In the following example, both the allowedCIDRs and rejectedCIDRs fields are set. Example policy that includes both allowed and rejected CIDR blocks apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 172.16.66.10/23 rejectedCIDRs: - 172.16.66.10/24 ... In the following example, policy is set to null . If set to null , when inspecting the configuration object by entering oc get networks.config.openshift.io -o yaml , the policy field will not appear in the output. Example policy to allow any value specified for Service object spec.externalIPs[] apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: null ... 18.2.3. ExternalIP address block configuration The configuration for ExternalIP address blocks is defined by a Network custom resource (CR) named cluster . The Network CR is part of the config.openshift.io API group. Important During cluster installation, the Cluster Version Operator (CVO) automatically creates a Network CR named cluster . Creating any other CR objects of this type is not supported. 
The following YAML describes the ExternalIP configuration: Network.config.openshift.io CR named cluster apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: [] 1 policy: 2 ... 1 Defines the IP address block in CIDR format that is available for automatic assignment of external IP addresses to a service. Only a single IP address range is allowed. 2 Defines restrictions on manual assignment of an IP address to a service. If no restrictions are defined, specifying the spec.externalIP field in a Service object is not allowed. By default, no restrictions are defined. The following YAML describes the fields for the policy stanza: Network.config.openshift.io policy stanza policy: allowedCIDRs: [] 1 rejectedCIDRs: [] 2 1 A list of allowed IP address ranges in CIDR format. 2 A list of rejected IP address ranges in CIDR format. Example external IP configurations Several possible configurations for external IP address pools are displayed in the following examples: The following YAML describes a configuration that enables automatically assigned external IP addresses: Example configuration with spec.externalIP.autoAssignCIDRs set apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: ... externalIP: autoAssignCIDRs: - 192.168.132.254/29 The following YAML configures policy rules for the allowed and rejected CIDR ranges: Example configuration with spec.externalIP.policy set apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: ... externalIP: policy: allowedCIDRs: - 192.168.132.0/29 - 192.168.132.8/29 rejectedCIDRs: - 192.168.132.7/32 18.2.4. Configure external IP address blocks for your cluster As a cluster administrator, you can configure the following ExternalIP settings: An ExternalIP address block used by OpenShift Container Platform to automatically populate the spec.externalIPs[] array of a Service object. A policy object to restrict what IP addresses may be manually assigned to the spec.externalIPs[] array of a Service object. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. Procedure Optional: To display the current external IP configuration, enter the following command: USD oc describe networks.config cluster To edit the configuration, enter the following command: USD oc edit networks.config cluster Modify the ExternalIP configuration, as in the following example: apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: ... externalIP: 1 ... 1 Specify the configuration for the externalIP stanza. To confirm the updated ExternalIP configuration, enter the following command: USD oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{"\n"}}' 18.2.5. Next steps Configuring ingress cluster traffic for a service external IP 18.3. Configuring ingress cluster traffic using an Ingress Controller OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses an Ingress Controller. 18.3.1. Using Ingress Controllers and routes The Ingress Operator manages Ingress Controllers and wildcard DNS. Using an Ingress Controller is the most common way to allow external access to an OpenShift Container Platform cluster. An Ingress Controller is configured to accept external requests and proxy them based on the configured routes.
This is limited to HTTP, HTTPS using SNI, and TLS using SNI, which is sufficient for web applications and services that work over TLS with SNI. Work with your administrator to configure an Ingress Controller to accept external requests and proxy them based on the configured routes. The administrator can create a wildcard DNS entry and then set up an Ingress Controller. Then, you can work with the edge Ingress Controller without having to contact the administrators. By default, every ingress controller in the cluster can admit any route created in any project in the cluster. The Ingress Controller: Has two replicas by default, which means it should be running on two worker nodes. Can be scaled up to have more replicas on more nodes. Note The procedures in this section require prerequisites performed by the cluster administrator. 18.3.2. Prerequisites Before starting the following procedures, the administrator must: Set up the external port to the cluster networking environment so that requests can reach the cluster. Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. 18.3.3. Creating a project and service If the project and service that you want to expose do not exist, first create the project, then the service. If the project and service already exist, skip to the procedure on exposing the service to create a route. Prerequisites Install the oc CLI and log in as a cluster administrator. Procedure Create a new project for your service by running the oc new-project command: USD oc new-project myproject Use the oc new-app command to create your service: USD oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git To verify that the service was created, run the following command: USD oc get svc -n myproject Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s By default, the new service does not have an external IP address. 18.3.4. Exposing the service by creating a route You can expose the service as a route by using the oc expose command. Procedure To expose the service: Log in to OpenShift Container Platform. Log in to the project where the service you want to expose is located: USD oc project myproject Run the oc expose service command to expose the route: USD oc expose service nodejs-ex Example output route.route.openshift.io/nodejs-ex exposed To verify that the service is exposed, you can use a tool, such as cURL, to make sure the service is accessible from outside the cluster. Use the oc get route command to find the route's host name: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None Use cURL to check that the host responds to a GET request: USD curl --head nodejs-ex-myproject.example.com Example output HTTP/1.1 200 OK ... 18.3.5. Configuring Ingress Controller sharding by using route labels Ingress Controller sharding by using route labels means that the Ingress Controller serves any route in any namespace that is selected by the route selector. 
Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Procedure Edit the router-internal.yaml file: # cat router-internal.yaml apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" routeSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: "" selfLink: "" Apply the Ingress Controller router-internal.yaml file: # oc apply -f router-internal.yaml The Ingress Controller selects routes that have the label type: sharded , in any namespace. 18.3.6. Configuring Ingress Controller sharding by using namespace labels Ingress Controller sharding by using namespace labels means that the Ingress Controller serves any route in any namespace that is selected by the namespace selector. Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Warning If you deploy the Keepalived Ingress VIP, do not deploy a non-default Ingress Controller with the endpointPublishingStrategy parameter set to HostNetwork . Doing so might cause issues. Use NodePort instead of HostNetwork for endpointPublishingStrategy . Procedure Edit the router-internal.yaml file: # cat router-internal.yaml Example output apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" namespaceSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: "" selfLink: "" Apply the Ingress Controller router-internal.yaml file: # oc apply -f router-internal.yaml The Ingress Controller selects routes in any namespace that is selected by the namespace selector, that is, any namespace that has the label type: sharded . 18.3.7. Additional resources The Ingress Operator manages wildcard DNS. For more information, see Ingress Operator in OpenShift Container Platform , Installing a cluster on bare metal , and Installing a cluster on vSphere . 18.4. Configuring ingress cluster traffic using a load balancer OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses a load balancer. 18.4.1. Using a load balancer to get traffic into the cluster If you do not need a specific external IP address, you can configure a load balancer service to allow external access to an OpenShift Container Platform cluster. A load balancer service allocates a unique IP. The load balancer has a single edge router IP, which can be a virtual IP (VIP), but is still a single machine for initial load balancing. Note If a pool is configured, it is done at the infrastructure level, not by a cluster administrator. Note The procedures in this section require prerequisites performed by the cluster administrator. 18.4.2.
Prerequisites Before starting the following procedures, the administrator must: Set up the external port to the cluster networking environment so that requests can reach the cluster. Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. 18.4.3. Creating a project and service If the project and service that you want to expose do not exist, first create the project, then the service. If the project and service already exist, skip to the procedure on exposing the service to create a route. Prerequisites Install the oc CLI and log in as a cluster administrator. Procedure Create a new project for your service by running the oc new-project command: USD oc new-project myproject Use the oc new-app command to create your service: USD oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git To verify that the service was created, run the following command: USD oc get svc -n myproject Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s By default, the new service does not have an external IP address. 18.4.4. Exposing the service by creating a route You can expose the service as a route by using the oc expose command. Procedure To expose the service: Log in to OpenShift Container Platform. Log in to the project where the service you want to expose is located: USD oc project myproject Run the oc expose service command to expose the route: USD oc expose service nodejs-ex Example output route.route.openshift.io/nodejs-ex exposed To verify that the service is exposed, you can use a tool, such as cURL, to make sure the service is accessible from outside the cluster. Use the oc get route command to find the route's host name: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None Use cURL to check that the host responds to a GET request: USD curl --head nodejs-ex-myproject.example.com Example output HTTP/1.1 200 OK ... 18.4.5. Creating a load balancer service Use the following procedure to create a load balancer service. Prerequisites Make sure that the project and service you want to expose exist. Procedure To create a load balancer service: Log in to OpenShift Container Platform. Load the project where the service you want to expose is located. USD oc project project1 Open a text file on the control plane node and paste the following text, editing the file as needed: Sample load balancer configuration file 1 Enter a descriptive name for the load balancer service. 2 Enter the same port that the service you want to expose is listening on. 3 Enter a list of specific IP addresses to restrict traffic through the load balancer. This field is ignored if the cloud-provider does not support the feature. 4 Enter Loadbalancer as the type. 5 Enter the name of the service. Note To restrict traffic through the load balancer to specific IP addresses, it is recommended to use the service.beta.kubernetes.io/load-balancer-source-ranges annotation rather than setting the loadBalancerSourceRanges field. 
With the annotation, you can more easily migrate to the OpenShift API, which will be implemented in a future release. Save and exit the file. Run the following command to create the service: USD oc create -f <file-name> For example: USD oc create -f mysql-lb.yaml Execute the following command to view the new service: USD oc get svc Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m The service has an external IP address automatically assigned if there is a cloud provider enabled. On the master, use a tool, such as cURL, to make sure you can reach the service using the public IP address: USD curl <public-ip>:<port> For example: USD curl 172.29.121.74:3306 The examples in this section use a MySQL service, which requires a client application. If you get a string of characters with the Got packets out of order message, you are connected to the service: If you have a MySQL client, log in with the standard CLI command: USD mysql -h 172.30.131.89 -u admin -p Example output Enter password: Welcome to the MariaDB monitor. Commands end with ; or \g. MySQL [(none)]> 18.5. Configuring ingress cluster traffic on AWS using a Network Load Balancer OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses a Network Load Balancer (NLB), which forwards the client's IP address to the node. You can configure an NLB on a new or existing AWS cluster. 18.5.1. Replacing Ingress Controller Classic Load Balancer with Network Load Balancer You can replace an Ingress Controller that is using a Classic Load Balancer (CLB) with one that uses a Network Load Balancer (NLB) on AWS. Warning This procedure causes an expected outage that can last several minutes while new DNS records propagate, new load balancers are provisioned, and so on. IP addresses and canonical names of the Ingress Controller load balancer might change after applying this procedure. Procedure Create a file with a new default Ingress Controller. The following example assumes that your default Ingress Controller has an External scope and no other customizations: Example ingresscontroller.yml file apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService If your default Ingress Controller has other customizations, ensure that you modify the file accordingly. Force replace the Ingress Controller YAML file: USD oc replace --force --wait -f ingresscontroller.yml Wait until the Ingress Controller is replaced. Expect several minutes of outage. 18.5.2. Configuring an Ingress Controller Network Load Balancer on an existing AWS cluster You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on an existing cluster. Prerequisites You must have an installed AWS cluster. PlatformStatus of the infrastructure resource must be AWS. To verify that the PlatformStatus is AWS, run: USD oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}' AWS Procedure Create an Ingress Controller backed by an AWS NLB on an existing cluster.
Create the Ingress Controller manifest: USD cat ingresscontroller-aws-nlb.yaml Example output apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: USDmy_ingress_controller 1 namespace: openshift-ingress-operator spec: domain: USDmy_unique_ingress_domain 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: AWS aws: type: NLB 1 Replace USDmy_ingress_controller with a unique name for the Ingress Controller. 2 Replace USDmy_unique_ingress_domain with a domain name that is unique among all Ingress Controllers in the cluster. This variable must be a subdomain of the DNS name <clustername>.<domain> . 3 You can replace External with Internal to use an internal NLB. Create the resource in the cluster: USD oc create -f ingresscontroller-aws-nlb.yaml Important Before you can configure an Ingress Controller NLB on a new AWS cluster, you must complete the Creating the installation configuration file procedure. 18.5.3. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Create an Ingress Controller backed by an AWS NLB on a new cluster. Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor. Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file. The installation program deletes the manifests/ directory when creating the cluster. 18.5.4. Additional resources Installing a cluster on AWS with network customizations . For more information, see Network Load Balancer support on AWS . 18.6. Configuring ingress cluster traffic for a service external IP You can attach an external IP address to a service so that it is available to traffic outside the cluster. This is generally useful only for a cluster installed on bare metal hardware. The external network infrastructure must be configured correctly to route traffic to the service. 18.6.1. Prerequisites Your cluster is configured with ExternalIPs enabled. 
For more information, read Configuring ExternalIPs for services . Note Do not use the same ExternalIP for the egress IP. 18.6.2. Attaching an ExternalIP to a service You can attach an ExternalIP to a service. If your cluster is configured to allocate an ExternalIP automatically, you might not need to manually attach an ExternalIP to the service. Procedure Optional: To confirm what IP address ranges are configured for use with ExternalIP, enter the following command: USD oc get networks.config cluster -o jsonpath='{.spec.externalIP}{"\n"}' If autoAssignCIDRs is set, OpenShift Container Platform automatically assigns an ExternalIP to a new Service object if the spec.externalIPs field is not specified. Attach an ExternalIP to the service. If you are creating a new service, specify the spec.externalIPs field and provide an array of one or more valid IP addresses. For example: apiVersion: v1 kind: Service metadata: name: svc-with-externalip spec: ... externalIPs: - 192.174.120.10 If you are attaching an ExternalIP to an existing service, enter the following command. Replace <name> with the service name. Replace <ip_address> with a valid ExternalIP address. You can provide multiple IP addresses separated by commas. USD oc patch svc <name> -p \ '{ "spec": { "externalIPs": [ "<ip_address>" ] } }' For example: USD oc patch svc mysql-55-rhel7 -p '{"spec":{"externalIPs":["192.174.120.10"]}}' Example output "mysql-55-rhel7" patched To confirm that an ExternalIP address is attached to the service, enter the following command. If you specified an ExternalIP for a new service, you must create the service first. USD oc get svc Example output NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql-55-rhel7 172.30.131.89 192.174.120.10 3306/TCP 13m 18.6.3. Additional resources Configuring ExternalIPs for services 18.7. Configuring ingress cluster traffic using a NodePort OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses a NodePort . 18.7.1. Using a NodePort to get traffic into the cluster Use a NodePort -type Service resource to expose a service on a specific port on all nodes in the cluster. The port is specified in the Service resource's .spec.ports[*].nodePort field. Important Using a node port requires additional port resources. A NodePort exposes the service on a static port on the node's IP address. NodePort s are in the 30000 to 32767 range by default, which means a NodePort is unlikely to match a service's intended port. For example, port 8080 may be exposed as port 31020 on the node. The administrator must ensure the external IP addresses are routed to the nodes. NodePort s and external IPs are independent and both can be used concurrently. Note The procedures in this section require prerequisites performed by the cluster administrator. 18.7.2. Prerequisites Before starting the following procedures, the administrator must: Set up the external port to the cluster networking environment so that requests can reach the cluster. Make sure there is at least one user with cluster admin role. To add this role to a user, run the following command: Have an OpenShift Container Platform cluster with at least one master and at least one node and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out-of-scope for this topic. 18.7.3. 
Creating a project and service If the project and service that you want to expose do not exist, first create the project, then the service. If the project and service already exist, skip to the procedure on exposing the service to create a route. Prerequisites Install the oc CLI and log in as a cluster administrator. Procedure Create a new project for your service by running the oc new-project command: USD oc new-project myproject Use the oc new-app command to create your service: USD oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git To verify that the service was created, run the following command: USD oc get svc -n myproject Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s By default, the new service does not have an external IP address. 18.7.4. Exposing the service by creating a route You can expose the service as a route by using the oc expose command. Procedure To expose the service: Log in to OpenShift Container Platform. Log in to the project where the service you want to expose is located: USD oc project myproject To expose a node port for the application, enter the following command. OpenShift Container Platform automatically selects an available port in the 30000-32767 range. USD oc expose service nodejs-ex --type=NodePort --name=nodejs-ex-nodeport --generator="service/v2" Example output service/nodejs-ex-nodeport exposed Optional: To confirm the service is available with a node port exposed, enter the following command: USD oc get svc -n myproject Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.217.127 <none> 3306/TCP 9m44s nodejs-ex-ingress NodePort 172.30.107.72 <none> 3306:31345/TCP 39s Optional: To remove the service created automatically by the oc new-app command, enter the following command: USD oc delete svc nodejs-ex 18.7.5. Additional resources Configuring the node port service range
[ "apiVersion: v1 kind: Service metadata: name: http-service spec: clusterIP: 172.30.163.110 externalIPs: - 192.168.132.253 externalTrafficPolicy: Cluster ports: - name: highport nodePort: 31903 port: 30102 protocol: TCP targetPort: 30102 selector: app: web sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 192.168.132.253", "{ \"policy\": { \"allowedCIDRs\": [], \"rejectedCIDRs\": [] } }", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: {}", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 172.16.66.10/23 rejectedCIDRs: - 172.16.66.10/24", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: null", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: [] 1 policy: 2", "policy: allowedCIDRs: [] 1 rejectedCIDRs: [] 2", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: - 192.168.132.254/29", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 192.168.132.0/29 - 192.168.132.8/29 rejectedCIDRs: - 192.168.132.7/32", "oc describe networks.config cluster", "oc edit networks.config cluster", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: 1", "oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{\"\\n\"}}'", "oc adm policy add-cluster-role-to-user cluster-admin username", "oc new-project myproject", "oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git", "oc get svc -n myproject", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s", "oc project myproject", "oc expose service nodejs-ex", "route.route.openshift.io/nodejs-ex exposed", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None", "curl --head nodejs-ex-myproject.example.com", "HTTP/1.1 200 OK", "cat router-internal.yaml apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" routeSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "oc apply -f router-internal.yaml", "cat router-internal.yaml", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" namespaceSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "oc apply -f router-internal.yaml", "oc adm policy add-cluster-role-to-user cluster-admin username", "oc new-project myproject", "oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git", "oc get svc -n myproject", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s", "oc project myproject", "oc expose service nodejs-ex", "route.route.openshift.io/nodejs-ex exposed", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex 
nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None", "curl --head nodejs-ex-myproject.example.com", "HTTP/1.1 200 OK", "oc project project1", "apiVersion: v1 kind: Service metadata: name: egress-2 1 spec: ports: - name: db port: 3306 2 loadBalancerIP: loadBalancerSourceRanges: 3 - 10.0.0.0/8 - 192.168.0.0/16 type: LoadBalancer 4 selector: name: mysql 5", "oc create -f <file-name>", "oc create -f mysql-lb.yaml", "oc get svc", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m", "curl <public-ip>:<port>", "curl 172.29.121.74:3306", "mysql -h 172.30.131.89 -u admin -p", "Enter password: Welcome to the MariaDB monitor. Commands end with ; or \\g. MySQL [(none)]>", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService", "oc replace --force --wait -f ingresscontroller.yml", "oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}' AWS", "cat ingresscontroller-aws-nlb.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: USDmy_ingress_controller 1 namespace: openshift-ingress-operator spec: domain: USDmy_unique_ingress_domain 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: AWS aws: type: NLB", "oc create -f ingresscontroller-aws-nlb.yaml", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService", "oc get networks.config cluster -o jsonpath='{.spec.externalIP}{\"\\n\"}'", "apiVersion: v1 kind: Service metadata: name: svc-with-externalip spec: externalIPs: - 192.174.120.10", "oc patch svc <name> -p '{ \"spec\": { \"externalIPs\": [ \"<ip_address>\" ] } }'", "oc patch svc mysql-55-rhel7 -p '{\"spec\":{\"externalIPs\":[\"192.174.120.10\"]}}'", "\"mysql-55-rhel7\" patched", "oc get svc", "NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql-55-rhel7 172.30.131.89 192.174.120.10 3306/TCP 13m", "oc adm policy add-cluster-role-to-user cluster-admin <user_name>", "oc new-project myproject", "oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git", "oc get svc -n myproject", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s", "oc project myproject", "oc expose service nodejs-ex --type=NodePort --name=nodejs-ex-nodeport --generator=\"service/v2\"", "service/nodejs-ex-nodeport exposed", "oc get svc -n myproject", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.217.127 <none> 3306/TCP 9m44s nodejs-ex-ingress NodePort 172.30.107.72 <none> 3306:31345/TCP 39s", "oc delete svc nodejs-ex" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/networking/configuring-ingress-cluster-traffic
1.4. Instance Types
1.4. Instance Types Instance types can be used to define the hardware configuration of a virtual machine. Selecting an instance type when creating or editing a virtual machine will automatically fill in the hardware configuration fields. This allows users to create multiple virtual machines with the same hardware configuration without having to manually fill in every field. A set of predefined instance types is available by default, as outlined in the following table: Table 1.13. Predefined Instance Types Name Memory vCPUs Tiny 512 MB 1 Small 2 GB 1 Medium 4 GB 2 Large 8 GB 2 XLarge 16 GB 4 Administrators can also create, edit, and remove instance types from the Instance Types tab of the Configure window. Fields in the New Virtual Machine and Edit Virtual Machine windows that are bound to an instance type display a chain link icon next to them. If the value of one of these fields is changed, the virtual machine will be detached from the instance type, changing to Custom , and the chain icon will appear broken. However, if the value is changed back, the chain will relink and the instance type will move back to the selected one. 1.4.1. Creating Instance Types Administrators can create new instance types, which can then be selected by users when creating or editing virtual machines. Creating an Instance Type Click Administration Configure . Click the Instance Types tab. Click New . Enter a Name and Description for the instance type. Click Show Advanced Options and configure the instance type's settings as required. The settings that appear in the New Instance Type window are identical to those in the New Virtual Machine window, but with the relevant fields only. See Explanation of Settings in the New Virtual Machine and Edit Virtual Machine Windows in the Virtual Machine Management Guide . Click OK . The new instance type will appear in the Instance Types tab in the Configure window, and can be selected from the Instance Type drop-down list when creating or editing a virtual machine. 1.4.2. Editing Instance Types Administrators can edit existing instance types from the Configure window. Editing Instance Type Properties Click Administration Configure . Click the Instance Types tab. Select the instance type to be edited. Click Edit . Change the settings as required. Click OK . The configuration of the instance type is updated. When a new virtual machine based on this instance type is created, or when an existing virtual machine based on this instance type is updated, the new configuration is applied. Existing virtual machines based on this instance type will display the fields, marked with a chain icon, that will be updated. If the existing virtual machines were running when the instance type was changed, the orange Pending Changes icon will appear beside them and the fields with the chain icon will be updated when the virtual machines are next restarted. 1.4.3. Removing Instance Types Removing an Instance Type Click Administration Configure . Click the Instance Types tab. Select the instance type to be removed. Click Remove . If any virtual machines are based on the instance type to be removed, a warning window listing the attached virtual machines will appear. To continue removing the instance type, select the Approve Operation check box. Otherwise click Cancel . Click OK . The instance type is removed from the Instance Types list and can no longer be used when creating a new virtual machine. Any virtual machines that were attached to the removed instance type will now be attached to Custom (no instance type).
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-instance_types
Chapter 61. JAX-RS 2.0 Filters and Interceptors
Chapter 61. JAX-RS 2.0 Filters and Interceptors Abstract JAX-RS 2.0 defines standard APIs and semantics for installing filters and interceptors in the processing pipeline for REST invocations. Filters and interceptors are typically used to provide such capabilities as logging, authentication, authorization, message compression, message encryption, and so on. 61.1. Introduction to JAX-RS Filters and Interceptors Overview This section provides an overview of the processing pipeline for JAX-RS filters and interceptors, highlighting the extension points where it is possible to install a filter chain or an interceptor chain. Filters A JAX-RS 2.0 filter is a type of plug-in that gives a developer access to all of the JAX-RS messages passing through a CXF client or server. A filter is suitable for processing the metadata associated with a message: HTTP headers, query parameters, media type, and other metadata. Filters have the capability to abort a message invocation (useful for security plug-ins, for example). If you like, you can install multiple filters at each extension point, in which case the filters are executed in a chain (the order of execution is undefined, however, unless you specify a priority value for each installed filter). Interceptors A JAX-RS 2.0 interceptor is a type of plug-in that gives a developer access to a message body as it is being read or written. Interceptors are wrapped around either the MessageBodyReader.readFrom method invocation (for reader interceptors) or the MessageBodyWriter.writeTo method invocation (for writer interceptors). If you like, you can install multiple interceptors at each extension point, in which case the interceptors are executed in a chain (the order of execution is undefined, however, unless you specify a priority value for each installed interceptor). Server processing pipeline Figure 61.1, "Server-Side Filter and Interceptor Extension Points" shows an outline of the processing pipeline for JAX-RS filters and interceptors installed on the server side. Figure 61.1. Server-Side Filter and Interceptor Extension Points Server extension points In the server processing pipeline, you can add a filter (or interceptor) at any of the following extension points: PreMatchContainerRequest filter ContainerRequest filter ReadInterceptor ContainerResponse filter WriteInterceptor Note that the PreMatchContainerRequest extension point is reached before resource matching has occurred, so some of the context metadata will not be available at this point. Client processing pipeline Figure 61.2, "Client-Side Filter and Interceptor Extension Points" shows an outline of the processing pipeline for JAX-RS filters and interceptors installed on the client side. Figure 61.2. Client-Side Filter and Interceptor Extension Points Client extension points In the client processing pipeline, you can add a filter (or interceptor) at any of the following extension points: ClientRequest filter WriteInterceptor ClientResponse filter ReadInterceptor Filter and interceptor order If you install multiple filters or interceptors at the same extension point, the execution order of the filters depends on the priority assigned to them (using the @Priority annotation in the Java source). A priority is represented as an integer value. In general, a filter with a higher priority number is placed closer to the resource method invocation on the server side; while a filter with a lower priority number is placed closer to the client invocation. 
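Because the code listings from the original chapter are not reproduced on this page, the following minimal sketch illustrates the priority semantics just described. It is not one of the chapter's own examples: the class names, the header values, and the priority numbers are assumptions chosen purely to show the ordering, namely that on the request path the filter with the lower @Priority value runs first (closer to the client) and the filter with the higher value runs later (closer to the resource method).

```java
import javax.annotation.Priority;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.ext.Provider;

// Two hypothetical request filters that differ only in their @Priority value.
// On the request (inbound) path, EarlyRequestFilter (priority 10) runs before
// LateRequestFilter (priority 20), because request filters execute in
// ascending priority order.
@Provider
@Priority(10)
class EarlyRequestFilter implements ContainerRequestFilter {
    @Override
    public void filter(ContainerRequestContext requestContext) {
        requestContext.getHeaders().add("X-Filter-Order", "early");
    }
}

@Provider
@Priority(20)
class LateRequestFilter implements ContainerRequestFilter {
    @Override
    public void filter(ContainerRequestContext requestContext) {
        requestContext.getHeaders().add("X-Filter-Order", "late");
    }
}
```

Response filters registered with the same priority values would run in the opposite, descending order.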
In other words, the filters and interceptors acting on a request message are executed in ascending order of priority number; while the filters and interceptors acting on a response message are executed in descending order of priority number. Filter classes The following Java interfaces can be implemented in order to create custom REST message filters: javax.ws.rs.container.ContainerRequestFilter javax.ws.rs.container.ContainerResponseFilter javax.ws.rs.client.ClientRequestFilter javax.ws.rs.client.ClientResponseFilter Interceptor classes The following Java interfaces can be implemented in order to create custom REST message interceptors: javax.ws.rs.ext.ReaderInterceptor javax.ws.rs.ext.WriterInterceptor 61.2. Container Request Filter Overview This section explains how to implement and register a container request filter , which is used to intercept an incoming request message on the server (container) side. Container request filters are often used to process headers on the server side and can be used for any kind of generic request processing (that is, processing that is independent of the particular resource method called). Moreover, the container request filter is something of a special case, because it can be installed at two distinct extension points: PreMatchContainerRequest (before the resource matching step); and ContainerRequest (after the resource matching step). ContainerRequestFilter interface The javax.ws.rs.container.ContainerRequestFilter interface is defined as follows: By implementing the ContainerRequestFilter interface, you can create a filter for either of the following extension points on the server side: PreMatchContainerRequest ContainerRequest ContainerRequestContext interface The filter method of ContainerRequestFilter receives a single argument of type javax.ws.rs.container.ContainerRequestContext , which can be used to access the incoming request message and its related metadata. The ContainerRequestContext interface is defined as follows: Sample implementation for PreMatchContainerRequest filter To implement a container request filter for the PreMatchContainerRequest extension point (that is, where the filter is executed prior to resource matching), define a class that implements the ContainerRequestFilter interface, making sure to annotate the class with the @PreMatching annotation (to select the PreMatchContainerRequest extension point). For example, the following code shows an example of a simple container request filter that gets installed in the PreMatchContainerRequest extension point, with a priority of 20: Sample implementation for ContainerRequest filter To implement a container request filter for the ContainerRequest extension point (that is, where the filter is executed after resource matching), define a class that implements the ContainerRequestFilter interface, without the @PreMatching annotation. For example, the following code shows an example of a simple container request filter that gets installed in the ContainerRequest extension point, with a priority of 30: Injecting ResourceInfo At the ContainerRequest extension point (that is, after resource matching has occurred), it is possible to access the matched resource class and resource method by injecting the ResourceInfo class. For example, the following code shows how to inject the ResourceInfo class as a field of the ContainerRequestFilter class: Aborting the invocation It is possible to abort a server-side invocation by creating a suitable implementation of a container request filter. 
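The interface definitions and sample listings referenced in this section are not included on this page. As a rough sketch only, a pre-matching container request filter that can abort the invocation might look like the following; the X-Api-Key header, the class name, and the error message are hypothetical and are not taken from the original example, while the @PreMatching annotation, the @Priority(20) value, and the abortWith call follow the pattern this section describes.

```java
import java.io.IOException;

import javax.annotation.Priority;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

// Sketch of a pre-matching container request filter: it runs before resource
// matching and aborts the invocation when the (hypothetical) X-Api-Key header
// is missing.
@PreMatching
@Priority(20)
@Provider
public class ApiKeyRequestFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        String apiKey = requestContext.getHeaderString("X-Api-Key");
        if (apiKey == null || apiKey.isEmpty()) {
            // Abort the invocation; this response is returned to the client.
            requestContext.abortWith(
                Response.status(Response.Status.UNAUTHORIZED)
                        .entity("Missing X-Api-Key header")
                        .build());
        }
    }
}
```

Because the class is annotated with @PreMatching, the filter runs before resource matching, so resource-related metadata such as ResourceInfo is not yet available inside it.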
Typically, this is useful for implementing security features on the server side: for example, to implement an authentication feature or an authorization feature. If an incoming request fails to authenticate successfully, you could abort the invocation from within the container request filter. For example, the following pre-matching feature attempts to extract a username and password from the URI's query parameters and calls an authenticate method to check the username and password credentials. If the authentication fails, the invocation is aborted by calling abortWith on the ContainerRequestContext object, passing the error response that is to be returned to the client. Binding the server request filter To bind a server request filter (that is, to install it into the Apache CXF runtime), perform the following steps: Add the @Provider annotation to the container request filter class, as shown in the following code fragment: When the container request filter implementation is loaded into the Apache CXF runtime, the REST implementation automatically scans the loaded classes to search for the classes marked with the @Provider annotation (the scanning phase ). When defining a JAX-RS server endpoint in XML (for example, see Section 18.1, "Configuring JAX-RS Server Endpoints" ), add the server request filter to the list of providers in the jaxrs:providers element. Note This step is a non-standard requirement of Apache CXF. Strictly speaking, according to the JAX-RS standard, the @Provider annotation should be all that is required to bind the filter. But in practice, the standard approach is somewhat inflexible and can lead to clashing providers when many libraries are included in a large project. 61.3. Container Response Filter Overview This section explains how to implement and register a container response filter , which is used to intercept an outgoing response message on the server side. Container response filters can be used to populate headers automatically in a response message and, in general, can be used for any kind of generic response processing. ContainerResponseFilter interface The javax.ws.rs.container.ContainerResponseFilter interface is defined as follows: By implementing the ContainerResponseFilter , you can create a filter for the ContainerResponse extension point on the server side, which filters the response message after the invocation has executed. Note The container response filter gives you access both to the request message (through the requestContext argument) and the response message (through the responseContext message), but only the response can be modified at this stage. ContainerResponseContext interface The filter method of ContainerResponseFilter receives two arguments: an argument of type javax.ws.rs.container.ContainerRequestContext (see the section called "ContainerRequestContext interface" ); and an argument of type javax.ws.rs.container.ContainerResponseContext , which can be used to access the outgoing response message and its related metadata. The ContainerResponseContext interface is defined as follows: Sample implementation To implement a container response filter for the ContainerResponse extension point (that is, where the filter is executed after the invocation has been executed on the server side), define a class that implements the ContainerResponseFilter interface. 
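The listing referenced in the next sentence is not reproduced on this page. As a minimal sketch under stated assumptions, a container response filter registered at priority 10 could look like the following; the class name and the X-Response-Filtered header are illustrative only.

```java
import java.io.IOException;

import javax.annotation.Priority;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;

// Sketch of a container response filter that stamps every outgoing response
// with a (hypothetical) X-Response-Filtered header.
@Provider
@Priority(10)
public class HeaderStampingResponseFilter implements ContainerResponseFilter {

    @Override
    public void filter(ContainerRequestContext requestContext,
                       ContainerResponseContext responseContext) throws IOException {
        // Only the response can be modified at this stage; the request
        // context is available for read-only inspection.
        responseContext.getHeaders().add("X-Response-Filtered", "true");
    }
}
```

As noted above, the filter receives both contexts, but only the response context is modified here.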
For example, the following code shows an example of a simple container response filter that gets installed in the ContainerResponse extension point, with a priority of 10: Binding the server response filter To bind a server response filter (that is, to install it into the Apache CXF runtime), perform the following steps: Add the @Provider annotation to the container response filter class, as shown in the following code fragment: When the container response filter implementation is loaded into the Apache CXF runtime, the REST implementation automatically scans the loaded classes to search for the classes marked with the @Provider annotation (the scanning phase ). When defining a JAX-RS server endpoint in XML (for example, see Section 18.1, "Configuring JAX-RS Server Endpoints" ), add the server response filter to the list of providers in the jaxrs:providers element. Note This step is a non-standard requirement of Apache CXF. Strictly speaking, according to the JAX-RS standard, the @Provider annotation should be all that is required to bind the filter. But in practice, the standard approach is somewhat inflexible and can lead to clashing providers when many libraries are included in a large project. 61.4. Client Request Filter Overview This section explains how to implement and register a client request filter , which is used to intercept an outgoing request message on the client side. Client request filters are often used to process headers and can be used for any kind of generic request processing. ClientRequestFilter interface The javax.ws.rs.client.ClientRequestFilter interface is defined as follows: By implementing the ClientRequestFilter , you can create a filter for the ClientRequest extension point on the client side, which filters the request message before sending the message to the server. ClientRequestContext interface The filter method of ClientRequestFilter receives a single argument of type javax.ws.rs.client.ClientRequestContext , which can be used to access the outgoing request message and its related metadata. The ClientRequestContext interface is defined as follows: Sample implementation To implement a client request filter for the ClientRequest extension point (that is, where the filter is executed prior to sending the request message), define a class that implements the ClientRequestFilter interface. For example, the following code shows an example of a simple client request filter that gets installed in the ClientRequest extension point, with a priority of 20: Aborting the invocation It is possible to abort a client-side invocation by implementing a suitable client request filter. For example, you might implement a client-side filter to check whether a request is correctly formatted and, if necessary, abort the request. The following test code always aborts the request, returning the BAD_REQUEST HTTP status to the client calling code: Registering the client request filter Using the JAX-RS 2.0 client API, you can register a client request filter directly on a javax.ws.rs.client.Client object or on a javax.ws.rs.client.WebTarget object. Effectively, this means that the client request filter can optionally be applied to different scopes, so that only certain URI paths are affected by the filter. 
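The registration listing referenced in the next sentence is not reproduced on this page. The following sketch shows the general pattern of registering a client request filter at two different scopes; the filter class, the base URI, and the header it adds are placeholders rather than the classes from the original example.

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientRequestFilter;
import javax.ws.rs.client.WebTarget;

// Minimal client request filter used only for this registration sketch.
class HeaderAddingClientRequestFilter implements ClientRequestFilter {
    @Override
    public void filter(ClientRequestContext requestContext) {
        requestContext.getHeaders().add("X-Client-Filtered", "true");
    }
}

public class ClientFilterRegistrationDemo {
    public static void main(String[] args) {
        // Registered on the Client: applies to every invocation made with it.
        Client client = ClientBuilder.newClient()
                .register(HeaderAddingClientRequestFilter.class);

        // Registered on a WebTarget: applies only to requests made through
        // this target (the base URI is a placeholder).
        WebTarget target = client
                .target("http://localhost:8080/rest/TestAbortClientRequest")
                .register(new HeaderAddingClientRequestFilter());

        System.out.println("Status: " + target.request().get().getStatus());
        client.close();
    }
}
```

A filter registered on the Client is inherited by every WebTarget created from it, while a filter registered on a WebTarget affects only that target and targets derived from it.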
For example, the following code shows how to register the SampleClientRequestFilter filter so that it applies to all invocations made using the client object; and how to register the TestAbortClientRequestFilter filter, so that it applies only to sub-paths of rest/TestAbortClientRequest . 61.5. Client Response Filter Overview This section explains how to implement and register a client response filter , which is used to intercept an incoming response message on the client side. Client response filters can be used for any kind of generic response processing on the client side. ClientResponseFilter interface The javax.ws.rs.client.ClientResponseFilter interface is defined as follows: By implementing the ClientResponseFilter , you can create a filter for the ClientResponse extension point on the client side, which filters the response message after it is received from the server. ClientResponseContext interface The filter method of ClientResponseFilter receives two arguments: an argument of type javax.ws.rs.client.ClientRequestContext (see the section called "ClientRequestContext interface" ); and an argument of type javax.ws.rs.client.ClientResponseContext , which can be used to access the incoming response message and its related metadata. The ClientResponseContext interface is defined as follows: Sample implementation To implement a client response filter for the ClientResponse extension point (that is, where the filter is executed after receiving a response message from the server), define a class that implements the ClientResponseFilter interface. For example, the following code shows an example of a simple client response filter that gets installed in the ClientResponse extension point, with a priority of 20: Registering the client response filter Using the JAX-RS 2.0 client API, you can register a client response filter directly on a javax.ws.rs.client.Client object or on a javax.ws.rs.client.WebTarget object. Effectively, this means that the client response filter can optionally be applied to different scopes, so that only certain URI paths are affected by the filter. For example, the following code shows how to register the SampleClientResponseFilter filter so that it applies to all invocations made using the client object: 61.6. Entity Reader Interceptor Overview This section explains how to implement and register an entity reader interceptor , which enables you to intercept the input stream when reading a message body either on the client side or on the server side. This is typically useful for generic transformations of the request body, such as encryption and decryption, or compressing and decompressing. ReaderInterceptor interface The javax.ws.rs.ext.ReaderInterceptor interface is defined as follows: By implementing the ReaderInterceptor interface, you can intercept the message body ( Entity object) as it is being read either on the server side or the client side. You can use an entity reader interceptor in either of the following contexts: Server side: if bound as a server-side interceptor, the entity reader interceptor intercepts the request message body when it is accessed by the application code (in the matched resource). Depending on the semantics of the REST request, the message body might not be accessed by the matched resource, in which case the reader interceptor is not called. Client side: if bound as a client-side interceptor, the entity reader interceptor intercepts the response message body when it is accessed by the client code.
If the client code does not explicitly access the response message (for example, by calling the Response.getEntity method), the reader interceptor is not called. ReaderInterceptorContext interface The aroundReadFrom method of ReaderInterceptor receives one argument of type javax.ws.rs.ext.ReaderInterceptorContext , which can be used to access both the message body ( Entity object) and message metadata. The ReaderInterceptorContext interface is defined as follows: InterceptorContext interface The ReaderInterceptorContext interface also supports the methods inherited from the base InterceptorContext interface. The InterceptorContext interface is defined as follows: Sample implementation on the client side To implement an entity reader interceptor for the client side, define a class that implements the ReaderInterceptor interface. For example, the following code shows an example of an entity reader interceptor for the client side (with a priority of 10), which replaces all instances of COMPANY_NAME by Red Hat in the message body of the incoming response: Sample implementation on the server side To implement an entity reader interceptor for the server side, define a class that implements the ReaderInterceptor interface and annotate it with the @Provider annotation. For example, the following code shows an example of an entity reader interceptor for the server side (with a priority of 10), which replaces all instances of COMPANY_NAME by Red Hat in the message body of the incoming request: Binding a reader interceptor on the client side Using the JAX-RS 2.0 client API, you can register an entity reader interceptor directly on a javax.ws.rs.client.Client object or on a javax.ws.rs.client.WebTarget object. Effectively, this means that the reader interceptor can optionally be applied to different scopes, so that only certain URI paths are affected by the interceptor. For example, the following code shows how to register the SampleClientReaderInterceptor interceptor so that it applies to all invocations made using the client object: For more details about registering interceptors with a JAX-RS 2.0 client, see Section 49.5, "Configuring the Client Endpoint" . Binding a reader interceptor on the server side To bind a reader interceptor on the server side (that is, to install it into the Apache CXF runtime), perform the following steps: Add the @Provider annotation to the reader interceptor class, as shown in the following code fragment: When the reader interceptor implementation is loaded into the Apache CXF runtime, the REST implementation automatically scans the loaded classes to search for the classes marked with the @Provider annotation (the scanning phase ). When defining a JAX-RS server endpoint in XML (for example, see Section 18.1, "Configuring JAX-RS Server Endpoints" ), add the reader interceptor to the list of providers in the jaxrs:providers element. Note This step is a non-standard requirement of Apache CXF. Strictly speaking, according to the JAX-RS standard, the @Provider annotation should be all that is required to bind the interceptor. But in practice, the standard approach is somewhat inflexible and can lead to clashing providers when many libraries are included in a large project. 61.7. Entity Writer Interceptor Overview This section explains how to implement and register an entity writer interceptor , which enables you to intercept the output stream when writing a message body either on the client side or on the server side. 
This is typically useful for generic transformations of the message body, such as encryption and decryption, or compressing and decompressing. WriterInterceptor interface The javax.ws.rs.ext.WriterInterceptor interface is defined as follows: By implementing the WriterInterceptor interface, you can intercept the message body ( Entity object) as it is being written either on the server side or the client side. You can use an entity writer interceptor in either of the following contexts: Server side - if bound as a server-side interceptor, the entity writer interceptor intercepts the response message body just before it is marshalled and sent back to the client. Client side - if bound as a client-side interceptor, the entity writer interceptor intercepts the request message body just before it is marshalled and sent out to the server. WriterInterceptorContext interface The aroundWriteTo method of WriterInterceptor receives one argument of type javax.ws.rs.ext.WriterInterceptorContext , which can be used to access both the message body ( Entity object) and message metadata. The WriterInterceptorContext interface is defined as follows: InterceptorContext interface The WriterInterceptorContext interface also supports the methods inherited from the base InterceptorContext interface. For the definition of InterceptorContext , see the section called "InterceptorContext interface" . Sample implementation on the client side To implement an entity writer interceptor for the client side, define a class that implements the WriterInterceptor interface. For example, the following code shows an example of an entity writer interceptor for the client side (with a priority of 10), which appends an extra line of text to the message body of the outgoing request: Sample implementation on the server side To implement an entity writer interceptor for the server side, define a class that implements the WriterInterceptor interface and annotate it with the @Provider annotation. For example, the following code shows an example of an entity writer interceptor for the server side (with a priority of 10), which appends an extra line of text to the message body of the outgoing response: Binding a writer interceptor on the client side Using the JAX-RS 2.0 client API, you can register an entity writer interceptor directly on a javax.ws.rs.client.Client object or on a javax.ws.rs.client.WebTarget object. Effectively, this means that the writer interceptor can optionally be applied to different scopes, so that only certain URI paths are affected by the interceptor. For example, the following code shows how to register the SampleClientReaderInterceptor interceptor so that it applies to all invocations made using the client object: For more details about registering interceptors with a JAX-RS 2.0 client, see Section 49.5, "Configuring the Client Endpoint" . Binding a writer interceptor on the server side To bind a writer interceptor on the server side (that is, to install it into the Apache CXF runtime), perform the following steps: Add the @Provider annotation to the writer interceptor class, as shown in the following code fragment: When the writer interceptor implementation is loaded into the Apache CXF runtime, the REST implementation automatically scans the loaded classes to search for the classes marked with the @Provider annotation (the scanning phase ). 
When defining a JAX-RS server endpoint in XML (for example, see Section 18.1, "Configuring JAX-RS Server Endpoints" ), add the writer interceptor to the list of providers in the jaxrs:providers element. Note This step is a non-standard requirement of Apache CXF. Strictly speaking, according to the JAX-RS standard, the @Provider annotation should be all that is required to bind the interceptor. But in practice, the standard approach is somewhat inflexible and can lead to clashing providers when many libraries are included in a large project. 61.8. Dynamic Binding Overview The standard approach to binding container filters and container interceptors to resources is to annotate the filters and interceptors with the @Provider annotation. This ensures that the binding is global : that is, the filters and interceptors are bound to every resource class and resource method on the server side. Dynamic binding is an alternative approach to binding on the server side, which enables you to pick and choose which resource methods your interceptors and filters are applied to. To enable dynamic binding for your filters and interceptors, you must provide a custom implementation of the DynamicFeature interface, as described here. DynamicFeature interface The DynamicFeature interface is defined in the javax.ws.rs.container package, as follows: Implementing a dynamic feature You implement a dynamic feature, as follows: Implement one or more container filters or container interceptors, as described previously. But do not annotate them with the @Provider annotation (otherwise, they would be bound globally, making the dynamic feature effectively irrelevant). Create your own dynamic feature by implementing the DynamicFeature interface, overriding the configure method. In the configure method, you can use the resourceInfo argument to discover which resource class and which resource method this feature is being called for. You can use this information as the basis for deciding whether or not to register some of the filters or interceptors. If you decide to register a filter or an interceptor with the current resource method, you can do so by invoking one of the context.register methods. Remember to annotate your dynamic feature class with the @Provider annotation, to ensure that it gets picked up during the scanning phase of deployment. Example dynamic feature The following example shows you how to define a dynamic feature that registers the LoggingFilter filter for any method of the MyResource class (or subclass) that is annotated with @GET : Dynamic binding process The JAX-RS standard requires that the DynamicFeature.configure method is called exactly once for each resource method . This means that every resource method could potentially have filters or interceptors installed by the dynamic feature, but it is up to the dynamic feature to decide whether to register the filters or interceptors in each case. In other words, the granularity of binding supported by the dynamic feature is at the level of individual resource methods. FeatureContext interface The FeatureContext interface (which enables you to register filters and interceptors in the configure method) is defined as a sub-interface of Configurable<> , as follows: The Configurable<> interface defines a variety of methods for registering filters and interceptors on a single resource method, as follows:
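To illustrate how a configure method might call one of these register variants, the following sketch binds a simple logging filter, with an explicit priority, to resource methods annotated with @GET . The sketch is illustrative only; the class names, the priority value 100, and the @GET check are arbitrary choices:

// Java
import java.io.IOException;
import javax.ws.rs.GET;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.DynamicFeature;
import javax.ws.rs.container.ResourceInfo;
import javax.ws.rs.core.FeatureContext;
import javax.ws.rs.ext.Provider;

@Provider
public class PriorityLoggingFilterFeature implements DynamicFeature {

    // Minimal filter used for illustration; it is deliberately not annotated
    // with @Provider, so it is bound only where this dynamic feature registers it
    public static class LoggingFilter implements ContainerRequestFilter {
        @Override
        public void filter(ContainerRequestContext requestContext) throws IOException {
            System.out.println("LoggingFilter invoked for " + requestContext.getUriInfo().getPath());
        }
    }

    @Override
    public void configure(ResourceInfo resourceInfo, FeatureContext context) {
        // Bind the filter only to resource methods annotated with @GET
        if (resourceInfo.getResourceMethod().isAnnotationPresent(GET.class)) {
            // register(Object component, int priority) binds this filter instance
            // to the current resource method with an explicit priority
            context.register(new LoggingFilter(), 100);
        }
    }
}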
[ "// Java package javax.ws.rs.container; import java.io.IOException; public interface ContainerRequestFilter { public void filter(ContainerRequestContext requestContext) throws IOException; }", "// Java package javax.ws.rs.container; import java.io.InputStream; import java.net.URI; import java.util.Collection; import java.util.Date; import java.util.List; import java.util.Locale; import java.util.Map; import javax.ws.rs.core.Cookie; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import javax.ws.rs.core.Request; import javax.ws.rs.core.Response; import javax.ws.rs.core.SecurityContext; import javax.ws.rs.core.UriInfo; public interface ContainerRequestContext { public Object getProperty(String name); public Collection getPropertyNames(); public void setProperty(String name, Object object); public void removeProperty(String name); public UriInfo getUriInfo(); public void setRequestUri(URI requestUri); public void setRequestUri(URI baseUri, URI requestUri); public Request getRequest(); public String getMethod(); public void setMethod(String method); public MultivaluedMap getHeaders(); public String getHeaderString(String name); public Date getDate(); public Locale getLanguage(); public int getLength(); public MediaType getMediaType(); public List getAcceptableMediaTypes(); public List getAcceptableLanguages(); public Map getCookies(); public boolean hasEntity(); public InputStream getEntityStream(); public void setEntityStream(InputStream input); public SecurityContext getSecurityContext(); public void setSecurityContext(SecurityContext context); public void abortWith(Response response); }", "// Java package org.jboss.fuse.example; import javax.ws.rs.container.ContainerRequestContext; import javax.ws.rs.container.ContainerRequestFilter; import javax.ws.rs.container.PreMatching; import javax.annotation.Priority; import javax.ws.rs.ext.Provider; @PreMatching @Priority(value = 20) @Provider public class SamplePreMatchContainerRequestFilter implements ContainerRequestFilter { public SamplePreMatchContainerRequestFilter() { System.out.println(\"SamplePreMatchContainerRequestFilter starting up\"); } @Override public void filter(ContainerRequestContext requestContext) { System.out.println(\"SamplePreMatchContainerRequestFilter.filter() invoked\"); } }", "// Java package org.jboss.fuse.example; import javax.ws.rs.container.ContainerRequestContext; import javax.ws.rs.container.ContainerRequestFilter; import javax.ws.rs.ext.Provider; import javax.annotation.Priority; @Provider @Priority(value = 30) public class SampleContainerRequestFilter implements ContainerRequestFilter { public SampleContainerRequestFilter() { System.out.println(\"SampleContainerRequestFilter starting up\"); } @Override public void filter(ContainerRequestContext requestContext) { System.out.println(\"SampleContainerRequestFilter.filter() invoked\"); } }", "// Java package org.jboss.fuse.example; import javax.ws.rs.container.ContainerRequestContext; import javax.ws.rs.container.ContainerRequestFilter; import javax.ws.rs.container.ResourceInfo; import javax.ws.rs.ext.Provider; import javax.annotation.Priority; import javax.ws.rs.core.Context; @Provider @Priority(value = 30) public class SampleContainerRequestFilter implements ContainerRequestFilter { @Context private ResourceInfo resinfo; public SampleContainerRequestFilter() { } @Override public void filter(ContainerRequestContext requestContext) { String resourceClass = resinfo.getResourceClass().getName(); String methodName = 
resinfo.getResourceMethod().getName(); System.out.println(\"REST invocation bound to resource class: \" + resourceClass); System.out.println(\"REST invocation bound to resource method: \" + methodName); } }", "// Java package org.jboss.fuse.example; import javax.annotation.Priority; import javax.ws.rs.container.ContainerRequestContext; import javax.ws.rs.container.ContainerRequestFilter; import javax.ws.rs.container.PreMatching; import javax.ws.rs.core.Response; import javax.ws.rs.core.Response.ResponseBuilder; import javax.ws.rs.core.Response.Status; import javax.ws.rs.ext.Provider; @PreMatching @Priority(value = 20) @Provider public class SampleAuthenticationRequestFilter implements ContainerRequestFilter { public SampleAuthenticationRequestFilter() { System.out.println(\"SampleAuthenticationRequestFilter starting up\"); } @Override public void filter(ContainerRequestContext requestContext) { ResponseBuilder responseBuilder = null; Response response = null; String userName = requestContext.getUriInfo().getQueryParameters().getFirst(\"UserName\"); String password = requestContext.getUriInfo().getQueryParameters().getFirst(\"Password\"); if (authenticate(userName, password) == false) { responseBuilder = Response.serverError(); response = responseBuilder.status(Status.BAD_REQUEST).build(); requestContext.abortWith(response); } } public boolean authenticate(String userName, String password) { // Perform authentication of 'user' } }", "// Java package org.jboss.fuse.example; import javax.ws.rs.container.ContainerRequestContext; import javax.ws.rs.container.ContainerRequestFilter; import javax.ws.rs.ext.Provider; import javax.annotation.Priority; @Provider @Priority(value = 30) public class SampleContainerRequestFilter implements ContainerRequestFilter { }", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxrs=\"http://cxf.apache.org/blueprint/jaxrs\" xmlns:cxf=\"http://cxf.apache.org/blueprint/core\" > <jaxrs:server id=\"customerService\" address=\"/customers\"> <jaxrs:providers> <ref bean=\"filterProvider\" /> </jaxrs:providers> <bean id=\"filterProvider\" class=\"org.jboss.fuse.example.SampleContainerRequestFilter\"/> </jaxrs:server> </blueprint>", "// Java package javax.ws.rs.container; import java.io.IOException; public interface ContainerResponseFilter { public void filter(ContainerRequestContext requestContext, ContainerResponseContext responseContext) throws IOException; }", "// Java package javax.ws.rs.container; import java.io.OutputStream; import java.lang.annotation.Annotation; import java.lang.reflect.Type; import java.net.URI; import java.util.Date; import java.util.Locale; import java.util.Map; import java.util.Set; import javax.ws.rs.core.EntityTag; import javax.ws.rs.core.Link; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import javax.ws.rs.core.NewCookie; import javax.ws.rs.core.Response; import javax.ws.rs.ext.MessageBodyWriter; public interface ContainerResponseContext { public int getStatus(); public void setStatus(int code); public Response.StatusType getStatusInfo(); public void setStatusInfo(Response.StatusType statusInfo); public MultivaluedMap<String, Object> getHeaders(); public abstract MultivaluedMap<String, String> getStringHeaders(); public String getHeaderString(String name); public Set<String> getAllowedMethods(); public Date getDate(); public Locale getLanguage(); public int getLength(); public MediaType getMediaType(); public Map<String, NewCookie> 
getCookies(); public EntityTag getEntityTag(); public Date getLastModified(); public URI getLocation(); public Set<Link> getLinks(); boolean hasLink(String relation); public Link getLink(String relation); public Link.Builder getLinkBuilder(String relation); public boolean hasEntity(); public Object getEntity(); public Class<?> getEntityClass(); public Type getEntityType(); public void setEntity(final Object entity); public void setEntity( final Object entity, final Annotation[] annotations, final MediaType mediaType); public Annotation[] getEntityAnnotations(); public OutputStream getEntityStream(); public void setEntityStream(OutputStream outputStream); }", "// Java package org.jboss.fuse.example; import javax.annotation.Priority; import javax.ws.rs.container.ContainerRequestContext; import javax.ws.rs.container.ContainerResponseContext; import javax.ws.rs.container.ContainerResponseFilter; import javax.ws.rs.ext.Provider; @Provider @Priority(value = 10) public class SampleContainerResponseFilter implements ContainerResponseFilter { public SampleContainerResponseFilter() { System.out.println(\"SampleContainerResponseFilter starting up\"); } @Override public void filter( ContainerRequestContext requestContext, ContainerResponseContext responseContext ) { // This filter replaces the response message body with a fixed string if (responseContext.hasEntity()) { responseContext.setEntity(\"New message body!\"); } } }", "// Java package org.jboss.fuse.example; import javax.annotation.Priority; import javax.ws.rs.container.ContainerRequestContext; import javax.ws.rs.container.ContainerResponseContext; import javax.ws.rs.container.ContainerResponseFilter; import javax.ws.rs.ext.Provider; @Provider @Priority(value = 10) public class SampleContainerResponseFilter implements ContainerResponseFilter { }", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxrs=\"http://cxf.apache.org/blueprint/jaxrs\" xmlns:cxf=\"http://cxf.apache.org/blueprint/core\" > <jaxrs:server id=\"customerService\" address=\"/customers\"> <jaxrs:providers> <ref bean=\"filterProvider\" /> </jaxrs:providers> <bean id=\"filterProvider\" class=\"org.jboss.fuse.example.SampleContainerResponseFilter\"/> </jaxrs:server> </blueprint>", "// Java package javax.ws.rs.client; import javax.ws.rs.client.ClientRequestFilter; import javax.ws.rs.client.ClientRequestContext; public interface ClientRequestFilter { void filter(ClientRequestContext requestContext) throws IOException; }", "// Java package javax.ws.rs.client; import java.io.OutputStream; import java.lang.annotation.Annotation; import java.lang.reflect.Type; import java.net.URI; import java.util.Collection; import java.util.Date; import java.util.List; import java.util.Locale; import java.util.Map; import javax.ws.rs.core.Configuration; import javax.ws.rs.core.Cookie; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import javax.ws.rs.core.Response; import javax.ws.rs.ext.MessageBodyWriter; public interface ClientRequestContext { public Object getProperty(String name); public Collection<String> getPropertyNames(); public void setProperty(String name, Object object); public void removeProperty(String name); public URI getUri(); public void setUri(URI uri); public String getMethod(); public void setMethod(String method); public MultivaluedMap<String, Object> getHeaders(); public abstract MultivaluedMap<String, String> getStringHeaders(); public String getHeaderString(String name); 
public Date getDate(); public Locale getLanguage(); public MediaType getMediaType(); public List<MediaType> getAcceptableMediaTypes(); public List<Locale> getAcceptableLanguages(); public Map<String, Cookie> getCookies(); public boolean hasEntity(); public Object getEntity(); public Class<?> getEntityClass(); public Type getEntityType(); public void setEntity(final Object entity); public void setEntity( final Object entity, final Annotation[] annotations, final MediaType mediaType); public Annotation[] getEntityAnnotations(); public OutputStream getEntityStream(); public void setEntityStream(OutputStream outputStream); public Client getClient(); public Configuration getConfiguration(); public void abortWith(Response response); }", "// Java package org.jboss.fuse.example; import javax.ws.rs.client.ClientRequestContext; import javax.ws.rs.client.ClientRequestFilter; import javax.annotation.Priority; @Priority(value = 20) public class SampleClientRequestFilter implements ClientRequestFilter { public SampleClientRequestFilter() { System.out.println(\"SampleClientRequestFilter starting up\"); } @Override public void filter(ClientRequestContext requestContext) { System.out.println(\"ClientRequestFilter.filter() invoked\"); } }", "// Java package org.jboss.fuse.example; import javax.ws.rs.client.ClientRequestContext; import javax.ws.rs.client.ClientRequestFilter; import javax.ws.rs.core.Response; import javax.ws.rs.core.Response.Status; import javax.annotation.Priority; @Priority(value = 10) public class TestAbortClientRequestFilter implements ClientRequestFilter { public TestAbortClientRequestFilter() { System.out.println(\"TestAbortClientRequestFilter starting up\"); } @Override public void filter(ClientRequestContext requestContext) { // Test filter: aborts with BAD_REQUEST status requestContext.abortWith(Response.status(Status.BAD_REQUEST).build()); } }", "// Java import javax.ws.rs.client.Client; import javax.ws.rs.client.ClientBuilder; import javax.ws.rs.client.Invocation; import javax.ws.rs.client.WebTarget; import javax.ws.rs.core.Response; Client client = ClientBuilder.newClient(); client.register(new SampleClientRequestFilter()); WebTarget target = client .target(\"http://localhost:8001/rest/TestAbortClientRequest\"); target.register(new TestAbortClientRequestFilter());", "// Java package javax.ws.rs.client; import java.io.IOException; public interface ClientResponseFilter { void filter(ClientRequestContext requestContext, ClientResponseContext responseContext) throws IOException; }", "// Java package javax.ws.rs.client; import java.io.InputStream; import java.net.URI; import java.util.Date; import java.util.Locale; import java.util.Map; import java.util.Set; import javax.ws.rs.core.EntityTag; import javax.ws.rs.core.Link; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import javax.ws.rs.core.NewCookie; import javax.ws.rs.core.Response; public interface ClientResponseContext { public int getStatus(); public void setStatus(int code); public Response.StatusType getStatusInfo(); public void setStatusInfo(Response.StatusType statusInfo); public MultivaluedMap<String, String> getHeaders(); public String getHeaderString(String name); public Set<String> getAllowedMethods(); public Date getDate(); public Locale getLanguage(); public int getLength(); public MediaType getMediaType(); public Map<String, NewCookie> getCookies(); public EntityTag getEntityTag(); public Date getLastModified(); public URI getLocation(); public Set<Link> getLinks(); boolean hasLink(String 
relation); public Link getLink(String relation); public Link.Builder getLinkBuilder(String relation); public boolean hasEntity(); public InputStream getEntityStream(); public void setEntityStream(InputStream input); }", "// Java package org.jboss.fuse.example; import javax.ws.rs.client.ClientRequestContext; import javax.ws.rs.client.ClientResponseContext; import javax.ws.rs.client.ClientResponseFilter; import javax.annotation.Priority; @Priority(value = 20) public class SampleClientResponseFilter implements ClientResponseFilter { public SampleClientResponseFilter() { System.out.println(\"SampleClientResponseFilter starting up\"); } @Override public void filter( ClientRequestContext requestContext, ClientResponseContext responseContext ) { // Add an extra header on the response responseContext.getHeaders().putSingle(\"MyCustomHeader\", \"my custom data\"); } }", "// Java import javax.ws.rs.client.Client; import javax.ws.rs.client.ClientBuilder; import javax.ws.rs.client.Invocation; import javax.ws.rs.client.WebTarget; import javax.ws.rs.core.Response; Client client = ClientBuilder.newClient(); client.register(new SampleClientResponseFilter());", "// Java package javax.ws.rs.ext; public interface ReaderInterceptor { public Object aroundReadFrom(ReaderInterceptorContext context) throws java.io.IOException, javax.ws.rs.WebApplicationException; }", "// Java package javax.ws.rs.ext; import java.io.IOException; import java.io.InputStream; import javax.ws.rs.WebApplicationException; import javax.ws.rs.core.MultivaluedMap; public interface ReaderInterceptorContext extends InterceptorContext { public Object proceed() throws IOException, WebApplicationException; public InputStream getInputStream(); public void setInputStream(InputStream is); public MultivaluedMap<String, String> getHeaders(); }", "// Java package javax.ws.rs.ext; import java.lang.annotation.Annotation; import java.lang.reflect.Type; import java.util.Collection; import javax.ws.rs.core.MediaType; public interface InterceptorContext { public Object getProperty(String name); public Collection<String> getPropertyNames(); public void setProperty(String name, Object object); public void removeProperty(String name); public Annotation[] getAnnotations(); public void setAnnotations(Annotation[] annotations); Class<?> getType(); public void setType(Class<?> type); Type getGenericType(); public void setGenericType(Type genericType); public MediaType getMediaType(); public void setMediaType(MediaType mediaType); }", "// Java package org.jboss.fuse.example; import java.io.ByteArrayInputStream; import java.io.IOException; import java.io.InputStream; import javax.annotation.Priority; import javax.ws.rs.WebApplicationException; import javax.ws.rs.ext.ReaderInterceptor; import javax.ws.rs.ext.ReaderInterceptorContext; @Priority(value = 10) public class SampleClientReaderInterceptor implements ReaderInterceptor { @Override public Object aroundReadFrom(ReaderInterceptorContext interceptorContext) throws IOException, WebApplicationException { InputStream inputStream = interceptorContext.getInputStream(); byte[] bytes = new byte[inputStream.available()]; inputStream.read(bytes); String responseContent = new String(bytes); responseContent = responseContent.replaceAll(\"COMPANY_NAME\", \"Red Hat\"); interceptorContext.setInputStream(new ByteArrayInputStream(responseContent.getBytes())); return interceptorContext.proceed(); } }", "// Java package org.jboss.fuse.example; import java.io.ByteArrayInputStream; import java.io.IOException; import 
java.io.InputStream; import javax.annotation.Priority; import javax.ws.rs.WebApplicationException; import javax.ws.rs.ext.Provider; import javax.ws.rs.ext.ReaderInterceptor; import javax.ws.rs.ext.ReaderInterceptorContext; @Priority(value = 10) @Provider public class SampleServerReaderInterceptor implements ReaderInterceptor { @Override public Object aroundReadFrom(ReaderInterceptorContext interceptorContext) throws IOException, WebApplicationException { InputStream inputStream = interceptorContext.getInputStream(); byte[] bytes = new byte[inputStream.available()]; inputStream.read(bytes); String requestContent = new String(bytes); requestContent = requestContent.replaceAll(\"COMPANY_NAME\", \"Red Hat\"); interceptorContext.setInputStream(new ByteArrayInputStream(requestContent.getBytes())); return interceptorContext.proceed(); } }", "// Java import javax.ws.rs.client.Client; import javax.ws.rs.client.ClientBuilder; import javax.ws.rs.client.Invocation; import javax.ws.rs.client.WebTarget; import javax.ws.rs.core.Response; Client client = ClientBuilder.newClient(); client.register(SampleClientReaderInterceptor.class);", "// Java package org.jboss.fuse.example; import javax.annotation.Priority; import javax.ws.rs.WebApplicationException; import javax.ws.rs.ext.Provider; import javax.ws.rs.ext.ReaderInterceptor; import javax.ws.rs.ext.ReaderInterceptorContext; @Priority(value = 10) @Provider public class SampleServerReaderInterceptor implements ReaderInterceptor { }", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxrs=\"http://cxf.apache.org/blueprint/jaxrs\" xmlns:cxf=\"http://cxf.apache.org/blueprint/core\" > <jaxrs:server id=\"customerService\" address=\"/customers\"> <jaxrs:providers> <ref bean=\"interceptorProvider\" /> </jaxrs:providers> <bean id=\"interceptorProvider\" class=\"org.jboss.fuse.example.SampleServerReaderInterceptor\"/> </jaxrs:server> </blueprint>", "// Java package javax.ws.rs.ext; public interface WriterInterceptor { void aroundWriteTo(WriterInterceptorContext context) throws java.io.IOException, javax.ws.rs.WebApplicationException; }", "// Java package javax.ws.rs.ext; import java.io.IOException; import java.io.OutputStream; import javax.ws.rs.WebApplicationException; import javax.ws.rs.core.MultivaluedMap; public interface WriterInterceptorContext extends InterceptorContext { void proceed() throws IOException, WebApplicationException; Object getEntity(); void setEntity(Object entity); OutputStream getOutputStream(); public void setOutputStream(OutputStream os); MultivaluedMap<String, Object> getHeaders(); }", "// Java package org.jboss.fuse.example; import java.io.IOException; import java.io.OutputStream; import javax.ws.rs.WebApplicationException; import javax.ws.rs.ext.WriterInterceptor; import javax.ws.rs.ext.WriterInterceptorContext; import javax.annotation.Priority; @Priority(value = 10) public class SampleClientWriterInterceptor implements WriterInterceptor { @Override public void aroundWriteTo(WriterInterceptorContext interceptorContext) throws IOException, WebApplicationException { OutputStream outputStream = interceptorContext.getOutputStream(); String appendedContent = \"\\nInterceptors always get the last word in.\"; outputStream.write(appendedContent.getBytes()); interceptorContext.setOutputStream(outputStream); interceptorContext.proceed(); } }", "// Java package org.jboss.fuse.example; import java.io.IOException; import java.io.OutputStream; import 
javax.ws.rs.WebApplicationException; import javax.ws.rs.ext.Provider; import javax.ws.rs.ext.WriterInterceptor; import javax.ws.rs.ext.WriterInterceptorContext; import javax.annotation.Priority; @Priority(value = 10) @Provider public class SampleServerWriterInterceptor implements WriterInterceptor { @Override public void aroundWriteTo(WriterInterceptorContext interceptorContext) throws IOException, WebApplicationException { OutputStream outputStream = interceptorContext.getOutputStream(); String appendedContent = \"\\nInterceptors always get the last word in.\"; outputStream.write(appendedContent.getBytes()); interceptorContext.setOutputStream(outputStream); interceptorContext.proceed(); } }", "// Java import javax.ws.rs.client.Client; import javax.ws.rs.client.ClientBuilder; import javax.ws.rs.client.Invocation; import javax.ws.rs.client.WebTarget; import javax.ws.rs.core.Response; Client client = ClientBuilder.newClient(); client.register(SampleClientReaderInterceptor.class);", "// Java package org.jboss.fuse.example; import javax.ws.rs.WebApplicationException; import javax.ws.rs.ext.Provider; import javax.ws.rs.ext.WriterInterceptor; import javax.ws.rs.ext.WriterInterceptorContext; import javax.annotation.Priority; @Priority(value = 10) @Provider public class SampleServerWriterInterceptor implements WriterInterceptor { }", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxrs=\"http://cxf.apache.org/blueprint/jaxrs\" xmlns:cxf=\"http://cxf.apache.org/blueprint/core\" > <jaxrs:server id=\"customerService\" address=\"/customers\"> <jaxrs:providers> <ref bean=\"interceptorProvider\" /> </jaxrs:providers> <bean id=\"interceptorProvider\" class=\"org.jboss.fuse.example.SampleServerWriterInterceptor\"/> </jaxrs:server> </blueprint>", "// Java package javax.ws.rs.container; import javax.ws.rs.core.FeatureContext; import javax.ws.rs.ext.ReaderInterceptor; import javax.ws.rs.ext.WriterInterceptor; public interface DynamicFeature { public void configure(ResourceInfo resourceInfo, FeatureContext context); }", "// Java import javax.ws.rs.container.DynamicFeature; import javax.ws.rs.container.ResourceInfo; import javax.ws.rs.core.FeatureContext; import javax.ws.rs.ext.Provider; @Provider public class DynamicLoggingFilterFeature implements DynamicFeature { @Override void configure(ResourceInfo resourceInfo, FeatureContext context) { if (MyResource.class.isAssignableFrom(resourceInfo.getResourceClass()) && resourceInfo.getResourceMethod().isAnnotationPresent(GET.class)) { context.register(new LoggingFilter()); } }", "// Java package javax.ws.rs.core; public interface FeatureContext extends Configurable<FeatureContext> { }", "// Java package javax.ws.rs.core; import java.util.Map; public interface Configurable<C extends Configurable> { public Configuration getConfiguration(); public C property(String name, Object value); public C register(Class<?> componentClass); public C register(Class<?> componentClass, int priority); public C register(Class<?> componentClass, Class<?>... contracts); public C register(Class<?> componentClass, Map<Class<?>, Integer> contracts); public C register(Object component); public C register(Object component, int priority); public C register(Object component, Class<?>... contracts); public C register(Object component, Map<Class<?>, Integer> contracts); }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/JAXRS20Filters
Updating clusters
Updating clusters OpenShift Container Platform 4.7 Updating OpenShift Container Platform clusters Red Hat OpenShift Documentation Team
[ "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: updateservice-registry: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 2 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "apiVersion: v1 kind: Namespace metadata: name: openshift-update-service annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 1", "oc create -f <filename>.yaml", "oc create -f update-service-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: update-service-operator-group spec: targetNamespaces: - openshift-update-service", "oc -n openshift-update-service create -f <filename>.yaml", "oc -n openshift-update-service create -f update-service-operator-group.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: update-service-subscription spec: channel: v1 installPlanApproval: \"Automatic\" source: \"redhat-operators\" 1 sourceNamespace: \"openshift-marketplace\" name: \"cincinnati-operator\"", "oc create -f <filename>.yaml", "oc -n openshift-update-service create -f update-service-subscription.yaml", "oc -n openshift-update-service get clusterserviceversions", "NAME DISPLAY VERSION REPLACES PHASE update-service-operator.v4.6.0 OpenShift Update Service 4.6.0 Succeeded", "FROM registry.access.redhat.com/ubi8/ubi:8.1 RUN curl -L -o cincinnati-graph-data.tar.gz https://github.com/openshift/cincinnati-graph-data/archive/master.tar.gz CMD exec /bin/bash -c \"tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati/graph-data/ --strip-components=1\"", "podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest", "podman push registry.example.com/openshift/graph-data:latest", "x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0", "OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "LOCAL_RELEASE_IMAGES_REPOSITORY='<local_release_images_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<server_architecture>", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc image mirror -a USD{LOCAL_SECRET_JSON} 
USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "NAMESPACE=openshift-update-service", "NAME=service", "RELEASE_IMAGES=registry.example.com/ocp4/openshift4-release-images", "GRAPH_DATA_IMAGE=registry.example.com/openshift/graph-data:latest", "oc -n \"USD{NAMESPACE}\" create -f - <<EOF apiVersion: updateservice.operator.openshift.io/v1 kind: UpdateService metadata: name: USD{NAME} spec: replicas: 2 releases: USD{RELEASE_IMAGES} graphDataImage: USD{GRAPH_DATA_IMAGE} EOF", "while sleep 1; do POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"; SCHEME=\"USD{POLICY_ENGINE_GRAPH_URI%%:*}\"; if test \"USD{SCHEME}\" = http -o \"USD{SCHEME}\" = https; then break; fi; done", "while sleep 10; do HTTP_CODE=\"USD(curl --header Accept:application/json --output /dev/stderr --write-out \"%{http_code}\" \"USD{POLICY_ENGINE_GRAPH_URI}?channel=stable-4.6\")\"; if test \"USD{HTTP_CODE}\" -eq 200; then break; fi; echo \"USD{HTTP_CODE}\"; done", "NAMESPACE=openshift-update-service", "NAME=service", "POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"", "PATCH=\"{\\\"spec\\\":{\\\"upstream\\\":\\\"USD{POLICY_ENGINE_GRAPH_URI}\\\"}}\"", "oc patch clusterversion version -p USDPATCH --type merge", "oc get updateservice -n openshift-update-service", "NAME AGE service 6s", "oc delete updateservice service -n openshift-update-service", "updateservice.updateservice.operator.openshift.io \"service\" deleted", "oc project openshift-update-service", "Now using project \"openshift-update-service\" on server \"https://example.com:6443\".", "oc get operatorgroup", "NAME AGE openshift-update-service-fprx2 4m41s", "oc delete operatorgroup openshift-update-service-fprx2", "operatorgroup.operators.coreos.com \"openshift-update-service-fprx2\" deleted", "oc get subscription", "NAME PACKAGE SOURCE CHANNEL update-service-operator update-service-operator updateservice-index-catalog v1", "oc get subscription update-service-operator -o yaml | grep \" currentCSV\"", "currentCSV: update-service-operator.v0.0.1", "oc delete subscription update-service-operator", "subscription.operators.coreos.com \"update-service-operator\" deleted", "oc delete clusterserviceversion update-service-operator.v0.0.1", "clusterserviceversion.operators.coreos.com \"update-service-operator.v0.0.1\" deleted", "oc patch clusterversion version --type json -p '[{\"op\": \"add\", \"path\": \"/spec/channel\", \"value\": \"<channel>\"}]'", "spec: clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a upstream: '<update-server-url>' 1", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.9 True False 158m Cluster version is 4.6.9", "oc get clusterversion -o json|jq \".items[0].spec\"", "{ \"channel\": \"stable-4.7\", \"clusterID\": \"990f7ab8-109b-4c95-8480-2bd1deec55ff\" }", "oc adm upgrade", "Cluster version is 4.1.0 Updates: VERSION IMAGE 4.1.2 
quay.io/openshift-release-dev/ocp-release@sha256:9c5f0df8b192a0d7b46cd5f6a4da2289c155fd5302dec7954f8f06c878160b8b", "oc adm upgrade --to-latest=true 1", "oc adm upgrade --to=<version> 1", "oc get clusterversion -o json|jq \".items[0].spec\"", "{ \"channel\": \"stable-4.7\", \"clusterID\": \"990f7ab8-109b-4c95-8480-2bd1deec55ff\", \"desiredUpdate\": { \"force\": false, \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:9c5f0df8b192a0d7b46cd5f6a4da2289c155fd5302dec7954f8f06c878160b8b\", \"version\": \"4.7.0\" 1 } }", "oc get clusterversion -o json|jq \".items[0].status.history\"", "[ { \"completionTime\": null, \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:b8fa13e09d869089fc5957c32b02b7d3792a0b6f36693432acc0409615ab23b7\", \"startedTime\": \"2021-01-28T20:30:50Z\", \"state\": \"Partial\", \"verified\": true, \"version\": \"4.7.0\" }, { \"completionTime\": \"2021-01-28T20:30:50Z\", \"image\": \"quay.io/openshift-release-dev/ocp-release@sha256:b8fa13e09d869089fc5957c32b02b7d3792a0b6f36693432acc0409615ab23b7\", \"startedTime\": \"2021-01-28T17:38:10Z\", \"state\": \"Completed\", \"verified\": false, \"version\": \"4.7.0\" } ]", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.7.0 True False 2m Cluster version is 4.7.0", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.20.0 ip-10-0-170-223.ec2.internal Ready master 82m v1.20.0 ip-10-0-179-95.ec2.internal Ready worker 70m v1.20.0 ip-10-0-182-134.ec2.internal Ready worker 70m v1.20.0 ip-10-0-211-16.ec2.internal Ready master 82m v1.20.0 ip-10-0-250-100.ec2.internal Ready worker 69m v1.20.0", "oc patch clusterversion/version --patch '{\"spec\":{\"upstream\":\"<update-server-url>\"}}' --type=merge", "clusterversion.config.openshift.io/version patched", "oc get -l 'node-role.kubernetes.io/master!=' -o 'jsonpath={range .items[*]}{.metadata.name}{\"\\n\"}{end}' nodes", "ci-ln-pwnll6b-f76d1-s8t9n-worker-a-s75z4 ci-ln-pwnll6b-f76d1-s8t9n-worker-b-dglj2 ci-ln-pwnll6b-f76d1-s8t9n-worker-c-lldbm", "oc label node <node name> node-role.kubernetes.io/<custom-label>=", "oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary=", "node/ci-ln-gtrwm8t-f76d1-spbl7-worker-a-xk76k labeled", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: workerpool-canary 1 spec: machineConfigSelector: matchExpressions: 2 - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,workerpool-canary] } nodeSelector: matchLabels: node-role.kubernetes.io/workerpool-canary: \"\" 3", "oc create -f <file_name>", "machineconfigpool.machineconfiguration.openshift.io/workerpool-canary created", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-b0bb90c4921860f2a5d8a2f8137c1867 True False False 3 3 3 0 97m workerpool-canary rendered-workerpool-canary-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 1 1 1 0 2m42s worker rendered-worker-87ba3dec1ad78cb6aecebf7fbb476a36 True False False 2 2 2 0 97m", "oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":true}}' --type=merge", "oc patch mcp/workerpool-canary --patch '{\"spec\":{\"paused\":true}}' --type=merge", "machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched", "oc patch mcp/<mcp_name> --patch '{\"spec\":{\"paused\":false}}' --type=merge", "oc patch mcp/workerpool-canary --patch 
'{\"spec\":{\"paused\":false}}' --type=merge", "machineconfigpool.machineconfiguration.openshift.io/workerpool-canary patched", "oc label node <node_name> node-role.kubernetes.io/<custom-label>-", "oc label node ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz node-role.kubernetes.io/workerpool-canary-", "node/ci-ln-0qv1yp2-f76d1-kl2tq-worker-a-j2ssz labeled", "USDoc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-1203f157d053fd987c7cbd91e3fbc0ed True False False 3 3 3 0 61m workerpool-canary rendered-mcp-noupdate-5ad4791166c468f3a35cd16e734c9028 True False False 0 0 0 0 21m worker rendered-worker-5ad4791166c468f3a35cd16e734c9028 True False False 3 3 3 0 61m", "oc delete mcp <mcp_name>", "--- Trivial example forcing an operator to acknowledge the start of an upgrade file=/home/user/openshift-ansible/hooks/pre_compute.yml - name: note the start of a compute machine update debug: msg: \"Compute machine upgrade of {{ inventory_hostname }} is about to start\" - name: require the user agree to start an upgrade pause: prompt: \"Press Enter to start the compute machine update\"", "[all:vars] openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml openshift_node_post_upgrade_hook=/home/user/openshift-ansible/hooks/post_node.yml", "systemctl disable --now firewalld.service", "subscription-manager repos --disable=rhel-7-server-ose-4.6-rpms --enable=rhel-7-server-ansible-2.9-rpms --enable=rhel-7-server-ose-4.7-rpms", "yum update openshift-ansible openshift-clients", "subscription-manager repos --disable=rhel-7-server-ose-4.6-rpms --enable=rhel-7-server-ose-4.7-rpms --enable=rhel-7-fast-datapath-rpms --enable=rhel-7-server-optional-rpms", "oc get node", "NAME STATUS ROLES AGE VERSION mycluster-control-plane-0 Ready master 145m v1.20.0 mycluster-control-plane-1 Ready master 145m v1.20.0 mycluster-control-plane-2 Ready master 145m v1.20.0 mycluster-rhel7-0 NotReady,SchedulingDisabled worker 98m v1.14.6+97c81d00e mycluster-rhel7-1 Ready worker 98m v1.14.6+97c81d00e mycluster-rhel7-2 Ready worker 98m v1.14.6+97c81d00e mycluster-rhel7-3 Ready worker 98m v1.14.6+97c81d00e", "[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel7-0.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml 1", "oc get node", "NAME STATUS ROLES AGE VERSION mycluster-control-plane-0 Ready master 145m v1.20.0 mycluster-control-plane-1 Ready master 145m v1.20.0 mycluster-control-plane-2 Ready master 145m v1.20.0 mycluster-rhel7-0 NotReady,SchedulingDisabled worker 98m v1.20.0 mycluster-rhel7-1 Ready worker 98m v1.20.0 mycluster-rhel7-2 Ready worker 98m v1.20.0 mycluster-rhel7-3 Ready worker 98m v1.20.0", "yum update", "tar xvzf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "cat ./pull-secret.text | jq . 
> <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "export OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<server_architecture>", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc apply -f USD{REMOVABLE_MEDIA_PATH}/mirror/config/<image_signature_file> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --apply-release-image-signature", "OCP_RELEASE_NUMBER=<release_version> 1", "ARCHITECTURE=<server_architecture> 1", "DIGEST=\"USD(oc adm release info quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_NUMBER}-USD{ARCHITECTURE} | sed -n 's/Pull From: .*@//p')\"", "DIGEST_ALGO=\"USD{DIGEST%%:*}\"", "DIGEST_ENCODED=\"USD{DIGEST#*:}\"", "SIGNATURE_BASE64=USD(curl -s \"https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/USD{DIGEST_ALGO}=USD{DIGEST_ENCODED}/signature-1\" | base64 -w0 && echo)", "cat >checksum-USD{OCP_RELEASE_NUMBER}.yaml <<EOF apiVersion: v1 kind: ConfigMap metadata: name: release-image-USD{OCP_RELEASE_NUMBER} namespace: openshift-config-managed labels: release.openshift.io/verification-signatures: \"\" binaryData: USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} EOF", "oc apply -f checksum-USD{OCP_RELEASE_NUMBER}.yaml", "oc adm upgrade --allow-explicit-upgrade --to-image USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}<sha256_sum_value> 1", "skopeo copy docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 docker://example.io/example/ubi-minimal", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 source: 
registry.access.redhat.com/ubi8/ubi-minimal 2 - mirrors: - example.com/example/ubi-minimal source: registry.access.redhat.com/ubi8/ubi-minimal - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 3", "oc create -f registryrepomirror.yaml", "oc get node", "NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.20.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.20.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.20.0 ip-10-0-147-35.ec2.internal Ready,SchedulingDisabled worker 7m v1.20.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.20.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.20.0", "oc debug node/ip-10-0-147-35.ec2.internal", "Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`", "sh-4.2# chroot /host", "sh-4.2# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] location = \"registry.access.redhat.com/ubi8/\" insecure = false blocked = false mirror-by-digest-only = true prefix = \"\" [[registry.mirror]] location = \"example.io/example/ubi8-minimal\" insecure = false [[registry.mirror]] location = \"example.com/example/ubi8-minimal\" insecure = false", "sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6", "oc adm catalog mirror <local_registry>/<pull_spec> <local_registry> -a <pull_secret_file> --icsp-scope=registry", "oc apply -f imageContentSourcePolicy.yaml", "oc get ImageContentSourcePolicy -o yaml", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.openshift.io/v1alpha1\",\"kind\":\"ImageContentSourcePolicy\",\"metadata\":{\"annotations\":{},\"name\":\"redhat-operator-index\"},\"spec\":{\"repositoryDigestMirrors\":[{\"mirrors\":[\"local.registry:5000\"],\"source\":\"registry.redhat.io\"}]}}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/updating_clusters/index
Chapter 6. Installing a three-node cluster on vSphere
Chapter 6. Installing a three-node cluster on vSphere In OpenShift Container Platform version 4.18, you can install a three-node cluster on VMware vSphere. A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. 6.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: Configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. In a three-node cluster, the Ingress Controller pods run on the control plane nodes. For more information, see the "Load balancing requirements for user-provisioned infrastructure". After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in the cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on vSphere with user-provisioned infrastructure". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 6.2. Next steps Installing a cluster on vSphere with customizations Installing a cluster on vSphere with user-provisioned infrastructure
[ "apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_vmware_vsphere/installing-vsphere-three-node
Using Octavia for Load Balancing-as-a-Service
Using Octavia for Load Balancing-as-a-Service Red Hat OpenStack Platform 16.2 Octavia administration and how to use Octavia to load balance network traffic across the data plane. OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/using_octavia_for_load_balancing-as-a-service/index
Chapter 6. Managing alerts
Chapter 6. Managing alerts 6.1. Managing alerts as an Administrator In OpenShift Container Platform, the Alerting UI enables you to manage alerts, silences, and alerting rules. Note The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. For example, if you are logged in as a user with the cluster-admin role, you can access all alerts, silences, and alerting rules. 6.1.1. Accessing the Alerting UI from the Administrator perspective The Alerting UI is accessible through the Administrator perspective of the OpenShift Container Platform web console. From the Administrator perspective, go to Observe Alerting . The three main pages in the Alerting UI in this perspective are the Alerts , Silences , and Alerting rules pages. Additional resources Searching and filtering alerts, silences, and alerting rules 6.1.2. Getting information about alerts, silences, and alerting rules from the Administrator perspective The Alerting UI provides detailed information about alerts and their governing alerting rules and silences. Prerequisites You have access to the cluster as a user with view permissions for the project that you are viewing alerts for. Procedure To obtain information about alerts: From the Administrator perspective of the OpenShift Container Platform web console, go to the Observe Alerting Alerts page. Optional: Search for alerts by name by using the Name field in the search list. Optional: Filter alerts by state, severity, and source by selecting filters in the Filter list. Optional: Sort the alerts by clicking one or more of the Name , Severity , State , and Source column headers. Click the name of an alert to view its Alert details page. The page includes a graph that illustrates alert time series data. It also provides the following information about the alert: A description of the alert Messages associated with the alert Labels attached to the alert A link to its governing alerting rule Silences for the alert, if any exist To obtain information about silences: From the Administrator perspective of the OpenShift Container Platform web console, go to the Observe Alerting Silences page. Optional: Filter the silences by name using the Search by name field. Optional: Filter silences by state by selecting filters in the Filter list. By default, Active and Pending filters are applied. Optional: Sort the silences by clicking one or more of the Name , Firing alerts , State , and Creator column headers. Select the name of a silence to view its Silence details page. The page includes the following details: Alert specification Start time End time Silence state Number and list of firing alerts To obtain information about alerting rules: From the Administrator perspective of the OpenShift Container Platform web console, go to the Observe Alerting Alerting rules page. Optional: Filter alerting rules by state, severity, and source by selecting filters in the Filter list. Optional: Sort the alerting rules by clicking one or more of the Name , Severity , Alert state , and Source column headers. Select the name of an alerting rule to view its Alerting rule details page. The page provides the following details about the alerting rule: Alerting rule name, severity, and description. The expression that defines the condition for firing the alert. The time for which the condition should be true for an alert to fire. A graph for each alert governed by the alerting rule, showing the value with which the alert is firing. 
A table of all alerts governed by the alerting rule. Additional resources Cluster Monitoring Operator runbooks (Cluster Monitoring Operator GitHub repository) 6.1.3. Managing silences You can create a silence for an alert in the OpenShift Container Platform web console in the Administrator perspective. After you create silences, you can view, edit, and expire them. You also do not receive notifications about a silenced alert when the alert fires. Note When you create silences, they are replicated across Alertmanager pods. However, if you do not configure persistent storage for Alertmanager, silences might be lost. This can happen, for example, if all Alertmanager pods restart at the same time. Additional resources Managing silences Configuring persistent storage 6.1.3.1. Silencing alerts from the Administrator perspective You can silence a specific alert or silence alerts that match a specification that you define. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure To silence a specific alert: From the Administrator perspective of the OpenShift Container Platform web console, go to Observe Alerting Alerts . For the alert that you want to silence, click and select Silence alert to open the Silence alert page with a default configuration for the chosen alert. Optional: Change the default configuration details for the silence. Note You must add a comment before saving a silence. To save the silence, click Silence . To silence a set of alerts: From the Administrator perspective of the OpenShift Container Platform web console, go to Observe Alerting Silences . Click Create silence . On the Create silence page, set the schedule, duration, and label details for an alert. Note You must add a comment before saving a silence. To create silences for alerts that match the labels that you entered, click Silence . 6.1.3.2. Editing silences from the Administrator perspective You can edit a silence, which expires the existing silence and creates a new one with the changed configuration. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the cluster-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console. Procedure From the Administrator perspective of the OpenShift Container Platform web console, go to Observe Alerting Silences . For the silence you want to modify, click and select Edit silence . Alternatively, you can click Actions and select Edit silence on the Silence details page for a silence. On the Edit silence page, make changes and click Silence . Doing so expires the existing silence and creates one with the updated configuration. 6.1.3.3. Expiring silences from the Administrator perspective You can expire a single silence or multiple silences. Expiring a silence deactivates it permanently. Note You cannot delete expired, silenced alerts. Expired silences older than 120 hours are garbage collected. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the cluster-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. 
The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console. Procedure Go to Observe Alerting Silences . For the silence or silences you want to expire, select the checkbox in the corresponding row. Click Expire 1 silence to expire a single selected silence or Expire <n> silences to expire multiple selected silences, where <n> is the number of silences you selected. Alternatively, to expire a single silence you can click Actions and select Expire silence on the Silence details page for a silence. 6.1.4. Managing alerting rules for core platform monitoring The OpenShift Container Platform monitoring includes a large set of default alerting rules for platform metrics. As a cluster administrator, you can customize this set of rules in two ways: Modify the settings for existing platform alerting rules by adjusting thresholds or by adding and modifying labels. For example, you can change the severity label for an alert from warning to critical to help you route and triage issues flagged by an alert. Define and add new custom alerting rules by constructing a query expression based on core platform metrics in the openshift-monitoring project. Additional resources Managing alerting rules for core platform monitoring Tips for optimizing alerting rules for core platform monitoring 6.1.4.1. Creating new alerting rules As a cluster administrator, you can create new alerting rules based on platform metrics. These alerting rules trigger alerts based on the values of chosen metrics. Note If you create a customized AlertingRule resource based on an existing platform alerting rule, silence the original alert to avoid receiving conflicting alerts. To help users understand the impact and cause of the alert, ensure that your alerting rule contains an alert message and severity value. Prerequisites You have access to the cluster as a user that has the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure Create a new YAML configuration file named example-alerting-rule.yaml . Add an AlertingRule resource to the YAML file. The following example creates a new alerting rule named example , similar to the default Watchdog alert: apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: example namespace: openshift-monitoring 1 spec: groups: - name: example-rules rules: - alert: ExampleAlert 2 for: 1m 3 expr: vector(1) 4 labels: severity: warning 5 annotations: message: This is an example alert. 6 1 Ensure that the namespace is openshift-monitoring . 2 The name of the alerting rule you want to create. 3 The duration for which the condition should be true before an alert is fired. 4 The PromQL query expression that defines the new rule. 5 The severity that alerting rule assigns to the alert. 6 The message associated with the alert. Important You must create the AlertingRule object in the openshift-monitoring namespace. Otherwise, the alerting rule is not accepted. Apply the configuration file to the cluster: USD oc apply -f example-alerting-rule.yaml 6.1.4.2. Modifying core platform alerting rules As a cluster administrator, you can modify core platform alerts before Alertmanager routes them to a receiver. For example, you can change the severity label of an alert, add a custom label, or exclude an alert from being sent to Alertmanager. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). 
Procedure Create a new YAML configuration file named example-modified-alerting-rule.yaml . Add an AlertRelabelConfig resource to the YAML file. The following example modifies the severity setting to critical for the default platform watchdog alerting rule: apiVersion: monitoring.openshift.io/v1 kind: AlertRelabelConfig metadata: name: watchdog namespace: openshift-monitoring 1 spec: configs: - sourceLabels: [alertname,severity] 2 regex: "Watchdog;none" 3 targetLabel: severity 4 replacement: critical 5 action: Replace 6 1 Ensure that the namespace is openshift-monitoring . 2 The source labels for the values you want to modify. 3 The regular expression against which the value of sourceLabels is matched. 4 The target label of the value you want to modify. 5 The new value to replace the target label. 6 The relabel action that replaces the old value based on regex matching. The default action is Replace . Other possible values are Keep , Drop , HashMod , LabelMap , LabelDrop , and LabelKeep . Important You must create the AlertRelabelConfig object in the openshift-monitoring namespace. Otherwise, the alert label will not change. Apply the configuration file to the cluster: USD oc apply -f example-modified-alerting-rule.yaml Additional resources Monitoring stack architecture Alertmanager (Prometheus documentation) relabel_config configuration (Prometheus documentation) Alerting (Prometheus documentation) 6.1.5. Managing alerting rules for user-defined projects In OpenShift Container Platform, you can create, view, edit, and remove alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics. Additional resources Creating alerting rules for user-defined projects Managing alerting rules for user-defined projects Optimizing alerting for user-defined projects 6.1.5.1. Creating alerting rules for user-defined projects You can create alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics. Note To help users understand the impact and cause of the alert, ensure that your alerting rule contains an alert message and severity value. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml . Add an alerting rule configuration to the YAML file. The following example creates a new alerting rule named example-alert . The alerting rule fires an alert when the version metric exposed by the sample service becomes 0 : apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job="prometheus-example-app"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5 1 The name of the alerting rule you want to create. 2 The duration for which the condition should be true before an alert is fired. 3 The PromQL query expression that defines the new rule. 4 The severity that alerting rule assigns to the alert. 5 The message associated with the alert. Apply the configuration file to the cluster: USD oc apply -f example-app-alerting-rule.yaml 6.1.5.2. 
Creating cross-project alerting rules for user-defined projects You can create alerting rules for user-defined projects that are not bound to their project of origin by configuring a project in the user-workload-monitoring-config config map. This allows you to create generic alerting rules that get applied to multiple user-defined projects instead of having individual PrometheusRule objects in each user project. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the cluster-admin cluster role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project to edit the user-workload-monitoring-config config map. The monitoring-rules-edit cluster role for the project where you want to create an alerting rule. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Configure projects in which you want to create alerting rules that are not bound to a specific project: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | namespacesWithoutLabelEnforcement: [ <namespace> ] 1 # ... 1 Specify one or more projects in which you want to create cross-project alerting rules. Prometheus and Thanos Ruler for user-defined monitoring do not enforce the namespace label in PrometheusRule objects created in these projects. Create a YAML file for alerting rules. In this example, it is called example-cross-project-alerting-rule.yaml . Add an alerting rule configuration to the YAML file. The following example creates a new cross-project alerting rule called example-security . The alerting rule fires when a user project does not enforce the restricted pod security policy: Example cross-project alerting rule apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-security namespace: ns1 1 spec: groups: - name: pod-security-policy rules: - alert: "ProjectNotEnforcingRestrictedPolicy" 2 for: 5m 3 expr: kube_namespace_labels{namespace!~"(openshift|kube).*|default",label_pod_security_kubernetes_io_enforce!="restricted"} 4 annotations: message: "Restricted policy not enforced. Project {{ USDlabels.namespace }} does not enforce the restricted pod security policy." 5 labels: severity: warning 6 1 Ensure that you specify the project that you defined in the namespacesWithoutLabelEnforcement field. 2 The name of the alerting rule you want to create. 3 The duration for which the condition should be true before an alert is fired. 4 The PromQL query expression that defines the new rule. 5 The message associated with the alert. 6 The severity that alerting rule assigns to the alert. Important Ensure that you create a specific cross-project alerting rule in only one of the projects that you specified in the namespacesWithoutLabelEnforcement field. If you create the same cross-project alerting rule in multiple projects, it results in repeated alerts. Apply the configuration file to the cluster: USD oc apply -f example-cross-project-alerting-rule.yaml Additional resources Monitoring stack architecture Alerting (Prometheus documentation) 6.1.5.3. 
Listing alerting rules for all projects in a single view As a cluster administrator, you can list alerting rules for core OpenShift Container Platform and user-defined projects together in a single view. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure From the Administrator perspective of the OpenShift Container Platform web console, go to Observe Alerting Alerting rules . Select the Platform and User sources in the Filter drop-down menu. Note The Platform source is selected by default. 6.1.5.4. Removing alerting rules for user-defined projects You can remove alerting rules for user-defined projects. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure To remove rule <foo> in <namespace> , run the following: USD oc -n <namespace> delete prometheusrule <foo> 6.1.5.5. Disabling cross-project alerting rules for user-defined projects Creating cross-project alerting rules for user-defined projects is enabled by default. Cluster administrators can disable the capability in the cluster-monitoring-config config map for the following reasons: To prevent user-defined monitoring from overloading the cluster monitoring stack. To prevent buggy alerting rules from being applied to the cluster without having to identify the rule that causes the issue. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config In the cluster-monitoring-config config map, disable the option to create cross-project alerting rules by setting the rulesWithoutLabelEnforcementAllowed value under data/config.yaml/userWorkload to false : kind: ConfigMap apiVersion: v1 metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | userWorkload: rulesWithoutLabelEnforcementAllowed: false # ... Save the file to apply the changes. Additional resources Alertmanager (Prometheus documentation) 6.2. Managing alerts as a Developer In OpenShift Container Platform, the Alerting UI enables you to manage alerts, silences, and alerting rules. Note The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. 6.2.1. Accessing the Alerting UI from the Developer perspective The Alerting UI is accessible through the Developer perspective of the OpenShift Container Platform web console. From the Developer perspective, go to Observe and go to the Alerts tab. Select the project that you want to manage alerts for from the Project: list. In this perspective, alerts, silences, and alerting rules are all managed from the Alerts tab. The results shown in the Alerts tab are specific to the selected project. Note In the Developer perspective, you can select from core OpenShift Container Platform and user-defined projects that you have access to in the Project: <project_name> list. However, alerts, silences, and alerting rules relating to core OpenShift Container Platform projects are not displayed if you are not logged in as a cluster administrator. 
Additional resources Searching and filtering alerts, silences, and alerting rules 6.2.2. Getting information about alerts, silences, and alerting rules from the Developer perspective The Alerting UI provides detailed information about alerts and their governing alerting rules and silences. Prerequisites You have access to the cluster as a user with view permissions for the project that you are viewing alerts for. Procedure To obtain information about alerts, silences, and alerting rules: From the Developer perspective of the OpenShift Container Platform web console, go to the Observe <project_name> Alerts page. View details for an alert, silence, or an alerting rule: Alert details can be viewed by clicking a greater than symbol ( > ) to an alert name and then selecting the alert from the list. Silence details can be viewed by clicking a silence in the Silenced by section of the Alert details page. The Silence details page includes the following information: Alert specification Start time End time Silence state Number and list of firing alerts Alerting rule details can be viewed by clicking the menu to an alert in the Alerts page and then clicking View Alerting Rule . Note Only alerts, silences, and alerting rules relating to the selected project are displayed in the Developer perspective. Additional resources Cluster Monitoring Operator runbooks (Cluster Monitoring Operator GitHub repository) 6.2.3. Managing silences You can create a silence for an alert in the OpenShift Container Platform web console in the Developer perspective. After you create silences, you can view, edit, and expire them. You also do not receive notifications about a silenced alert when the alert fires. Note When you create silences, they are replicated across Alertmanager pods. However, if you do not configure persistent storage for Alertmanager, silences might be lost. This can happen, for example, if all Alertmanager pods restart at the same time. Additional resources Managing silences Configuring persistent storage 6.2.3.1. Silencing alerts from the Developer perspective You can silence a specific alert or silence alerts that match a specification that you define. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the cluster-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console. The monitoring-rules-edit cluster role, which permits you to create and silence alerts in the Developer perspective in the web console. Procedure To silence a specific alert: From the Developer perspective of the OpenShift Container Platform web console, go to Observe and go to the Alerts tab. Select the project that you want to silence an alert for from the Project: list. If necessary, expand the details for the alert by clicking a greater than symbol ( > ) to the alert name. Click the alert message in the expanded view to open the Alert details page for the alert. Click Silence alert to open the Silence alert page with a default configuration for the alert. Optional: Change the default configuration details for the silence. Note You must add a comment before saving a silence. To save the silence, click Silence . 
To silence a set of alerts: From the Developer perspective of the OpenShift Container Platform web console, go to Observe and go to the Silences tab. Select the project that you want to silence alerts for from the Project: list. Click Create silence . On the Create silence page, set the duration and label details for an alert. Note You must add a comment before saving a silence. To create silences for alerts that match the labels that you entered, click Silence . 6.2.3.2. Editing silences from the Developer perspective You can edit a silence, which expires the existing silence and creates a new one with the changed configuration. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the cluster-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-rules-edit cluster role, which permits you to create and silence alerts in the Developer perspective in the web console. Procedure From the Developer perspective of the OpenShift Container Platform web console, go to Observe and go to the Silences tab. Select the project that you want to edit silences for from the Project: list. For the silence you want to modify, click and select Edit silence . Alternatively, you can click Actions and select Edit silence on the Silence details page for a silence. On the Edit silence page, make changes and click Silence . Doing so expires the existing silence and creates one with the updated configuration. 6.2.3.3. Expiring silences from the Developer perspective You can expire a single silence or multiple silences. Expiring a silence deactivates it permanently. Note You cannot delete expired, silenced alerts. Expired silences older than 120 hours are garbage collected. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the cluster-admin role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-rules-edit cluster role, which permits you to create and silence alerts in the Developer perspective in the web console. Procedure From the Developer perspective of the OpenShift Container Platform web console, go to Observe and go to the Silences tab. Select the project that you want to expire a silence for from the Project: list. For the silence or silences you want to expire, select the checkbox in the corresponding row. Click Expire 1 silence to expire a single selected silence or Expire <n> silences to expire multiple selected silences, where <n> is the number of silences you selected. Alternatively, to expire a single silence you can click Actions and select Expire silence on the Silence details page for a silence. 6.2.4. Managing alerting rules for user-defined projects In OpenShift Container Platform, you can create, view, edit, and remove alerting rules for user-defined projects. Those alerting rules will trigger alerts based on the values of the chosen metrics. Additional resources Creating alerting rules for user-defined projects Managing alerting rules for user-defined projects Optimizing alerting for user-defined projects 6.2.4.1. Creating alerting rules for user-defined projects You can create alerting rules for user-defined projects. 
Those alerting rules will trigger alerts based on the values of the chosen metrics. Note To help users understand the impact and cause of the alert, ensure that your alerting rule contains an alert message and severity value. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml . Add an alerting rule configuration to the YAML file. The following example creates a new alerting rule named example-alert . The alerting rule fires an alert when the version metric exposed by the sample service becomes 0 : apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job="prometheus-example-app"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5 1 The name of the alerting rule you want to create. 2 The duration for which the condition should be true before an alert is fired. 3 The PromQL query expression that defines the new rule. 4 The severity that alerting rule assigns to the alert. 5 The message associated with the alert. Apply the configuration file to the cluster: USD oc apply -f example-app-alerting-rule.yaml 6.2.4.2. Creating cross-project alerting rules for user-defined projects You can create alerting rules for user-defined projects that are not bound to their project of origin by configuring a project in the user-workload-monitoring-config config map. This allows you to create generic alerting rules that get applied to multiple user-defined projects instead of having individual PrometheusRule objects in each user project. Prerequisites If you are a cluster administrator, you have access to the cluster as a user with the cluster-admin cluster role. If you are a non-administrator user, you have access to the cluster as a user with the following user roles: The user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project to edit the user-workload-monitoring-config config map. The monitoring-rules-edit cluster role for the project where you want to create an alerting rule. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Configure projects in which you want to create alerting rules that are not bound to a specific project: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | namespacesWithoutLabelEnforcement: [ <namespace> ] 1 # ... 1 Specify one or more projects in which you want to create cross-project alerting rules. Prometheus and Thanos Ruler for user-defined monitoring do not enforce the namespace label in PrometheusRule objects created in these projects. Create a YAML file for alerting rules. In this example, it is called example-cross-project-alerting-rule.yaml . Add an alerting rule configuration to the YAML file. The following example creates a new cross-project alerting rule called example-security . 
The alerting rule fires when a user project does not enforce the restricted pod security policy: Example cross-project alerting rule apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-security namespace: ns1 1 spec: groups: - name: pod-security-policy rules: - alert: "ProjectNotEnforcingRestrictedPolicy" 2 for: 5m 3 expr: kube_namespace_labels{namespace!~"(openshift|kube).*|default",label_pod_security_kubernetes_io_enforce!="restricted"} 4 annotations: message: "Restricted policy not enforced. Project {{ USDlabels.namespace }} does not enforce the restricted pod security policy." 5 labels: severity: warning 6 1 Ensure that you specify the project that you defined in the namespacesWithoutLabelEnforcement field. 2 The name of the alerting rule you want to create. 3 The duration for which the condition should be true before an alert is fired. 4 The PromQL query expression that defines the new rule. 5 The message associated with the alert. 6 The severity that alerting rule assigns to the alert. Important Ensure that you create a specific cross-project alerting rule in only one of the projects that you specified in the namespacesWithoutLabelEnforcement field. If you create the same cross-project alerting rule in multiple projects, it results in repeated alerts. Apply the configuration file to the cluster: USD oc apply -f example-cross-project-alerting-rule.yaml Additional resources Monitoring stack architecture Alerting (Prometheus documentation) 6.2.4.3. Accessing alerting rules for user-defined projects To list alerting rules for a user-defined project, you must have been assigned the monitoring-rules-view cluster role for the project. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a user that has the monitoring-rules-view cluster role for your project. You have installed the OpenShift CLI ( oc ). Procedure To list alerting rules in <project> : USD oc -n <project> get prometheusrule To list the configuration of an alerting rule, run the following: USD oc -n <project> get prometheusrule <rule> -o yaml 6.2.4.4. Removing alerting rules for user-defined projects You can remove alerting rules for user-defined projects. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a cluster administrator or as a user that has the monitoring-rules-edit cluster role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure To remove rule <foo> in <namespace> , run the following: USD oc -n <namespace> delete prometheusrule <foo> Additional resources Alertmanager (Prometheus documentation)
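The section "Modifying core platform alerting rules" earlier in this chapter notes that you can exclude an alert from being sent to Alertmanager, but only the Replace action is demonstrated. The following is a minimal sketch of an AlertRelabelConfig that uses the Drop action, which the action list above names as valid; the alert name ExampleNoisyAlert is a hypothetical placeholder, and the expectation that matching alerts are dropped before routing follows Prometheus relabeling semantics rather than anything stated explicitly in this chapter.

apiVersion: monitoring.openshift.io/v1
kind: AlertRelabelConfig
metadata:
  name: drop-example-noisy-alert
  namespace: openshift-monitoring   # must be created in openshift-monitoring, as with the Replace example
spec:
  configs:
  - sourceLabels: [alertname]       # label values are concatenated and matched against regex
    regex: "ExampleNoisyAlert"      # hypothetical alert name; replace with the alert you want to exclude
    action: Drop                    # drops matching alerts instead of rewriting a label

Apply it in the same way as the other examples, for example with oc apply -f drop-example-noisy-alert.yaml , and then verify the change by checking whether the alert still appears in the Alerting UI.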
[ "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: example namespace: openshift-monitoring 1 spec: groups: - name: example-rules rules: - alert: ExampleAlert 2 for: 1m 3 expr: vector(1) 4 labels: severity: warning 5 annotations: message: This is an example alert. 6", "oc apply -f example-alerting-rule.yaml", "apiVersion: monitoring.openshift.io/v1 kind: AlertRelabelConfig metadata: name: watchdog namespace: openshift-monitoring 1 spec: configs: - sourceLabels: [alertname,severity] 2 regex: \"Watchdog;none\" 3 targetLabel: severity 4 replacement: critical 5 action: Replace 6", "oc apply -f example-modified-alerting-rule.yaml", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job=\"prometheus-example-app\"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5", "oc apply -f example-app-alerting-rule.yaml", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | namespacesWithoutLabelEnforcement: [ <namespace> ] 1 #", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-security namespace: ns1 1 spec: groups: - name: pod-security-policy rules: - alert: \"ProjectNotEnforcingRestrictedPolicy\" 2 for: 5m 3 expr: kube_namespace_labels{namespace!~\"(openshift|kube).*|default\",label_pod_security_kubernetes_io_enforce!=\"restricted\"} 4 annotations: message: \"Restricted policy not enforced. Project {{ USDlabels.namespace }} does not enforce the restricted pod security policy.\" 5 labels: severity: warning 6", "oc apply -f example-cross-project-alerting-rule.yaml", "oc -n <namespace> delete prometheusrule <foo>", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "kind: ConfigMap apiVersion: v1 metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | userWorkload: rulesWithoutLabelEnforcementAllowed: false #", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert 1 for: 1m 2 expr: version{job=\"prometheus-example-app\"} == 0 3 labels: severity: warning 4 annotations: message: This is an example alert. 5", "oc apply -f example-app-alerting-rule.yaml", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | namespacesWithoutLabelEnforcement: [ <namespace> ] 1 #", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-security namespace: ns1 1 spec: groups: - name: pod-security-policy rules: - alert: \"ProjectNotEnforcingRestrictedPolicy\" 2 for: 5m 3 expr: kube_namespace_labels{namespace!~\"(openshift|kube).*|default\",label_pod_security_kubernetes_io_enforce!=\"restricted\"} 4 annotations: message: \"Restricted policy not enforced. 
Project {{ USDlabels.namespace }} does not enforce the restricted pod security policy.\" 5 labels: severity: warning 6", "oc apply -f example-cross-project-alerting-rule.yaml", "oc -n <project> get prometheusrule", "oc -n <project> get prometheusrule <rule> -o yaml", "oc -n <namespace> delete prometheusrule <foo>" ]
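As a further illustration of the guidance to include a message and a severity value in every alerting rule, the following hedged sketch defines warning and critical alerts on the same metric in a single PrometheusRule for a user-defined project. The metric http_requests_error_ratio, the thresholds, and the ns1 namespace are assumptions for illustration only; substitute a metric that your application actually exposes.

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-tiered-alerts
  namespace: ns1
spec:
  groups:
  - name: error-ratio
    rules:
    - alert: HighErrorRatio
      for: 10m                                  # condition must hold for 10 minutes before firing
      expr: http_requests_error_ratio > 0.05    # hypothetical metric exposed by your application
      labels:
        severity: warning
      annotations:
        message: Error ratio has been above 5% for 10 minutes.
    - alert: VeryHighErrorRatio
      for: 5m
      expr: http_requests_error_ratio > 0.20
      labels:
        severity: critical
      annotations:
        message: Error ratio has been above 20% for 5 minutes.

Splitting the thresholds this way lets Alertmanager route the warning and critical alerts to different receivers, which is a common design choice when triaging user-defined workloads.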
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/monitoring/managing-alerts