title | content | commands | url |
---|---|---|---|
Chapter 146. Vert.x HTTP Client | Chapter 146. Vert.x HTTP Client Since Camel 3.5 Only producer is supported The Vert.x HTTP component provides the capability to produce messages to HTTP endpoints via the Vert.x Web Client . 146.1. Dependencies When using vertx-http with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-vertx-http-starter</artifactId> </dependency> 146.2. URI format 146.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 146.3.1. Configuring component options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 146.3.2. Configuring endpoint options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 146.4. Component Options The Vert.x HTTP Client component supports 19 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean responsePayloadAsByteArray (producer) Whether the response body should be byte or as io.vertx.core.buffer.Buffer. true boolean allowJavaSerializedObject (advanced) Whether to allow java serialization when a request has the Content-Type application/x-java-serialized-object This is disabled by default. If you enable this, be aware that Java will deserialize the incoming data from the request. This can be a potential security risk. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean vertx (advanced) To use an existing vertx instead of creating a new instance. Vertx vertxHttpBinding (advanced) A custom VertxHttpBinding which can control how to bind between Vert.x and Camel. VertxHttpBinding vertxOptions (advanced) To provide a custom set of vertx options for configuring vertx. VertxOptions webClientOptions (advanced) To provide a custom set of options for configuring vertx web client. WebClientOptions headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy proxyHost (proxy) The proxy server host address. String proxyPassword (proxy) The proxy server password if authentication is required. String proxyPort (proxy) The proxy server port. Integer proxyType (proxy) The proxy server type. Enum values: HTTP SOCKS4 SOCKS5 ProxyType proxyUsername (proxy) The proxy server username if authentication is required. String basicAuthPassword (security) The password to use for basic authentication. String basicAuthUsername (security) The user name to use for basic authentication. String bearerToken (security) The bearer token to use for bearer token authentication. String sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean 146.5. Endpoint Options The Vert.x HTTP Client endpoint is configured using URI syntax: with the following path and query parameters: 146.5.1. Path Parameters (1 parameters) Name Description Default Type httpUri (producer) Required The HTTP URI to connect to. URI 146.5.2. Query Parameters (23 parameters) Name Description Default Type connectTimeout (producer) The amount of time in milliseconds until a connection is established. A timeout value of zero is interpreted as an infinite timeout. 60000 int cookieStore (producer) A custom CookieStore to use when session management is enabled. If this option is not set then an in-memory CookieStore is used. InMemoryCookieStore CookieStore headerFilterStrategy (producer) A custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. VertxHttpHeaderFilterStrategy HeaderFilterStrategy httpMethod (producer) The HTTP method to use. The HttpMethod header cannot override this option if set. HttpMethod okStatusCodeRange (producer) The status codes which are considered a success response. The values are inclusive. Multiple ranges can be defined, separated by comma, e.g. 200-204,209,301-304. Each range must be a single number or from-to with the dash included. 200-299 String responsePayloadAsByteArray (producer) Whether the response body should be byte or as io.vertx.core.buffer.Buffer. true boolean sessionManagement (producer) Enables session management via WebClientSession. By default the client is configured to use an in-memory CookieStore. The cookieStore option can be used to override this. false boolean throwExceptionOnFailure (producer) Disable throwing HttpOperationFailedException in case of failed responses from the remote server. 
true boolean timeout (producer) The amount of time in milliseconds after which if the request does not return any data within the timeout period a TimeoutException fails the request. Setting zero or a negative value disables the timeout. -1 long transferException (producer) If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was sent back serialized in the response as a application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Camel will deserialize the incoming data from the request to a Java object, which can be a potential security risk. false boolean useCompression (producer) Set whether compression is enabled to handled compressed (E.g gzipped) responses. false boolean vertxHttpBinding (producer) A custom VertxHttpBinding which can control how to bind between Vert.x and Camel. VertxHttpBinding webClientOptions (producer) Sets customized options for configuring the Vert.x WebClient. WebClientOptions lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean proxyHost (proxy) The proxy server host address. String proxyPassword (proxy) The proxy server password if authentication is required. String proxyPort (proxy) The proxy server port. Integer proxyType (proxy) The proxy server type. Enum values: HTTP SOCKS4 SOCKS5 ProxyType proxyUsername (proxy) The proxy server username if authentication is required. String basicAuthPassword (security) The password to use for basic authentication. String basicAuthUsername (security) The user name to use for basic authentication. String bearerToken (security) The bearer token to use for bearer token authentication. String sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters 146.6. Message Headers The Vert.x HTTP Client component supports 8 message header(s), which is/are listed below: Name Description Default Type CamelHttpMethod (producer) Constant: HTTP_METHOD The http method. HttpMethod CamelHttpResponseCode (producer) Constant: HTTP_RESPONSE_CODE The HTTP response code from the external server. Integer CamelHttpResponseText (producer) Constant: HTTP_RESPONSE_TEXT The HTTP response text from the external server. String Content-Type (producer) Constant: CONTENT_TYPE The HTTP content type. Is set on both the IN and OUT message to provide a content type, such as text/html. String CamelHttpQuery (producer) Constant: HTTP_QUERY URI parameters. Will override existing URI parameters set directly on the endpoint. String CamelHttpUri (producer) Constant: HTTP_URI URI to call. Will override the existing URI set directly on the endpoint. This URI is the URI of the http server to call. Its not the same as the Camel endpoint URI, where you can configure endpoint options such as security etc. 
This header does not support that, it's only the URI of the http server. String CamelHttpPath (producer) Constant: HTTP_PATH Request URI's path; the header will be used to build the request URI with the HTTP_URI. String Content-Encoding (producer) Constant: CONTENT_ENCODING The HTTP content encoding. Is set to provide a content encoding, such as gzip. String 146.7. Usage The following example shows how to send a request to an HTTP endpoint. You can override the URI configured on the vertx-http producer via the headers Exchange.HTTP_URI and Exchange.HTTP_PATH . from("direct:start") .to("vertx-http:https://camel.apache.org"); 146.8. URI Parameters The vertx-http producer supports URI parameters to be sent to the HTTP server. The URI parameters can either be set directly on the endpoint URI, or as a header with the key Exchange.HTTP_QUERY on the message. 146.9. Response code Camel handles the response according to the HTTP response code: Response code is in the range 100..299, Camel regards it as a success response. Response code is in the range 300..399, Camel regards it as a redirection response and will throw a HttpOperationFailedException with the information. Response code is 400+, Camel regards it as an external server failure and will throw a HttpOperationFailedException with the information. 146.10. throwExceptionOnFailure The option, throwExceptionOnFailure , can be set to false to prevent the HttpOperationFailedException from being thrown for failed response codes. This allows you to get any response from the remote server. 146.11. Exceptions The HttpOperationFailedException exception contains the following information: The HTTP status code The HTTP status line (text of the status code) Redirect location, if server returned a redirect Response body as a java.lang.String , if server provided a body as response 146.12. HTTP method The following algorithm determines the HTTP method to be used: Use method provided as endpoint configuration ( httpMethod ). Use method provided in header ( Exchange.HTTP_METHOD ). GET if query string is provided in header. GET if endpoint is configured with a query string. POST if there is data to send (body is not null ). GET otherwise. 146.13. HTTP form parameters You can send HTTP form parameters in one of two ways. Set the Exchange.CONTENT_TYPE header to the value application/x-www-form-urlencoded and ensure the message body is a String formatted as form variables. For example, param1=value1&param2=value2 . Set the message body as a MultiMap which allows you to configure form parameter names and values (a short example sketch follows this chapter). 146.14. Multipart form data You can upload text or binary files by setting the message body as a MultipartForm . 146.15. Customizing Vert.x Web Client options When finer control of the Vert.x Web Client configuration is required, you can bind a custom WebClientOptions instance to the registry. WebClientOptions options = new WebClientOptions().setMaxRedirects(5) .setIdleTimeout(10) .setConnectTimeout(3); camelContext.getRegistry().bind("clientOptions", options); Then reference the options on the vertx-http producer. from("direct:start") .to("vertx-http:http://localhost:8080?webClientOptions=#clientOptions") 146.15.1. SSL The Vert.x HTTP component supports SSL/TLS configuration through the Camel JSSE Configuration Utility . It is also possible to configure SSL options by providing a custom WebClientOptions . 146.16. Session Management Session management can be enabled via the sessionManagement URI option. When enabled, an in-memory cookie store is used to track cookies.
This can be overridden by providing a custom CookieStore via the cookieStore URI option. 146.17. Spring Boot Auto-Configuration The component supports 20 options, which are listed below. Name Description Default Type camel.component.vertx-http.allow-java-serialized-object Whether to allow java serialization when a request has the Content-Type application/x-java-serialized-object This is disabled by default. If you enable this, be aware that Java will deserialize the incoming data from the request. This can be a potential security risk. false Boolean camel.component.vertx-http.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.vertx-http.basic-auth-password The password to use for basic authentication. String camel.component.vertx-http.basic-auth-username The user name to use for basic authentication. String camel.component.vertx-http.bearer-token The bearer token to use for bearer token authentication. String camel.component.vertx-http.enabled Whether to enable auto configuration of the vertx-http component. This is enabled by default. Boolean camel.component.vertx-http.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.vertx-http.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.vertx-http.proxy-host The proxy server host address. String camel.component.vertx-http.proxy-password The proxy server password if authentication is required. String camel.component.vertx-http.proxy-port The proxy server port. Integer camel.component.vertx-http.proxy-type The proxy server type. ProxyType camel.component.vertx-http.proxy-username The proxy server username if authentication is required. String camel.component.vertx-http.response-payload-as-byte-array Whether the response body should be byte or as io.vertx.core.buffer.Buffer. true Boolean camel.component.vertx-http.ssl-context-parameters To configure security using SSLContextParameters. The option is a org.apache.camel.support.jsse.SSLContextParameters type. SSLContextParameters camel.component.vertx-http.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.vertx-http.vertx To use an existing vertx instead of creating a new instance. The option is a io.vertx.core.Vertx type. Vertx camel.component.vertx-http.vertx-http-binding A custom VertxHttpBinding which can control how to bind between Vert.x and Camel. The option is a org.apache.camel.component.vertx.http.VertxHttpBinding type. 
VertxHttpBinding camel.component.vertx-http.vertx-options To provide a custom set of vertx options for configuring vertx. The option is a io.vertx.core.VertxOptions type. VertxOptions camel.component.vertx-http.web-client-options To provide a custom set of options for configuring vertx web client. The option is a io.vertx.ext.web.client.WebClientOptions type. WebClientOptions | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-vertx-http-starter</artifactId> </dependency>",
"vertx-http:hostname[:port][/resourceUri][?options]",
"vertx-http:httpUri",
"from(\"direct:start\") .to(\"vertx-http:https://camel.apache.org\");",
"WebClientOptions options = new WebClientOptions().setMaxRedirects(5) .setIdleTimeout(10) .setConnectTimeout(3); camelContext.getRegistry.bind(\"clientOptions\", options);",
"from(\"direct:start\") .to(\"vertx-http:http://localhost:8080?webClientOptions=#clientOptions\")"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-vertx-http-component-starter |
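The form-parameter and header-override behaviour described in the chapter above can be condensed into a short Java DSL sketch. The host names, paths, and parameter names below are illustrative assumptions rather than part of the original chapter; only the MultiMap body type and the Exchange.HTTP_* headers come from the text above.

```java
import io.vertx.core.MultiMap;
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class VertxHttpExamples extends RouteBuilder {
    @Override
    public void configure() {
        // Send application/x-www-form-urlencoded parameters by setting a
        // Vert.x MultiMap as the message body (section 146.13).
        from("direct:submitForm")
            .process(exchange -> {
                MultiMap form = MultiMap.caseInsensitiveMultiMap();
                form.set("param1", "value1");
                form.set("param2", "value2");
                exchange.getMessage().setBody(form);
            })
            .to("vertx-http:http://localhost:8080/orders");

        // Override the target URI, path, and query string per message via
        // headers (sections 146.7 and 146.8); the URI configured on the
        // endpoint is used only when these headers are absent.
        from("direct:dynamicTarget")
            .setHeader(Exchange.HTTP_URI, constant("http://localhost:8080"))
            .setHeader(Exchange.HTTP_PATH, constant("/status"))
            .setHeader(Exchange.HTTP_QUERY, constant("verbose=true"))
            .to("vertx-http:http://localhost:8080");
    }
}
```

Because the first route sets a non-null body, the HTTP method resolution described in section 146.12 selects POST without any explicit httpMethod configuration.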
Chapter 7. Configuring SCAP contents | Chapter 7. Configuring SCAP contents You can upload SCAP data streams and tailoring files to define compliance policies. 7.1. Loading the default SCAP contents By loading the default SCAP contents on Satellite Server, you ensure that the data streams from the SCAP Security Guide (SSG) are loaded and assigned to all organizations and locations. SSG is provided by the operating system of Satellite Server and installed in /usr/share/xml/scap/ssg/content/ . Note that the available data streams depend on the operating system version on which Satellite runs. You can only use this SCAP content to scan hosts that have the same minor RHEL version as your Satellite Server. For more information, see Section 7.2, "Getting supported SCAP contents for RHEL" . Prerequisites Your user account has a role assigned that has the create_scap_contents permission. Procedure Use the following Hammer command on Satellite Server: 7.2. Getting supported SCAP contents for RHEL You can get the latest SCAP Security Guide (SSG) for Red Hat Enterprise Linux on the Red Hat Customer Portal. You have to get a version of SSG that is designated for the minor RHEL version of your hosts. Procedure Access the SCAP Security Guide in the package browser . From the Version menu, select the latest SSG version for the minor version of RHEL that your hosts are running. For example, for RHEL 8.6, select a version named *.el8_6 . Download the package RPM. Extract the data-stream file ( *-ds.xml ) from the RPM. For example: Upload the data stream to Satellite. For more information, see Section 7.3, "Uploading additional SCAP content" . Additional resources Supported versions of the SCAP Security Guide in RHEL in the Red Hat Knowledgebase SCAP Security Guide profiles supported in RHEL 9 in Red Hat Enterprise Linux 9 Security hardening SCAP Security Guide profiles supported in RHEL 8 in Red Hat Enterprise Linux 8 Security hardening SCAP Security Guide profiles supported in RHEL 7 in Red Hat Enterprise Linux 7 Security Guide 7.3. Uploading additional SCAP content You can upload additional SCAP content into Satellite Server, either content created by yourself or obtained elsewhere. Note that Red Hat only provides support for SCAP content obtained from Red Hat. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Your user account has a role assigned that has the create_scap_contents permission. You have acquired a SCAP data-stream file. Procedure In the Satellite web UI, navigate to Hosts > Compliance > SCAP contents . Click Upload New SCAP Content . Enter a title in the Title text box, such as My SCAP Content . In Scap File , click Choose file , navigate to the location containing a SCAP data-stream file and click Open . On the Locations tab, select locations. On the Organizations tab, select organizations. Click Submit . If the SCAP content file is loaded successfully, a message similar to Successfully created My SCAP Content is displayed. CLI procedure Place the SCAP data-stream file to a directory on your Satellite Server, such as /usr/share/xml/scap/my_content/ . Run the following Hammer command on Satellite Server: Verification List the available SCAP contents . The list of SCAP contents includes the new title. 7.4. Tailoring XCCDF profiles You can customize existing XCCDF profiles using tailoring files without editing the original SCAP content. A single tailoring file can contain customizations of multiple XCCDF profiles. 
You can create a tailoring file using the SCAP Workbench tool. For more information on using the SCAP Workbench tool, see Customizing SCAP Security Guide for your use case . Then you can assign a tailoring file to a compliance policy to customize an XCCDF profile in the policy. 7.5. Uploading a tailoring file After uploading a tailoring file, you can apply it in a compliance policy to customize an XCCDF profile. Prerequisites Your user account has a role assigned that has the create_tailoring_files permission. Procedure In the Satellite web UI, navigate to Hosts > Compliance > Tailoring Files and click New Tailoring File . Enter a name in the Name text box. Click Choose File , navigate to the location containing the tailoring file and select Open . Click Submit to upload the chosen tailoring file. | [
"hammer scap-content bulk-upload --type default",
"rpm2cpio scap-security-guide-0.1.69-3.el8_6.noarch.rpm | cpio -iv --to-stdout ./usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml > ssg-rhel-8.6-ds.xml",
"hammer scap-content bulk-upload --type directory --directory /usr/share/xml/scap/my_content/ --location \" My_Location \" --organization \" My_Organization \""
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_security_compliance/Configuring_SCAP_Contents_security-compliance |
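For the verification step in section 7.3 ("List the available SCAP contents"), a Hammer call similar to the following can be used. This is a sketch: the organization name is a placeholder for your own value, and the exact output columns may differ between Satellite versions.

```
hammer scap-content list --organization "My_Organization"
```

The uploaded title, such as My SCAP Content, should appear in the returned list.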
Chapter 1. About Serverless | Chapter 1. About Serverless 1.1. OpenShift Serverless overview OpenShift Serverless provides Kubernetes native building blocks that enable developers to create and deploy serverless, event-driven applications on OpenShift Container Platform. OpenShift Serverless is based on the open source Knative project , which provides portability and consistency for hybrid and multi-cloud environments by enabling an enterprise-grade serverless platform. Note Because OpenShift Serverless releases on a different cadence from OpenShift Container Platform, the OpenShift Serverless documentation is now available as a separate documentation set at Red Hat OpenShift Serverless . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/serverless/about-serverless |
Chapter 3. Red Hat Quay user accounts overview | Chapter 3. Red Hat Quay user accounts overview A user account represents an individual with authenticated access to the platform's features and functionalities. User accounts provide the capability to create and manage repositories, upload and retrieve container images, and control access permissions for these resources. This account is pivotal for organizing and overseeing container image management within Red Hat Quay. You can create and delete new users on the Red Hat Quay UI or by using the Red Hat Quay API. 3.1. Creating a user account by using the UI Use the following procedure to create a new user for your Red Hat Quay repository using the UI. Prerequisites You are logged into your Red Hat Quay deployment as a superuser. Procedure Log in to your Red Hat Quay repository as the superuser. In the navigation pane, select your account name, and then click Super User Admin Panel . Click the Users icon in the column. Click the Create User button. Enter the new user's Username and Email address, and then click the Create User button. You are redirected to the Users page, where there is now another Red Hat Quay user. Note You might need to refresh the Users page to show the additional user. On the Users page, click the Options cogwheel associated with the new user. A drop-down menu appears. Click Change Password . Add the new password, and then click Change User Password . The new user can now use that username and password to log in using the web UI or through their preferred container client, like Podman. 3.2. Creating a user account by using the Red Hat Quay API Use the following procedure to create a new user for your Red Hat Quay repository by using the API. Prerequisites You are logged into your Red Hat Quay deployment as a superuser. You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to create a new user using the POST /api/v1/superuser/users/ endpoint: $ curl -X POST -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" -d '{ "username": "newuser", "email": "[email protected]" }' "https://<quay-server.example.com>/api/v1/superuser/users/" Example output {"username": "newuser", "email": "[email protected]", "password": "123456789", "encrypted_password": "<example_encrypted_password>/JKY9pnDcsw="} Navigate to your Red Hat Quay registry endpoint, for example, quay-server.example.com and log in with the username and password generated from the API call. In this scenario, the username is newuser and the password is 123456789 . Alternatively, you can log in to the registry with the CLI. For example: $ podman login <quay-server.example.com> Example output username: newuser password: 123456789 Optional.
You can obtain a list of all users, including superusers, by using the GET /api/v1/superuser/users/ endpoint: USD curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/superuser/users/" Example output {"users": [{"kind": "user", "name": "quayadmin", "username": "quayadmin", "email": "[email protected]", "verified": true, "avatar": {"name": "quayadmin", "hash": "b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc", "color": "#17becf", "kind": "user"}, "super_user": true, "enabled": true}, {"kind": "user", "name": "newuser", "username": "newuser", "email": "[email protected]", "verified": true, "avatar": {"name": "newuser", "hash": "f338a2c83bfdde84abe2d3348994d70c34185a234cfbf32f9e323e3578e7e771", "color": "#9edae5", "kind": "user"}, "super_user": false, "enabled": true}]} 3.3. Deleting a user by using the UI Use the following procedure to delete a user from your Red Hat Quay repository using the UI. Note that after deleting the user, any repositories that the user had in their private account become unavailable. Note In some cases, when accessing the Users tab in the Superuser Admin Panel of the Red Hat Quay UI, you might encounter a situation where no users are listed. Instead, a message appears, indicating that Red Hat Quay is configured to use external authentication, and users can only be created in that system. This error occurs for one of two reasons: The web UI times out when loading users. When this happens, users are not accessible to perform any operations on. On LDAP authentication. When a userID is changed but the associated email is not. Currently, Red Hat Quay does not allow the creation of a new user with an old email address. When this happens, you must delete the user using the Red Hat Quay API. Prerequisites You are logged into your Red Hat Quay deployment as a superuser. Procedure Log in to your Red Hat Quay repository as the superuser. In the navigation pane, select your account name, and then click Super User Admin Panel . Click the Users icon in the navigation pane. Click the Options cogwheel beside the user to be deleted. Click Delete User , and then confirm deletion by clicking Delete User . 3.4. Deleting a user by using the Red Hat Quay API Use the following procedure to delete a user from Red Hat Quay using the API. Important After deleting the user, any repositories that this user had in his private account become unavailable. Prerequisites You are logged into your Red Hat Quay deployment as a superuser. You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following DELETE /api/v1/superuser/users/{username} command to delete a user from the command line: USD curl -X DELETE -H "Authorization: Bearer <insert token here>" https://<quay-server.example.com>/api/v1/superuser/users/<username> The CLI does not return information when deleting a user from the CLI. To confirm deletion, you can check the Red Hat Quay UI by navigating to Superuser Admin Panel Users , or by entering the following GET /api/v1/superuser/users/ command. You can then check to see if they are present. USD curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/superuser/users/" | [
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"username\": \"newuser\", \"email\": \"[email protected]\" }' \"https://<quay-server.example.com>/api/v1/superuser/users/\"",
"{\"username\": \"newuser\", \"email\": \"[email protected]\", \"password\": \"123456789\", \"encrypted_password\": \"<example_encrypted_password>/JKY9pnDcsw=\"}",
"podman login <quay-server.example.com>",
"username: newuser password: 123456789",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/superuser/users/\"",
"{\"users\": [{\"kind\": \"user\", \"name\": \"quayadmin\", \"username\": \"quayadmin\", \"email\": \"[email protected]\", \"verified\": true, \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}, \"super_user\": true, \"enabled\": true}, {\"kind\": \"user\", \"name\": \"newuser\", \"username\": \"newuser\", \"email\": \"[email protected]\", \"verified\": true, \"avatar\": {\"name\": \"newuser\", \"hash\": \"f338a2c83bfdde84abe2d3348994d70c34185a234cfbf32f9e323e3578e7e771\", \"color\": \"#9edae5\", \"kind\": \"user\"}, \"super_user\": false, \"enabled\": true}]}",
"curl -X DELETE -H \"Authorization: Bearer <insert token here>\" https://<quay-server.example.com>/api/v1/superuser/users/<username>",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/superuser/users/\""
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/use_red_hat_quay/user-create |
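As a quick scripted check after the create or delete calls above, the GET /api/v1/superuser/users/ response can be filtered to show only usernames. This is a convenience sketch that assumes the jq command-line JSON processor is installed; it uses nothing beyond the endpoint and bearer token already shown in this chapter.

```
curl -s -X GET -H "Authorization: Bearer <bearer_token>" \
    "https://<quay-server.example.com>/api/v1/superuser/users/" | jq -r '.users[].username'
```

If a deleted user is still listed, re-run the DELETE call and confirm that the bearer token belongs to a superuser account.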
Chapter 1. Red Hat build of Keycloak features and concepts | Chapter 1. Red Hat build of Keycloak features and concepts Red Hat build of Keycloak is a single sign on solution for web apps and RESTful web services. The goal of Red Hat build of Keycloak is to make security simple so that it is easy for application developers to secure the apps and services they have deployed in their organization. Security features that developers normally have to write for themselves are provided out of the box and are easily tailorable to the individual requirements of your organization. Red Hat build of Keycloak provides customizable user interfaces for login, registration, administration, and account management. You can also use Red Hat build of Keycloak as an integration platform to hook it into existing LDAP and Active Directory servers. You can also delegate authentication to third party identity providers like Facebook and Google. 1.1. Features Red Hat build of Keycloak provides the following features: Single-Sign On and Single-Sign Out for browser applications. OpenID Connect support. OAuth 2.0 support. SAML support. Identity Brokering - Authenticate with external OpenID Connect or SAML Identity Providers. Social Login - Enable login with Google, GitHub, Facebook, Twitter, and other social networks. User Federation - Sync users from LDAP and Active Directory servers. Kerberos bridge - Automatically authenticate users that are logged-in to a Kerberos server. Admin Console for central management of users, roles, role mappings, clients and configuration. Account Console that allows users to centrally manage their account. Theme support - Customize all user facing pages to integrate with your applications and branding. Two-factor Authentication - Support for TOTP/HOTP via Google Authenticator or FreeOTP. Login flows - optional user self-registration, recover password, verify email, require password update, etc. Session management - Admins and users themselves can view and manage user sessions. Token mappers - Map user attributes, roles, etc. how you want into tokens and statements. Not-before revocation policies per realm, application and user. CORS support - Client adapters have built-in support for CORS. Client adapters for JavaScript applications, JBoss EAP, etc. Supports any platform/language that has an OpenID Connect Relying Party library or SAML 2.0 Service Provider library. 1.2. Basic Red Hat build of Keycloak operations Red Hat build of Keycloak is a separate server that you manage on your network. Applications are configured to point to and be secured by this server. Red Hat build of Keycloak uses open protocol standards like OpenID Connect or SAML 2.0 to secure your applications. Browser applications redirect a user's browser from the application to the Red Hat build of Keycloak authentication server where they enter their credentials. This redirection is important because users are completely isolated from applications and applications never see a user's credentials. Applications instead are given an identity token or assertion that is cryptographically signed. These tokens can have identity information like username, address, email, and other profile data. They can also hold permission data so that applications can make authorization decisions. These tokens can also be used to make secure invocations on REST-based services. 1.3. Core concepts and terms Consider these core concepts and terms before attempting to use Red Hat build of Keycloak to secure your web applications and REST services. 
users Users are entities that are able to log into your system. They can have attributes associated with themselves like email, username, address, phone number, and birthday. They can be assigned group membership and have specific roles assigned to them. authentication The process of identifying and validating a user. authorization The process of granting access to a user. credentials Credentials are pieces of data that Red Hat build of Keycloak uses to verify the identity of a user. Some examples are passwords, one-time-passwords, digital certificates, or even fingerprints. roles Roles identify a type or category of user. Admin , user , manager , and employee are all typical roles that may exist in an organization. Applications often assign access and permissions to specific roles rather than individual users as dealing with users can be too fine-grained and hard to manage. user role mapping A user role mapping defines a mapping between a role and a user. A user can be associated with zero or more roles. This role mapping information can be encapsulated into tokens and assertions so that applications can decide access permissions on various resources they manage. composite roles A composite role is a role that can be associated with other roles. For example a superuser composite role could be associated with the sales-admin and order-entry-admin roles. If a user is mapped to the superuser role they also inherit the sales-admin and order-entry-admin roles. groups Groups manage groups of users. Attributes can be defined for a group. You can map roles to a group as well. Users that become members of a group inherit the attributes and role mappings that group defines. realms A realm manages a set of users, credentials, roles, and groups. A user belongs to and logs into a realm. Realms are isolated from one another and can only manage and authenticate the users that they control. clients Clients are entities that can request Red Hat build of Keycloak to authenticate a user. Most often, clients are applications and services that want to use Red Hat build of Keycloak to secure themselves and provide a single sign-on solution. Clients can also be entities that just want to request identity information or an access token so that they can securely invoke other services on the network that are secured by Red Hat build of Keycloak. client adapters Client adapters are plugins that you install into your application environment to be able to communicate and be secured by Red Hat build of Keycloak. Red Hat build of Keycloak has a number of adapters for different platforms that you can download. There are also third-party adapters you can get for environments that we don't cover. consent Consent is when you as an admin want a user to give permission to a client before that client can participate in the authentication process. After a user provides their credentials, Red Hat build of Keycloak will pop up a screen identifying the client requesting a login and what identity information is requested of the user. User can decide whether or not to grant the request. client scopes When a client is registered, you must define protocol mappers and role scope mappings for that client. It is often useful to store a client scope, to make creating new clients easier by sharing some common settings. This is also useful for requesting some claims or roles to be conditionally based on the value of scope parameter. Red Hat build of Keycloak provides the concept of a client scope for this. 
client role Clients can define roles that are specific to them. This is basically a role namespace dedicated to the client. identity token A token that provides identity information about the user. Part of the OpenID Connect specification. access token A token that can be provided as part of an HTTP request that grants access to the service being invoked on. This is part of the OpenID Connect and OAuth 2.0 specification. assertion Information about a user. This usually pertains to an XML blob that is included in a SAML authentication response that provided identity metadata about an authenticated user. service account Each client has a built-in service account which allows it to obtain an access token. direct grant A way for a client to obtain an access token on behalf of a user via a REST invocation. protocol mappers For each client you can tailor what claims and assertions are stored in the OIDC token or SAML assertion. You do this per client by creating and configuring protocol mappers. session When a user logs in, a session is created to manage the login session. A session contains information like when the user logged in and what applications have participated within single-sign on during that session. Both admins and users can view session information. user federation provider Red Hat build of Keycloak can store and manage users. Often, companies already have LDAP or Active Directory services that store user and credential information. You can point Red Hat build of Keycloak to validate credentials from those external stores and pull in identity information. identity provider An identity provider (IDP) is a service that can authenticate a user. Red Hat build of Keycloak is an IDP. identity provider federation Red Hat build of Keycloak can be configured to delegate authentication to one or more IDPs. Social login via Facebook or Google+ is an example of identity provider federation. You can also hook Red Hat build of Keycloak to delegate authentication to any other OpenID Connect or SAML 2.0 IDP. identity provider mappers When doing IDP federation you can map incoming tokens and assertions to user and session attributes. This helps you propagate identity information from the external IDP to your client requesting authentication. required actions Required actions are actions a user must perform during the authentication process. A user will not be able to complete the authentication process until these actions are complete. For example, an admin may schedule users to reset their passwords every month. An update password required action would be set for all these users. authentication flows Authentication flows are work flows a user must perform when interacting with certain aspects of the system. A login flow can define what credential types are required. A registration flow defines what profile information a user must enter and whether something like reCAPTCHA must be used to filter out bots. Credential reset flow defines what actions a user must do before they can reset their password. events Events are audit streams that admins can view and hook into. themes Every screen provided by Red Hat build of Keycloak is backed by a theme. Themes define HTML templates and stylesheets which you can override as needed. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_administration_guide/red_hat_build_of_keycloak_features_and_concepts |
30.2. System Requirements | 30.2. System Requirements Processor Architectures One or more processors implementing the Intel 64 instruction set are required: that is, a processor of the AMD64 or Intel 64 architecture. RAM Each VDO volume has two distinct memory requirements: The VDO module requires 370 MB plus an additional 268 MB per each 1 TB of physical storage managed. The Universal Deduplication Service (UDS) index requires a minimum of 250 MB of DRAM, which is also the default amount that deduplication uses. For details on the memory usage of UDS, see Section 30.2.1, "UDS Index Memory Requirements" . Storage A VDO volume is a thinly provisioned block device. To prevent running out of physical space, place the volume on top of storage that you can expand at a later time. Examples of such expandable storage are LVM volumes or MD RAID arrays. A single VDO volume can be configured to use up to 256 TB of physical storage. See Section 30.2.2, "VDO Storage Space Requirements" for the calculations to determine the usable size of a VDO-managed volume from the physical size of the storage pool the VDO is given. Additional System Software VDO depends on the following software: LVM Python 2.7 The yum package manager will install all necessary software dependencies automatically. Placement of VDO in the Storage Stack As a general rule, you should place certain storage layers under VDO and others on top of VDO: Under VDO: DM-Multipath, DM-Crypt, and software RAID (LVM or mdraid ). On top of VDO: LVM cache, LVM snapshots, and LVM Thin Provisioning. The following configurations are not supported: VDO on top of VDO volumes: storage VDO LVM VDO VDO on top of LVM Snapshots VDO on top of LVM Cache VDO on top of the loopback device VDO on top of LVM Thin Provisioning Encrypted volumes on top of VDO: storage VDO DM-Crypt Partitions on a VDO volume: fdisk , parted , and similar partitions RAID (LVM, MD, or any other type) on top of a VDO volume Important VDO supports two write modes: sync and async . When VDO is in sync mode, writes to the VDO device are acknowledged when the underlying storage has written the data permanently. When VDO is in async mode, writes are acknowledged before being written to persistent storage. It is critical to set the VDO write policy to match the behavior of the underlying storage. By default, VDO write policy is set to the auto option, which selects the appropriate policy automatically. For more information, see Section 30.4.2, "Selecting VDO Write Modes" . 30.2.1. UDS Index Memory Requirements The UDS index consists of two parts: A compact representation is used in memory that contains at most one entry per unique block. An on-disk component which records the associated block names presented to the index as they occur, in order. UDS uses an average of 4 bytes per entry in memory (including cache). The on-disk component maintains a bounded history of data passed to UDS. UDS provides deduplication advice for data that falls within this deduplication window, containing the names of the most recently seen blocks. The deduplication window allows UDS to index data as efficiently as possible while limiting the amount of memory required to index large data repositories. Despite the bounded nature of the deduplication window, most datasets which have high levels of deduplication also exhibit a high degree of temporal locality - in other words, most deduplication occurs among sets of blocks that were written at about the same time. 
Furthermore, in general, data being written is more likely to duplicate data that was recently written than data that was written a long time ago. Therefore, for a given workload over a given time interval, deduplication rates will often be the same whether UDS indexes only the most recent data or all the data. Because duplicate data tends to exhibit temporal locality, it is rarely necessary to index every block in the storage system. Were this not so, the cost of index memory would outstrip the savings of reduced storage costs from deduplication. Index size requirements are more closely related to the rate of data ingestion. For example, consider a storage system with 100 TB of total capacity but with an ingestion rate of 1 TB per week. With a deduplication window of 4 TB, UDS can detect most redundancy among the data written within the last month. UDS's Sparse Indexing feature (the recommended mode for VDO) further exploits temporal locality by attempting to retain only the most relevant index entries in memory. UDS can maintain a deduplication window that is ten times larger while using the same amount of memory. While the sparse index provides the greatest coverage, the dense index provides more advice. For most workloads, given the same amount of memory, the difference in deduplication rates between dense and sparse indexes is negligible. The memory required for the index is determined by the desired size of the deduplication window: For a dense index, UDS will provide a deduplication window of 1 TB per 1 GB of RAM. A 1 GB index is generally sufficient for storage systems of up to 4 TB. For a sparse index, UDS will provide a deduplication window of 10 TB per 1 GB of RAM. A 1 GB sparse index is generally sufficient for up to 40 TB of physical storage. For concrete examples of UDS Index memory requirements, see Section 30.2.3, "Examples of VDO System Requirements by Physical Volume Size" 30.2.2. VDO Storage Space Requirements VDO requires storage space both for VDO metadata and for the actual UDS deduplication index: VDO writes two types of metadata to its underlying physical storage: The first type scales with the physical size of the VDO volume and uses approximately 1 MB for each 4 GB of physical storage plus an additional 1 MB per slab. The second type scales with the logical size of the VDO volume and consumes approximately 1.25 MB for each 1 GB of logical storage, rounded up to the nearest slab. See Section 30.1.3, "VDO Volume" for a description of slabs. The UDS index is stored within the VDO volume group and is managed by the associated VDO instance. The amount of storage required depends on the type of index and the amount of RAM allocated to the index. For each 1 GB of RAM, a dense UDS index will use 17 GB of storage, and a sparse UDS index will use 170 GB of storage. For concrete examples of VDO storage requirements, see Section 30.2.3, "Examples of VDO System Requirements by Physical Volume Size" 30.2.3. Examples of VDO System Requirements by Physical Volume Size The following tables provide approximate system requirements of VDO based on the size of the underlying physical volume. Each table lists requirements appropriate to the intended deployment, such as primary storage or backup storage. The exact numbers depend on your configuration of the VDO volume. Primary Storage Deployment In the primary storage case, the UDS index is between 0.01% to 25% the size of the physical volume. Table 30.2. 
VDO Storage and Memory Requirements for Primary Storage

| Physical Volume Size | 10 GB - 1 TB | 2-10 TB | 11-50 TB | 51-100 TB | 101-256 TB |
|---|---|---|---|---|---|
| RAM Usage | 250 MB | Dense: 1 GB / Sparse: 250 MB | 2 GB | 3 GB | 12 GB |
| Disk Usage | 2.5 GB | Dense: 10 GB / Sparse: 22 GB | 170 GB | 255 GB | 1020 GB |
| Index Type | Dense | Dense or Sparse | Sparse | Sparse | Sparse |

Backup Storage Deployment In the backup storage case, the UDS index covers the size of the backup set but is not bigger than the physical volume. If you expect the backup set or the physical size to grow in the future, factor this into the index size. Table 30.3. VDO Storage and Memory Requirements for Backup Storage

| Physical Volume Size | 10 GB - 1 TB | 2-10 TB | 11-50 TB | 51-100 TB | 101-256 TB |
|---|---|---|---|---|---|
| RAM Usage | 250 MB | 2 GB | 10 GB | 20 GB | 26 GB |
| Disk Usage | 2.5 GB | 170 GB | 850 GB | 1700 GB | 3400 GB |
| Index Type | Dense | Sparse | Sparse | Sparse | Sparse |

| null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/vdo-qs-requirements |
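The RAM and disk figures in these tables follow directly from the per-unit numbers quoted in Section 30.2.1 and Section 30.2.2, so they can be reproduced for volume sizes that fall between the table columns. The helper below is a back-of-the-envelope sketch, not a supported tool; in particular, the 2 GB default slab size is an assumption and should be replaced with the slab size actually configured on your VDO volume.

```python
def vdo_module_ram_mb(physical_tb):
    # 370 MB base plus 268 MB per 1 TB of managed physical storage (Section 30.2)
    return 370 + 268 * physical_tb

def vdo_metadata_mb(physical_gb, logical_gb, slab_gb=2):
    # ~1 MB per 4 GB of physical storage, plus 1 MB per slab,
    # plus ~1.25 MB per 1 GB of logical storage (Section 30.2.2)
    slabs = physical_gb / slab_gb
    return physical_gb / 4 + slabs + 1.25 * logical_gb

def uds_index_disk_gb(index_ram_gb, sparse=True):
    # dense index: 17 GB of disk per 1 GB of RAM; sparse: 170 GB per 1 GB of RAM
    return index_ram_gb * (170 if sparse else 17)

def uds_dedup_window_tb(index_ram_gb, sparse=True):
    # dense index: 1 TB window per 1 GB of RAM; sparse: 10 TB per 1 GB of RAM
    return index_ram_gb * (10 if sparse else 1)

# Example: 10 TB physical, 30 TB logical, 1 GB sparse UDS index
print(vdo_module_ram_mb(10))                                    # ~3050 MB for the VDO module
print(round(vdo_metadata_mb(10 * 1024, 30 * 1024) / 1024, 1))   # ~45 GB of VDO metadata
print(uds_index_disk_gb(1))                                     # 170 GB of disk for the index
print(uds_dedup_window_tb(1))                                   # 10 TB deduplication window
```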
6.4. Using Maven Dependencies for Red Hat JBoss Data Virtualization | 6.4. Using Maven Dependencies for Red Hat JBoss Data Virtualization In order to use the correct Maven dependencies in your Red Hat JBoss Data Virtualization project, you must add relevant Bill Of Materials (BOM) and parent POM files to the project's pom.xml file. Adding the BOM and parent POM files ensures that the correct versions of plug-ins and transitive dependencies from the provided Maven repositories are included in the project. The Maven repository is designed to be used only in combination with Maven Central and no other repositories are required. The parent POM file to use is org.jboss.dv.component.management:dv-parent-[VERSION].pom . The BOM file to use is org.jboss.dv.component.management:dv-dependency-management-all-[VERSION].pom . <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <!-- Example POM file using the DV 6.4.0 and EAP 6.4 component versions. - Parent is set to the DV 6.4.0 parent management POM, which will - bring in the correct toolchain (plugin) versions. - DependencyManagement dependencies include the DV 6.4.0 and EAP 6.4 - BOMs - which will bring in the correct compile-time (and other - scoped) versions. --> <name>Example POM for DV 6.4.0</name> <groupId>org.jboss.dv</groupId> <artifactId>dv-example</artifactId> <version>0.0.1</version> <packaging>pom</packaging> <parent> <!-- DV version (parent) --> <groupId>org.jboss.dv.component.management</groupId> <artifactId>dv-parent</artifactId> <version>[VERSION]</version> </parent> <dependencyManagement> <dependencies> <!-- DV BOM --> <dependency> <groupId>org.jboss.dv.component.management</groupId> <artifactId>dv-dependency-management-all</artifactId> <version>[VERSION]</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> </project> | [
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\"> <modelVersion>4.0.0</modelVersion> <!-- Example POM file using the DV 6.4.0 and EAP 6.4 component versions. - Parent is set to the DV 6.4.0 parent management POM, which will - bring in the correct toolchain (plugin) versions. - DependencyManagement dependencies include the DV 6.4.0 and EAP 6.4 - BOMs - which will bring in the correct compile-time (and other - scoped) versions. --> <name>Example POM for DV 6.4.0</name> <groupId>org.jboss.dv</groupId> <artifactId>dv-example</artifactId> <version>0.0.1</version> <packaging>pom</packaging> <parent> <!-- DV version (parent) --> <groupId>org.jboss.dv.component.management</groupId> <artifactId>dv-parent</artifactId> <version>[VERSION]</version> </parent> <dependencyManagement> <dependencies> <!-- DV BOM --> <dependency> <groupId>org.jboss.dv.component.management</groupId> <artifactId>dv-dependency-management-all</artifactId> <version>[VERSION]</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> </project>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/using_maven_dependencies_for_red_hat_jboss_data_virtualization |
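Once the parent and the imported BOM are in place, dependencies that the BOM manages can be declared without a <version> element, because the version is inherited from dv-dependency-management-all. The artifact below only illustrates the pattern; substitute the group and artifact IDs your project actually needs and confirm that the BOM manages them.

```xml
<dependencies>
  <!-- Version omitted on purpose: it is supplied by the imported BOM. -->
  <dependency>
    <groupId>org.jboss.teiid</groupId>
    <artifactId>teiid-api</artifactId>
    <scope>provided</scope>
  </dependency>
</dependencies>
```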
Chapter 6. Using the Node Tuning Operator | Chapter 6. Using the Node Tuning Operator Learn about the Node Tuning Operator and how you can use it to manage node-level tuning by orchestrating the tuned daemon. 6.1. About the Node Tuning Operator The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. Note Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. 6.2. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification. Procedure Run the following command to access an example Node Tuning Operator specification: oc get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for the OpenShift Container Platform platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities. Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. 
The pod label functionality will be deprecated in future versions of the Node Tuning Operator. 6.3. Default profiles set on a cluster The following are the default profiles set on a cluster. apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40 Starting with OpenShift Container Platform 4.9, all OpenShift TuneD profiles are shipped with the TuneD package. You can use the oc exec command to view the contents of these profiles: USD oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \; 6.4. Verifying that the TuneD profiles are applied Verify the TuneD profiles that are applied to your cluster node. USD oc get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator Example output NAME TUNED APPLIED DEGRADED AGE master-0 openshift-control-plane True False 6h33m master-1 openshift-control-plane True False 6h33m master-2 openshift-control-plane True False 6h33m worker-a openshift-node True False 6h28m worker-b openshift-node True False 6h28m NAME : Name of the Profile object. There is one Profile object per node and their names match. TUNED : Name of the desired TuneD profile to apply. APPLIED : True if the TuneD daemon applied the desired profile. ( True/False/Unknown ). DEGRADED : True if any errors were reported during application of the TuneD profile ( True/False/Unknown ). AGE : Time elapsed since the creation of Profile object. 6.5. Custom tuning specification The custom resource (CR) for the Operator has two major sections. The first section, profile: , is a list of TuneD profiles and their names. The second, recommend: , defines the profile selection logic. Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists TuneD profiles and their names. profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD # ... - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. 
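As a simple illustration before the formal description that follows, a recommend: entry that applies a profile unconditionally (no match: section, so it always matches) might look like this sketch; the profile name and priority value are placeholders:
recommend:
- profile: tuned_profile_1
  priority: 15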
The recommend: section is a list of items to recommend the profiles based on a selection criteria. recommend: <recommend-item-1> # ... <recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false . 9 Turn reapply_sysctl functionality on or off for the TuneD daemon. Options are true for on and false for off. <match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered. Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. Example: node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. 
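After applying a CR of this kind, you can confirm which profile the priority evaluation actually selected for each node by listing the Profile objects, exactly as in the verification step shown earlier:
$ oc get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator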
The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. This profile is applied if the containerized TuneD pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . It lacks the <match> section and, therefore, will always match. It acts as a profile catch-all to set openshift-node profile, if no other profile with higher priority matches on a given node. Example: machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. Cloud provider-specific TuneD profiles With this functionality, all Cloud provider-specific nodes can conveniently be assigned a TuneD profile specifically tailored to a given Cloud provider on a OpenShift Container Platform cluster. This can be accomplished without adding additional node labels or grouping nodes into machine config pools. This functionality takes advantage of spec.providerID node object values in the form of <cloud-provider>://<cloud-provider-specific-id> and writes the file /var/lib/tuned/provider with the value <cloud-provider> in NTO operand containers. The content of this file is then used by TuneD to load provider-<cloud-provider> profile if such profile exists. The openshift profile that both openshift-control-plane and openshift-node profiles inherit settings from is now updated to use this functionality through the use of conditional profile loading. Neither NTO nor TuneD currently ship any Cloud provider-specific profiles. However, it is possible to create a custom profile provider-<cloud-provider> that will be applied to all Cloud provider-specific cluster nodes. Example GCE Cloud provider profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce Note Due to profile inheritance, any setting specified in the provider-<cloud-provider> profile will be overwritten by the openshift profile and its child profiles. 6.6. 
Custom tuning examples Using TuneD profiles from the default CR The following CR applies custom node-level tuning for OpenShift Container Platform nodes with label tuned.openshift.io/ingress-node-label set to any value. Example: custom tuning using the openshift-control-plane TuneD profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: ingress namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=A custom OpenShift ingress profile include=openshift-control-plane [sysctl] net.ipv4.ip_local_port_range="1024 65535" net.ipv4.tcp_tw_reuse=1 name: openshift-ingress recommend: - match: - label: tuned.openshift.io/ingress-node-label priority: 10 profile: openshift-ingress Important Custom profile writers are strongly encouraged to include the default TuneD daemon profiles shipped within the default Tuned CR. The example above uses the default openshift-control-plane profile to accomplish this. Using built-in TuneD profiles Given the successful rollout of the NTO-managed daemon set, the TuneD operands all manage the same version of the TuneD daemon. To list the built-in TuneD profiles supported by the daemon, query any TuneD pod in the following way: USD oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/ -name tuned.conf -printf '%h\n' | sed 's|^.*/||' You can use the profile names retrieved by this in your custom tuning specification. Example: using built-in hpc-compute TuneD profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-hpc-compute namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile for HPC compute workloads include=openshift-node,hpc-compute name: openshift-node-hpc-compute recommend: - match: - label: tuned.openshift.io/openshift-node-hpc-compute priority: 20 profile: openshift-node-hpc-compute In addition to the built-in hpc-compute profile, the example above includes the openshift-node TuneD daemon profile shipped within the default Tuned CR to use OpenShift-specific tuning for compute nodes. 6.7. Supported TuneD daemon plugins Excluding the [main] section, the following TuneD plugins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm bootloader There is some dynamic tuning functionality provided by some of these plugins that is not supported. The following TuneD plugins are currently not supported: script systemd Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Additional resources Available TuneD Plugins Getting Started with TuneD 6.8. Configuring node tuning in a hosted cluster Important Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To set node-level tuning on the nodes in your hosted cluster, you can use the Node Tuning Operator. 
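At a high level, the flow is to wrap a Tuned manifest in a ConfigMap on the management cluster and then point the node pool at that ConfigMap. A minimal illustration of the node pool reference is shown below; the full procedure, including the ConfigMap contents, follows:
spec:
  tuningConfig:
  - name: tuned-1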
In hosted control planes, you can configure node tuning by creating config maps that contain Tuned objects and referencing those config maps in your node pools. Procedure Create a config map that contains a valid tuned manifest, and reference the manifest in a node pool. In the following example, a Tuned manifest defines a profile that sets vm.dirty_ratio to 55 on nodes that contain the tuned-1-node-label node label with any value. Save the following ConfigMap manifest in a file named tuned-1.yaml : apiVersion: v1 kind: ConfigMap metadata: name: tuned-1 namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: tuned-1 namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift profile include=openshift-node [sysctl] vm.dirty_ratio="55" name: tuned-1-profile recommend: - priority: 20 profile: tuned-1-profile Note If you do not add any labels to an entry in the spec.recommend section of the Tuned spec, node-pool-based matching is assumed, so the highest priority profile in the spec.recommend section is applied to nodes in the pool. Although you can achieve more fine-grained node-label-based matching by setting a label value in the Tuned .spec.recommend.match section, node labels will not persist during an upgrade unless you set the .spec.management.upgradeType value of the node pool to InPlace . Create the ConfigMap object in the management cluster: USD oc --kubeconfig="USDMGMT_KUBECONFIG" create -f tuned-1.yaml Reference the ConfigMap object in the spec.tuningConfig field of the node pool, either by editing a node pool or creating one. In this example, assume that you have only one NodePool , named nodepool-1 , which contains 2 nodes. apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: ... name: nodepool-1 namespace: clusters ... spec: ... tuningConfig: - name: tuned-1 status: ... Note You can reference the same config map in multiple node pools. In hosted control planes, the Node Tuning Operator appends a hash of the node pool name and namespace to the name of the Tuned CRs to distinguish them. Outside of this case, do not create multiple TuneD profiles of the same name in different Tuned CRs for the same hosted cluster. Verification Now that you have created the ConfigMap object that contains a Tuned manifest and referenced it in a NodePool , the Node Tuning Operator syncs the Tuned objects into the hosted cluster. You can verify which Tuned objects are defined and which TuneD profiles are applied to each node. List the Tuned objects in the hosted cluster: USD oc --kubeconfig="USDHC_KUBECONFIG" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator Example output NAME AGE default 7m36s rendered 7m36s tuned-1 65s List the Profile objects in the hosted cluster: USD oc --kubeconfig="USDHC_KUBECONFIG" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator Example output NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 tuned-1-profile True False 7m43s nodepool-1-worker-2 tuned-1-profile True False 7m14s Note If no custom profiles are created, the openshift-node profile is applied by default. To confirm that the tuning was applied correctly, start a debug shell on a node and check the sysctl values: USD oc --kubeconfig="USDHC_KUBECONFIG" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio Example output vm.dirty_ratio = 55 6.9. 
Advanced node tuning for hosted clusters by setting kernel boot parameters Important Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . For more advanced tuning in hosted control planes, which requires setting kernel boot parameters, you can also use the Node Tuning Operator. The following example shows how you can create a node pool with huge pages reserved. Procedure Create a ConfigMap object that contains a Tuned object manifest for creating 10 huge pages that are 2 MB in size. Save this ConfigMap manifest in a file named tuned-hugepages.yaml : apiVersion: v1 kind: ConfigMap metadata: name: tuned-hugepages namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 name: openshift-node-hugepages recommend: - priority: 20 profile: openshift-node-hugepages Note The .spec.recommend.match field is intentionally left blank. In this case, this Tuned object is applied to all nodes in the node pool where this ConfigMap object is referenced. Group nodes with the same hardware configuration into the same node pool. Otherwise, TuneD operands can calculate conflicting kernel parameters for two or more nodes that share the same node pool. Create the ConfigMap object in the management cluster: USD oc --kubeconfig="USDMGMT_KUBECONFIG" create -f tuned-hugepages.yaml Create a NodePool manifest YAML file, customize the upgrade type of the NodePool , and reference the ConfigMap object that you created in the spec.tuningConfig section. Create the NodePool manifest and save it in a file named hugepages-nodepool.yaml by using the hypershift CLI: NODEPOOL_NAME=hugepages-example INSTANCE_TYPE=m5.2xlarge NODEPOOL_REPLICAS=2 hypershift create nodepool aws \ --cluster-name USDCLUSTER_NAME \ --name USDNODEPOOL_NAME \ --node-count USDNODEPOOL_REPLICAS \ --instance-type USDINSTANCE_TYPE \ --render > hugepages-nodepool.yaml In the hugepages-nodepool.yaml file, set .spec.management.upgradeType to InPlace , and set .spec.tuningConfig to reference the tuned-hugepages ConfigMap object that you created. apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: hugepages-nodepool namespace: clusters ... spec: management: ... upgradeType: InPlace ... tuningConfig: - name: tuned-hugepages Note To avoid the unnecessary re-creation of nodes when you apply the new MachineConfig objects, set .spec.management.upgradeType to InPlace . If you use the Replace upgrade type, nodes are fully deleted and new nodes can replace them when you apply the new kernel boot parameters that the TuneD operand calculated. 
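If you are adjusting an existing node pool rather than rendering a new manifest, the same fields can be set with a patch along these lines; this is only a sketch, so confirm the resource name and field paths against your HyperShift version before using it:
$ oc --kubeconfig="$MGMT_KUBECONFIG" patch nodepool hugepages-nodepool -n clusters --type merge -p '{"spec":{"management":{"upgradeType":"InPlace"},"tuningConfig":[{"name":"tuned-hugepages"}]}}'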
Create the NodePool in the management cluster: USD oc --kubeconfig="USDMGMT_KUBECONFIG" create -f hugepages-nodepool.yaml Verification After the nodes are available, the containerized TuneD daemon calculates the required kernel boot parameters based on the applied TuneD profile. After the nodes are ready and reboot once to apply the generated MachineConfig object, you can verify that the TuneD profile is applied and that the kernel boot parameters are set. List the Tuned objects in the hosted cluster: USD oc --kubeconfig="USDHC_KUBECONFIG" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator Example output NAME AGE default 123m hugepages-8dfb1fed 1m23s rendered 123m List the Profile objects in the hosted cluster: USD oc --kubeconfig="USDHC_KUBECONFIG" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator Example output NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 openshift-node True False 132m nodepool-1-worker-2 openshift-node True False 131m hugepages-nodepool-worker-1 openshift-node-hugepages True False 4m8s hugepages-nodepool-worker-2 openshift-node-hugepages True False 3m57s Both of the worker nodes in the new NodePool have the openshift-node-hugepages profile applied. To confirm that the tuning was applied correctly, start a debug shell on a node and check /proc/cmdline . USD oc --kubeconfig="USDHC_KUBECONFIG" debug node/nodepool-1-worker-1 -- chroot /host cat /proc/cmdline Example output BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-... hugepagesz=2M hugepages=50 Additional resources For more information about hosted control planes, see Hosted control planes for Red Hat OpenShift Container Platform (Technology Preview) . | [
"get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;",
"oc get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator",
"NAME TUNED APPLIED DEGRADED AGE master-0 openshift-control-plane True False 6h33m master-1 openshift-control-plane True False 6h33m master-2 openshift-control-plane True False 6h33m worker-a openshift-node True False 6h28m worker-b openshift-node True False 6h28m",
"profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: ingress namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=A custom OpenShift ingress profile include=openshift-control-plane [sysctl] net.ipv4.ip_local_port_range=\"1024 65535\" net.ipv4.tcp_tw_reuse=1 name: openshift-ingress recommend: - match: - label: tuned.openshift.io/ingress-node-label priority: 10 profile: openshift-ingress",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/ -name tuned.conf -printf '%h\\n' | sed 's|^.*/||'",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-hpc-compute namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile for HPC compute workloads include=openshift-node,hpc-compute name: openshift-node-hpc-compute recommend: - match: - label: tuned.openshift.io/openshift-node-hpc-compute priority: 20 profile: openshift-node-hpc-compute",
"apiVersion: v1 kind: ConfigMap metadata: name: tuned-1 namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: tuned-1 namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift profile include=openshift-node [sysctl] vm.dirty_ratio=\"55\" name: tuned-1-profile recommend: - priority: 20 profile: tuned-1-profile",
"oc --kubeconfig=\"USDMGMT_KUBECONFIG\" create -f tuned-1.yaml",
"apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: tuningConfig: - name: tuned-1 status:",
"oc --kubeconfig=\"USDHC_KUBECONFIG\" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator",
"NAME AGE default 7m36s rendered 7m36s tuned-1 65s",
"oc --kubeconfig=\"USDHC_KUBECONFIG\" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator",
"NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 tuned-1-profile True False 7m43s nodepool-1-worker-2 tuned-1-profile True False 7m14s",
"oc --kubeconfig=\"USDHC_KUBECONFIG\" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio",
"vm.dirty_ratio = 55",
"apiVersion: v1 kind: ConfigMap metadata: name: tuned-hugepages namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 name: openshift-node-hugepages recommend: - priority: 20 profile: openshift-node-hugepages",
"oc --kubeconfig=\"USDMGMT_KUBECONFIG\" create -f tuned-hugepages.yaml",
"NODEPOOL_NAME=hugepages-example INSTANCE_TYPE=m5.2xlarge NODEPOOL_REPLICAS=2 hypershift create nodepool aws --cluster-name USDCLUSTER_NAME --name USDNODEPOOL_NAME --node-count USDNODEPOOL_REPLICAS --instance-type USDINSTANCE_TYPE --render > hugepages-nodepool.yaml",
"apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: hugepages-nodepool namespace: clusters spec: management: upgradeType: InPlace tuningConfig: - name: tuned-hugepages",
"oc --kubeconfig=\"USDMGMT_KUBECONFIG\" create -f hugepages-nodepool.yaml",
"oc --kubeconfig=\"USDHC_KUBECONFIG\" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator",
"NAME AGE default 123m hugepages-8dfb1fed 1m23s rendered 123m",
"oc --kubeconfig=\"USDHC_KUBECONFIG\" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator",
"NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 openshift-node True False 132m nodepool-1-worker-2 openshift-node True False 131m hugepages-nodepool-worker-1 openshift-node-hugepages True False 4m8s hugepages-nodepool-worker-2 openshift-node-hugepages True False 3m57s",
"oc --kubeconfig=\"USDHC_KUBECONFIG\" debug node/nodepool-1-worker-1 -- chroot /host cat /proc/cmdline",
"BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-... hugepagesz=2M hugepages=50"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/scalability_and_performance/using-node-tuning-operator |
Logging | Logging OpenShift Container Platform 4.12 Configuring and using logging in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/logging/index |
Scalability and performance | Scalability and performance OpenShift Container Platform 4.14 Scaling your OpenShift Container Platform cluster and tuning performance in production environments Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/scalability_and_performance/index |
Chapter 17. configuration | Chapter 17. configuration This chapter describes the commands under the configuration command. 17.1. configuration show Display configuration details Usage: Table 17.1. Command arguments Value Summary -h, --help Show this help message and exit --mask Attempt to mask passwords (default) --unmask Show password in clear text Table 17.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 17.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 17.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 17.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack configuration show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--mask | --unmask]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/configuration |
3.5. Configuring FTP | 3.5. Configuring FTP File Transport Protocol (FTP) is an old and complex multi-port protocol that presents a distinct set of challenges to an Load Balancer Add-On environment. To understand the nature of these challenges, you must first understand some key things about how FTP works. 3.5.1. How FTP Works With most other server client relationships, the client machine opens up a connection to the server on a particular port and the server then responds to the client on that port. When an FTP client connects to an FTP server it opens a connection to the FTP control port 21. Then the client tells the FTP server whether to establish an active or passive connection. The type of connection chosen by the client determines how the server responds and on what ports transactions will occur. The two types of data connections are: Active Connections When an active connection is established, the server opens a data connection to the client from port 20 to a high range port on the client machine. All data from the server is then passed over this connection. Passive Connections When a passive connection is established, the client asks the FTP server to establish a passive connection port, which can be on any port higher than 10,000. The server then binds to this high-numbered port for this particular session and relays that port number back to the client. The client then opens the newly bound port for the data connection. Each data request the client makes results in a separate data connection. Most modern FTP clients attempt to establish a passive connection when requesting data from servers. Note The client determines the type of connection, not the server. This means to effectively cluster FTP, you must configure the LVS routers to handle both active and passive connections. The FTP client-server relationship can potentially open a large number of ports that the Piranha Configuration Tool and IPVS do not know about. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s1-lvs-ftp-vsa |
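To make the FTP discussion above concrete: a common way to let the LVS routers treat the FTP control port and the passive data ports as a single virtual service is to assign them a shared firewall mark with iptables. The rules below are only a hedged sketch; the 192.168.0.0/24 network, the mark value 21, and the 10000:20000 passive port range are placeholder assumptions rather than values taken from this guide:
iptables -t mangle -A PREROUTING -p tcp -d 192.168.0.0/24 --dport 21 -j MARK --set-mark 21
iptables -t mangle -A PREROUTING -p tcp -d 192.168.0.0/24 --dport 10000:20000 -j MARK --set-mark 21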
Chapter 19. Flow Control | Chapter 19. Flow Control Flow control can be used to limit the flow of messaging data between a client and server so that messaging participants are not overwhelmed. You can manage the flow of data from both the consumer side and the producer side. 19.1. Consumer Flow Control JBoss EAP messaging includes configuration that defines how much data to pre-fetch on behalf of consumers and that controls the rate at which consumers can consume messages. Window-based Flow Control JBoss EAP messaging pre-fetches messages into a buffer on each consumer. The size of the buffer is determined by the consumer-window-size attribute of a connection-factory . The example configuration below shows a connection-factory with the consumer-window-size attribute explicitly set. <connection-factory name="MyConnFactory" ... consumer-window-size="1048576" /> Use the management CLI to read and write the value of consumer-window-size attribute for a given connection-factory . The examples below show how this done using the InVmConnectionFactory connection factory, which is the default for consumers residing in the same virtual machine as the server, for example, a local MessageDrivenBean . Read the consumer-window-size attribute of the InVmConnectionFactory from the management CLI Write the consumer-window-size attribute from the management CLI The value for consumer-window-size must be an integer. Some values have special meaning as noted in the table below. Table 19.1. Values for consumer-window-size Value Description n An integer value used to set the buffer's size to n bytes. The default is 1048576 , which should be fine in most cases. Benchmarking will help you find an optimal value for the window size if the default value is not adequate. 0 Turns off buffering. This can help with slow consumers and can give deterministic distribution across multiple consumers. -1 Creates an unbounded buffer. This can help facilitate very fast consumers that pull and process messages as quickly as they are received. Warning Setting consumer-window-size to -1 can overflow the client memory if the consumer is not able to process messages as fast as it receives them. If you are using the core API, the consumer window size can be set from the ServerLocator using its setConsumerWindowSize() method. If you are using Jakarta Messaging, the client can specify the consumer window size by using the setConsumerWindowSize() method of the instantiated ConnectionFactory . Rate-limited Flow Control JBoss EAP messaging can regulate the rate of messages consumed per second, a flow control method known as throttling. Use the consumer-max-rate attribute of the appropriate connection-factory to ensure that a consumer never consumes messages at a rate faster than specified. <connection-factory name="MyConnFactory" ... consumer-max-rate="10" /> The default value is -1 , which disables rate limited flow control. The management CLI is the recommended way to read and write the consumer-max-rate attribute. The examples below show how this done using the InVmConnectionFactory connection factory, which is the default for consumers residing in the same virtual machine as the server, e.g. a local MessageDrivenBean . Read the consumer-max-rate attribute using the management CLI Write the consumer-max-rate attribute using the management CLI: If you are using Jakarta Messaging the max rate size can be set using setConsumerMaxRate(int consumerMaxRate) method of the instantiated ConnectionFactory . 
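For example, a standalone Jakarta Messaging client that looks up the connection factory over JNDI and applies both consumer flow-control settings could look like the following sketch; the JNDI name, the property values, and the cast to the Artemis implementation class are illustrative assumptions rather than settings mandated by this chapter:
import javax.naming.InitialContext;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ConsumerFlowControlClient {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // assumes jndi.properties points at the EAP server
        ActiveMQConnectionFactory cf =
                (ActiveMQConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
        cf.setConsumerWindowSize(1048576); // pre-fetch up to 1 MiB per consumer
        cf.setConsumerMaxRate(10);         // throttle each consumer to 10 messages per second
        // create connections, sessions, and consumers from cf as usual
    }
}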
If you are using the Core API the rate can be set with the ServerLocator.setConsumerMaxRate(int consumerMaxRate) method. 19.2. Producer Flow Control JBoss EAP messaging can also limit the amount of data sent from a client in order to prevent the server from receiving too many messages. Window-based Flow Control JBoss EAP messaging regulates message producers by using an exchange of credits. Producers can send messages to an address as long as they have sufficient credits to do so. The amount of credits required to send a message is determined by its size. As producers run low on credits, they must request more from the server. Within the server configuration, the amount of credits a producer can request at one time is known as the producer-window-size , an attribute of the connection-factory element: <connection-factory name="MyConnFactory" ... producer-window-size="1048576" /> The window size determines the amount of bytes that can be in-flight at any one time, thus preventing the remote connection from overloading the server. Use the management CLI to read and write the producer-window-size attribute of a given connection factory. The examples below use the RemoteConnectionFactory , which is included in the default configuration and intended for use by remote clients. Read the producer-window-size attribute using the management CLI: Write the producer-window-size attribute using the management CLI: If you are using Jakarta Messaging, the client can call the setProducerWindowSize(int producerWindowSize) method of the ConnectionFactory to set the window size directly. If you are using the core API, the window size can be set using the setProducerWindowSize(int producerWindowSize) method of the ServerLocator . Blocking Producer Window-based Flow Control Typically, the messaging server always provides the same number of credits that was requested. However, it is possible to limit the number of credits sent by the server, which can prevent it from running out of memory due to producers sending more messages than can be handled at one time. For example, if you have a Jakarta Messaging queue called myqueue and you set the maximum memory size to 10MB, the server will regulate the number of messages in the queue so that its size never exceeds 10MB. When the address gets full, producers will block on the client side until sufficient space is freed up on the address. Note Blocking producer flow control is an alternative approach to paging, which does not block producers but instead pages messages to storage. See About Paging for more information. The address-setting configuration element contains the configuration for managing blocking producer flow control. An address-setting is used to apply a set of configuration to all queues registered to that address. See Configuring Address Settings for more information on how this is done. For each address-setting requiring blocking producer flow control, you must include a value for the max-size-bytes attribute. The total memory for all queues bound to that address cannot exceed max-size-bytes . In the case of Jakarta Messaging topics, this means the total memory of all subscriptions in the topic cannot exceed max-size-bytes . You must also set the address-full-policy attribute to BLOCK so the server knows that producers should be blocked if max-size-bytes is reached. Below is an example address-setting with both attributes set: <address-setting ... 
name="myqueue" address-full-policy="BLOCK" max-size-bytes="100000" /> The above example would set the maximum size of the Jakarta Messaging queue "myqueue" to 100000 bytes. Producers will be blocked from sending to that address once it has reached its maximum size. Use the management CLI to set these attributes, as in the examples below: Set max-size-bytes for a specified address-setting Set address-full-policy for a specified address-setting Rate-limited Flow Control JBoss EAP messaging limits the number of messages a producer can send per second if you specify a producer-max-rate for the connection-factory it uses, as in the example below: <connection-factory name="MyConnFactory" producer-max-rate="1000" /> The default value is -1 , which disables rate limited flow control. Use the management CLI to read and write the value for producer-max-rate . The examples below use the RemoteConnectionFactory , which is included in the default configuration and intended for use by remote clients. Read the value of the producer-max-rate attribute: Write the value of a producer-max-rate attribute: If you use the core API, set the rate by using the method ServerLocator.setProducerMaxRate(int producerMaxRate) . If you are using JNDI to instantiate and look up the connection factory, the max rate can be set on the client using the setProducerMaxRate(int producerMaxRate) method of the instantiated connection factory. | [
"<connection-factory name=\"MyConnFactory\" ... consumer-window-size=\"1048576\" />",
"/subsystem=messaging-activemq/server=default/connection-factory=InVmConnectionFactory:read-attribute(name=consumer-window-size) { \"outcome\" => \"success\", \"result\" => 1048576 }",
"/subsystem=messaging-activemq/server=default/connection-factory=InVmConnectionFactory:write-attribute(name=consumer-window-size,value=1048576) {\"outcome\" => \"success\"}",
"<connection-factory name=\"MyConnFactory\" ... consumer-max-rate=\"10\" />",
"/subsystem=messaging-activemq/server=default/connection-factory=InVmConnectionFactory:read-attribute(name=consumer-max-rate) { \"outcome\" => \"success\", \"result\" => -1 }",
"/subsystem=messaging-activemq/server=default/connection-factory=InVmConnectionFactory:write-attribute(name=consumer-max-rate,value=100) {\"outcome\" => \"success\"}",
"<connection-factory name=\"MyConnFactory\" ... producer-window-size=\"1048576\" />",
"subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:read-attribute(name=producer-window-size) { \"outcome\" => \"success\", \"result\" => 65536 }",
"/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:write-attribute(name=producer-window-size,value=65536) {\"outcome\" => \"success\"}",
"<address-setting name=\"myqueue\" address-full-policy=\"BLOCK\" max-size-bytes=\"100000\" />",
"/subsystem=messaging-activemq/server=default/address-setting=myqueue:write-attribute(name=max-size-bytes,value=100000) {\"outcome\" => \"success\"}",
"/subsystem=messaging-activemq/server=default/address-setting=myqueue:write-attribute(name=address-full-policy,value=BLOCK) {\"outcome\" => \"success\"}",
"<connection-factory name=\"MyConnFactory\" producer-max-rate=\"1000\" />",
"/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:read-attribute(name=producer-max-rate) { \"outcome\" => \"success\", \"result\" => -1 }",
"/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:write-attribute(name=producer-max-rate,value=100) {\"outcome\" => \"success\"}"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/flow_control |
Chapter 5. Installing a cluster on GCP with network customizations | Chapter 5. Installing a cluster on GCP with network customizations In OpenShift Container Platform version 4.16, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Google Cloud Platform (GCP). By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . 
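After extracting the archive, and before you work with the pull secret, you can optionally confirm that the extracted binary matches the release you intend to install by printing its version; the exact output varies by release:
$ ./openshift-install version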
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 5.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 5.5.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.1. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 5.5.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 5.1. Machine series A2 A3 C2 C2D C3 C3D E2 M1 N1 N2 N2D Tau T2D 5.5.3. Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 5.2. Machine series for 64-bit ARM machines Tau T2A 5.5.4. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 5.5.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. 
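After installation, one way to spot-check that the shielded-VM options were actually applied to a cluster instance is to inspect its shielded instance configuration with the gcloud CLI; the instance name, zone, and output filter below are illustrative assumptions and may need adjusting for your environment:
$ gcloud compute instances describe <cluster_name>-master-0 --zone us-central1-a --format="yaml(shieldedInstanceConfig)"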
For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 5.5.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 5.5.7. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: 16 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 17 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 18 region: us-central1 19 defaultMachinePlatform: tags: 20 - global-tag1 - global-tag2 osImage: 21 project: example-project-name name: example-image-name pullSecret: '{"auths": ...}' 22 fips: false 23 sshKey: ssh-ed25519 AAAA... 24 1 15 18 19 22 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 9 16 If you do not provide these parameters and values, the installation program provides the default value. 4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 20 Optional: A set of network tags to apply to the control plane or compute machine sets. 
The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 21 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 17 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 23 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 24 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources Enabling customer-managed encryption keys for a compute machine set 5.5.8. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 
2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.6. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. 
Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 5.7. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 5.7.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 5.3. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 5.7.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 5.7.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have added one of the following authentication options to the GCP account that the installation program uses: The IAM Workload Identity Pool Admin role. The following granular permissions: Example 5.4. 
Required GCP permissions compute.projects.get iam.googleapis.com/workloadIdentityPoolProviders.create iam.googleapis.com/workloadIdentityPoolProviders.get iam.googleapis.com/workloadIdentityPools.create iam.googleapis.com/workloadIdentityPools.delete iam.googleapis.com/workloadIdentityPools.get iam.googleapis.com/workloadIdentityPools.undelete iam.roles.create iam.roles.delete iam.roles.list iam.roles.undelete iam.roles.update iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.getIamPolicy iam.serviceAccounts.list iam.serviceAccounts.setIamPolicy iam.workloadIdentityPoolProviders.get iam.workloadIdentityPools.delete resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.getIamPolicy storage.buckets.setIamPolicy storage.objects.create storage.objects.delete storage.objects.list Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 5.7.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 5.7.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 5.5. 
Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 5.8. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 5.9. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. 
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute MachineSets : USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 5.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 5.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 5.2. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. 
Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 5.3. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 5.4. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 5.5. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . 
internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 5.6. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 5.7. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 5.8. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. 
For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 5.9. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 5.10. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 5.11. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 5.12. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 5.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 5.14. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: 16 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 17 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 18 region: us-central1 19 defaultMachinePlatform: tags: 20 - global-tag1 - global-tag2 osImage: 21 project: example-project-name name: example-image-name pullSecret: '{\"auths\": ...}' 22 fips: false 23 sshKey: ssh-ed25519 AAAA... 24",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_gcp/installing-gcp-network-customizations |
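The following sketch expands on the manually created Secret objects described in the long-term credentials procedure (section 5.7.1) above. It is an illustration only, not part of the source documentation: the key file name gcp-service-account.json and the manifest file name are hypothetical placeholders, and the name and namespace values must be replaced with the spec.secretRef values from the corresponding CredentialsRequest object.

# Encode the downloaded GCP service account key (GNU coreutils base64; the file name is an assumption)
ENCODED_KEY=$(base64 -w0 gcp-service-account.json)

# Write the Secret manifest into the manifests directory created by openshift-install.
# Replace <component_secret> and <component_namespace> with the spec.secretRef values
# of the matching CredentialsRequest object.
cat > manifests/example-component-credentials.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  service_account.json: ${ENCODED_KEY}
EOF

One Secret of this form is required for each CredentialsRequest object that you extracted from the release image.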
Chapter 2. Installation | Chapter 2. Installation This section outlines the differences between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 5 installation procedures. Depending on which release of Red Hat Enterprise Linux 5 you are migrating from, not all of the options and techniques listed here will be relevant to your environment, because they might already be present in your Red Hat Enterprise Linux 5 environment. 2.1. Kernel and Boot Options You can perform memory testing before you install Red Hat Enterprise Linux by entering memtest86 at the boot: prompt. This option runs the standalone Memtest86 memory testing software in place of the Anaconda system installer. Once started, Memtest86 memory testing loops continually until the Esc key is pressed. The rdloaddriver kernel parameter is now needed to define the order of module loading, instead of the old scsi_hostadapter option. Kernel Modesetting (KMS) is a feature that assigns the responsibility of graphics mode initialization to the kernel, and is enabled by default. KMS enables: Improved graphical boot. Faster fast user switching. Seamless X server switching. Graphical panic messages. KMS can be disabled for all drivers by appending nomodeset to the boot: line when booting the system. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/chap-migration_guide-installation
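A brief sketch of how the boot options described in this chapter might be entered at the installer's boot: prompt. The storage module names passed to rdloaddriver are hypothetical placeholders; substitute the modules that your hardware actually needs.

boot: memtest86                                  # run the standalone Memtest86 memory test instead of Anaconda
boot: linux nomodeset                            # disable Kernel Modesetting for all drivers
boot: linux rdloaddriver=megaraid_sas,mpt2sas    # hypothetical module list; defines the early module load order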
Chapter 5. The User Interface | Chapter 5. The User Interface The automation controller User Interface (UI) provides a graphical framework for your IT orchestration requirements. The navigation panel provides quick access to automation controller resources, such as Projects , Inventories , Job Templates , and Jobs . Note The automation controller UI is also available as a technical preview and is subject to change in future releases. To preview the new UI, click the Enable Preview of New User Interface toggle to On from the Miscellaneous System option of the Settings menu. After saving, logout and log back in to access the new UI from the preview banner. To return to the current UI, click the link on the top banner where indicated. Access your user profile, the About page, view related documentation, or log out using the icons in the page header. You can view the activity stream for that user by clicking the Activity Stream icon. 5.1. Views The automation controller UI provides several options for viewing information. Dashboard view Jobs view Schedules view Activity Stream Workflow Approvals Host Metrics 5.1.1. Dashboard View Use the navigation menu to complete the following tasks: Display different views Navigate to your resources Grant access to users Administer automation controller features in the UI Procedure From the navigation panel, select Views to hide or display the Views options. The dashboard displays a summary of your current Job status . You can filter the job status within a period of time or by job type. You can also display summaries of Recent Jobs and Recent Templates . The Recent Jobs tab displays which jobs were most recently run, their status, and the time at which they were run. The Recent Templates tab displays a summary of the most recently used templates. You can also access this summary by selecting Resources Templates from the navigation panel. Note Click Views Dashboard on the navigation panel, or the Ansible Automation Platform logo at any time to return to the Dashboard. 5.1.2. Jobs view From the navigation panel, select Views Jobs . This view displays the jobs that have run, including projects, templates, management jobs, SCM updates, and playbook runs. 5.1.3. Schedules view From the navigation panel, select Views Schedules . This view shows all the scheduled jobs that are configured. 5.1.4. Activity Stream From the navigation panel, select Views Activity Stream to display Activity Streams. Most screens have an Activity Stream icon. An Activity Stream shows all changes for a particular object. For each change, the Activity Stream shows the time of the event, the user that initiated the event, and the action. The information displayed varies depending on the type of event. Click the Examine icon to display the event log for the change. You can filter the Activity Stream by the initiating user, by system (if it was system initiated), or by any related object, such as a credential, job template, or schedule. The Activity Stream on the main Dashboard shows the Activity Stream for the entire instance. Most pages permit viewing an activity stream filtered for that specific object. 5.1.5. Workflow Approvals From the navigation panel, select Views Workflow Approvals to see your workflow approval queue. The list contains actions that require you to approve or deny before a job can proceed. 5.1.6. 
Host Metrics From the navigation panel, select Host Metrics to see the activity associated with hosts, which includes counts on those that have been automated, used in inventories, and deleted. For further information, see Troubleshooting: Keeping your subscription in compliance . 5.2. Resources Menu The Resources menu provides access to the following components of automation controller: Templates Credentials Projects Inventories Hosts 5.3. Access Menu The Access menu enables you to configure who has permissions to automation controller resources: Organizations Users Teams 5.4. Administration The Administration menu provides access to the administrative options of automation controller. From here, you can create, view, and edit: Credential types Notifications Management_jobs Instance groups Instances Applications Execution environments Topology view 5.5. The Settings menu Configure global and system-level settings using the Settings menu. The Settings menu provides access to automation controller configuration settings. The Settings page enables administrators to configure the following: Authentication Jobs System-level attributes Customize the UI Product license information | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_user_guide/assembly-controller-user-interface |
Chapter 3. Configuring the Red Hat Ceph Storage cluster | Chapter 3. Configuring the Red Hat Ceph Storage cluster To deploy the Red Hat Ceph Storage cluster for your Red Hat OpenStack Platform environment, you must first configure the Red Hat Ceph Storage cluster options for your environment. Configure the Red Hat Ceph Storage cluster options: Configuring time synchronization Configuring the Red Hat Ceph Storage cluster name Configuring network options with the network data file Configuring network options with a configuration file Configuring a CRUSH hierarchy for an OSD Configuring Ceph service placement options Configuring SSH user options for Ceph nodes Configuring the container registry Prerequisites Before you can configure and deploy the Red Hat Ceph Storage cluster, use the Bare Metal Provisioning service (ironic) to provision the bare metal instances and networks. For more information, see Bare Metal Provisioning . 3.1. The openstack overcloud ceph deploy command If you deploy the Ceph cluster using director, you must use the openstack overcloud ceph deploy command. For a complete listing of command options and parameters, see openstack overcloud ceph deploy in the Command Line Interface Reference . The command openstack overcloud ceph deploy --help provides the current options and parameters available in your environment. 3.2. Ceph configuration file A standard format initialization file is one way to configure the Ceph cluster. Use one of the following commands to apply this file: * cephadm bootstrap --config <file_name> * openstack overcloud ceph deploy --config <file_name> Example The following example creates a simple initialization file called initial-ceph.conf and then uses the openstack overcloud ceph deploy command to configure the Ceph cluster with it. It demonstrates how to configure the messenger v2 protocol to use a secure mode that encrypts all data passing over the network. 3.3. Configuring time synchronization The Time Synchronization Service (chrony) is enabled for time synchronization by default. You can perform the following tasks to configure the service. Configuring time synchronization with a delimited list Configuring time synchronization with an environment file Disabling time synchronization Note Time synchronization is configured using either a delimited list or an environment file. Use the procedure that is best suited to your administrative practices. 3.3.1. Configuring time synchronization with a delimited list You can configure the Time Synchronization Service (chrony) to use a delimited list to configure NTP servers. Procedure Log in to the undercloud node as the stack user. Configure NTP servers with a delimited list: Replace <ntp_server_list> with a comma-delimited list of servers. 3.3.2. Configuring time synchronization with an environment file You can configure the Time Synchronization Service (chrony) to use an environment file that defines NTP servers. Procedure Log in to the undercloud node as the stack user. Create an environment file, such as /home/stack/templates/ntp-parameters.yaml , to contain the NTP server configuration. Add the NtpServer parameter. The NtpServer parameter contains a comma-delimited list of NTP servers. Configure NTP servers with an environment file: Replace <ntp_file_name> with the name of the environment file you created. 3.3.3. Disabling time synchronization The Time Synchronization Service (chrony) is enabled by default.
You can disable the service if you do not want to use it. Procedure Log in to the undercloud node as the stack user. Disable the Time Synchronization Service (chrony): 3.4. Configuring the Red Hat Ceph Storage cluster name You can deploy the Red Hat Ceph Storage cluster with a name that you configure. The default name is ceph . Procedure Log in to the undercloud node as the stack user. Configure the name of the Ceph Storage cluster by using the following command: openstack overcloud ceph deploy --cluster <cluster_name> For example: $ openstack overcloud ceph deploy --cluster central Note Keyring files are not created at this time. Keyring files are created during the overcloud deployment. Keyring files inherit the cluster name configured during this procedure. For more information about overcloud deployment, see Section 8.1, "Initiating overcloud deployment". In the example above, the Ceph cluster is named central . The configuration and keyring files for the central Ceph cluster would be created in /etc/ceph during the deployment process. Troubleshooting The following error may be displayed if you configure a custom name for the Ceph Storage cluster: monclient: get_monmap_and_config cannot identify monitors to contact because If this error is displayed, use the following command after Ceph deployment: cephadm shell --config <configuration_file> --keyring <keyring_file> For example, if this error was displayed when you configured the cluster name to central , you would use the following command: The following command could also be used as an alternative: 3.5. Configuring network options with the network data file The network data file describes the networks used by the Red Hat Ceph Storage cluster. Procedure Log in to the undercloud node as the stack user. Create a YAML format file that defines the custom network attributes called network_data.yaml . Important Using network isolation, the standard network deployment consists of two storage networks which map to the two Ceph networks: The storage network, storage , maps to the Ceph network, public_network . This network handles storage traffic such as the RBD traffic from the Compute nodes to the Ceph cluster. The storage network, storage_mgmt , maps to the Ceph network, cluster_network . This network handles storage management traffic such as data replication between Ceph OSDs. Use the openstack overcloud ceph deploy command with the --network-data option to deploy the configuration. Important The openstack overcloud ceph deploy command uses the network data file specified by the --network-data option to determine the networks to be used as the public_network and cluster_network . The command assumes these networks are named storage and storage_mgmt in the network data file unless a different name is specified by the --public-network-name and --cluster-network-name options. You must use the --network-data option when deploying with network isolation. The default undercloud (192.168.24.0/24) will be used for both the public_network and cluster_network if you do not use this option. 3.6. Configuring network options with a configuration file Network options can be specified with a configuration file as an alternative to the network data file. Important Using this method to configure network options overwrites automatically generated values in network_data.yaml . Ensure you set all four values when using this network configuration method. Procedure Log in to the undercloud node as the stack user.
Create a standard format initialization file to configure the Ceph cluster. If you have already created a file to include other configuration options, you can add the network configuration to it. Add the following parameters to the [global] section of the file: public_network cluster_network ms_bind_ipv4 Important Ensure the public_network and cluster_network map to the same networks as storage and storage_mgmt . The following is an example of a configuration file entry for a network configuration with multiple subnets and custom networking names: Use the command openstack overcloud ceph deploy with the --config option to deploy the configuration file. 3.7. Configuring a CRUSH hierarchy for an OSD You can configure a custom Controlled Replication Under Scalable Hashing (CRUSH) hierarchy during OSD deployment to add the OSD location attribute to the Ceph Storage cluster hosts specification. The location attribute configures where the OSD is placed within the CRUSH hierarchy. Note The location attribute sets only the initial CRUSH location. Subsequent changes of the attribute are ignored. Procedure Log in to the undercloud node as the stack user. Source the stackrc undercloud credentials file: $ source ~/stackrc Create a configuration file to define the custom CRUSH hierarchy, for example, crush_hierarchy.yaml . Add the following configuration to the file: Replace <osd_host> with the hostnames of the nodes where the OSDs are deployed, for example, ceph-0 . Replace <rack_num> with the number of the rack where the OSDs are deployed, for example, r0 . Deploy the Ceph cluster with your custom OSD layout: The Ceph cluster is created with the custom OSD layout. The example file above would result in the following OSD layout. Note Device classes are automatically detected by Ceph but CRUSH rules are associated with pools. Pools are still defined and created using the CephCrushRules parameter during the overcloud deployment. Additional resources See Red Hat Ceph Storage workload considerations in the Red Hat Ceph Storage Installation Guide for additional information. 3.8. Configuring Ceph service placement options You can define which nodes run which Ceph services by using a custom roles file. A custom roles file is necessary only when the default role assignments do not suit the environment. For example, when deploying hyperconverged nodes, the predeployed compute nodes should be labeled as osd with a service type of osd to have a placement list containing a list of compute instances. Service definitions in the roles_data.yaml file determine which bare metal instance runs which service. By default, the Controller role has the CephMon and CephMgr services while the CephStorage role has the CephOSD service. Unlike most composable services, Ceph services do not require heat output to determine how services are configured. The roles_data.yaml file always determines Ceph service placement even though the deployed Ceph process occurs before Heat runs. Procedure Log in to the undercloud node as the stack user. Create a YAML format file that defines the custom roles. Deploy the configuration file: 3.9. Configuring SSH user options for Ceph nodes The openstack overcloud ceph deploy command creates the user and keys and distributes them to the hosts so it is not necessary to perform the procedures in this section. However, it is a supported option. Cephadm connects to all managed remote Ceph nodes using SSH.
The Red Hat Ceph Storage cluster deployment process creates an account and SSH key pair on all overcloud Ceph nodes. The key pair is then given to Cephadm so it can communicate with the nodes. 3.9.1. Creating the SSH user before Red Hat Ceph Storage cluster creation You can create the SSH user before Ceph cluster creation with the openstack overcloud ceph user enable command. Procedure Log in to the undercloud node as the stack user. Create the SSH user: $ openstack overcloud ceph user enable Note The default user name is ceph-admin . To specify a different user name, use the --cephadm-ssh-user option: openstack overcloud ceph user enable --cephadm-ssh-user <custom_user_name> It is recommended to use the default name and not use the --cephadm-ssh-user parameter. If the user is created in advance, use the parameter --skip-user-create when executing openstack overcloud ceph deploy . 3.9.2. Disabling the SSH user Disabling the SSH user disables Cephadm. Disabling Cephadm removes the ability of the service to administer the Ceph cluster and prevents associated commands from working. It also prevents Ceph node overcloud scaling operations and removes all public and private SSH keys. Procedure Log in to the undercloud node as the stack user. Use the command openstack overcloud ceph user disable --fsid <FSID> ceph_spec.yaml to disable the SSH user. Note The FSID is located in the deployed_ceph.yaml environment file. Important The openstack overcloud ceph user disable command is not recommended unless it is necessary to disable Cephadm. Important To enable the SSH user and Cephadm service after being disabled, use the openstack overcloud ceph user enable --fsid <FSID> ceph_spec.yaml command. Note This command requires the path to a Ceph specification file to determine: Which hosts require the SSH user. Which hosts have the _admin label and require the private SSH key. Which hosts require the public SSH key. For more information about specification files and how to generate them, see Generating the service specification. 3.10. Accessing Ceph Storage containers Obtaining and modifying container images in the Transitioning to Containerized Services guide contains procedures and information on how to prepare the registry and your undercloud and overcloud configuration to use container images. Use the information in this section to adapt these procedures to access Ceph Storage containers. There are two options for accessing Ceph Storage containers from the overcloud. Downloading containers directly from a remote registry Caching containers on the undercloud 3.10.1. Downloading containers directly from a remote registry You can configure Ceph to download containers directly from a remote registry. Procedure Create a containers-prepare-parameter.yaml file using the procedure Preparing container images . Add the remote registry credentials to the containers-prepare-parameter.yaml file using the ContainerImageRegistryCredentials parameter as described in Obtaining container images from private registries . When you deploy Ceph, pass the containers-prepare-parameter.yaml file using the openstack overcloud ceph deploy command. Note If you do not cache the containers on the undercloud, as described in Caching containers on the undercloud , then you should pass the same containers-prepare-parameter.yaml file to the openstack overcloud ceph deploy command when you deploy Ceph. This will cache containers on the undercloud.
Result The credentials in the containers-prepare-parameter.yaml are used by the cephadm command to authenticate to the remote registry and download the Ceph Storage container. 3.10.2. Caching containers on the undercloud The procedure Modifying images during preparation describes using the following command: If you do not use the --container-image-prepare option to provide authentication credentials to the openstack overcloud ceph deploy command and directly download the Ceph containers from a remote registry, as described in Downloading containers directly from a remote registry , you must run the sudo openstack tripleo container image prepare command before deploying Ceph. | [
"cat <<EOF > initial-ceph.conf [global] ms_cluster_mode: secure ms_service_mode: secure ms_client_mode: secure EOF openstack overcloud ceph deploy --config initial-ceph.conf",
"openstack overcloud ceph deploy --ntp-server \"<ntp_server_list>\"",
"openstack overcloud ceph deploy --ntp-server \"0.pool.ntp.org,1.pool.ntp.org\"",
"parameter_defaults: NtpServer: 0.pool.ntp.org,1.pool.ntp.org",
"openstack overcloud ceph deploy --ntp-heat-env-file \"<ntp_file_name>\"",
"openstack overcloud ceph deploy --ntp-heat-env-file \"/home/stack/templates/ntp-parameters.yaml\"",
"openstack overcloud ceph deploy --skip-ntp",
"ls -l /etc/ceph/ total 16 -rw-------. 1 root root 63 Mar 26 21:49 central.client.admin.keyring -rw-------. 1 167 167 201 Mar 26 22:17 central.client.openstack.keyring -rw-------. 1 167 167 134 Mar 26 22:17 central.client.radosgw.keyring -rw-r--r--. 1 root root 177 Mar 26 21:49 central.conf",
"cephadm shell --config /etc/ceph/central.conf --keyring /etc/ceph/central.client.admin.keyring",
"cephadm shell --mount /etc/ceph:/etc/ceph export CEPH_ARGS='--cluster central'",
"openstack overcloud ceph deploy deployed_metal.yaml -o deployed_ceph.yaml --network-data network_data.yaml",
"[global] public_network = 172.16.14.0/24,172.16.15.0/24 cluster_network = 172.16.12.0/24,172.16.13.0/24 ms_bind_ipv4 = True ms_bind_ipv6 = False",
"openstack overcloud ceph deploy --config initial-ceph.conf --network-data network_data.yaml",
"ceph_crush_hierarchy: <osd_host>: root: default rack: <rack_num> <osd_host>: root: default rack: <rack_num> <osd_host>: root: default rack: <rack_num>",
"openstack overcloud ceph deploy deployed_metal.yaml -o deployed_ceph.yaml --osd-spec osd_spec.yaml --crush-hierarchy crush_hierarchy.yaml",
"ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.02939 root default -3 0.00980 rack r0 -2 0.00980 host ceph-node-00 0 hdd 0.00980 osd.0 up 1.00000 1.00000 -5 0.00980 rack r1 -4 0.00980 host ceph-node-01 1 hdd 0.00980 osd.1 up 1.00000 1.00000 -7 0.00980 rack r2 -6 0.00980 host ceph-node-02 2 hdd 0.00980 osd.2 up 1.00000 1.00000",
"openstack overcloud ceph deploy deployed_metal.yaml -o deployed_ceph.yaml --roles-data custom_roles.yaml",
"openstack overcloud ceph deploy --container-image-prepare containers-prepare-parameter.yaml",
"sudo openstack tripleo container image prepare -e ~/containers-prepare-parameter.yaml \\"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/assembly_deployed_ceph_storage_cluster_deployingcontainerizedrhcs |
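Section 3.8 above refers to a custom roles file, but the command listing does not include one. The following is a minimal sketch of what a single role entry might look like, assuming the standard TripleO service naming; real roles files are normally generated with the openstack overcloud roles generate command and contain many more services, so treat the names below as illustrative placeholders.

    - name: ComputeHCI
      description: Hyperconverged Compute node that also runs Ceph OSDs
      ServicesDefault:
        - OS::TripleO::Services::NovaCompute
        - OS::TripleO::Services::CephOSD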
Red Hat Ansible Automation Platform release notes | Red Hat Ansible Automation Platform release notes Red Hat Ansible Automation Platform 2.4 New features, enhancements, and bug fix information Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html-single/red_hat_ansible_automation_platform_release_notes/index |
Chapter 6. Administer Micrometer in JBoss EAP | Chapter 6. Administer Micrometer in JBoss EAP 6.1. Adding Micrometer subsystem using the Management CLI The Micrometer subsystem enhances monitoring capabilities in JBoss EAP by facilitating comprehensive metrics gathering and publication. The org.wildfly.extension.micrometer extension is available to all standalone configurations within the JBoss EAP distribution, but the subsystem must be added manually. Prerequisites JBoss EAP 8.0 with JBoss EAP XP 5.0 is installed. You have access to the JBoss EAP management CLI and permissions to make configuration changes. Procedure Open your terminal. Connect to the server by running the following command: Check if the Micrometer extension is already added to the configuration by running the following command: If the Micrometer extension is not available, add it by running the following command: Add the Micrometer subsystem with the required configuration. For example, specify the endpoint URL of the metrics collector by running the following command: Reload the server to apply the changes: Note When the collector is not running or its endpoint is unavailable, a warning message similar to the following is logged: By following these steps, you can add the Micrometer subsystem to your JBoss EAP server using the management CLI, enabling enhanced monitoring capabilities for your applications. Additional resources Develop Micrometer application for JBoss EAP
"./jboss-cli.sh --connect",
"[standalone@localhost:9990 /] /extension=org.wildfly.extension.micrometer:read-resource",
"[standalone@localhost:9990 /] /extension=org.wildfly.extension.micrometer:add",
"[standalone@localhost:9990 /] /subsystem=micrometer:add(endpoint=\"http://localhost:4318/v1/metrics\")",
"[standalone@localhost:9990 /] reload",
"11:28:16,581 WARNING [io.micrometer.registry.otlp.OtlpMeterRegistry] (MSC service thread 1-5) Failed to publish metrics to OTLP receiver: java.net.ConnectException: Connection refused"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_xp_5.0/administer_micrometer_in_jboss_eap |
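To confirm the result of the procedure above, the subsystem can be read back with a generic management CLI call. This is a sketch; the exact attributes returned depend on the JBoss EAP XP version, but the output should include the endpoint value that was configured.

    [standalone@localhost:9990 /] /subsystem=micrometer:read-resource(include-runtime=true)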
Appendix B. Revision History | Appendix B. Revision History Revision History Revision 1-43 Fri Feb 7 2020 Jan Fiala Async release with an update of the Compliance and Vulnerability Scanning chapter. Revision 1-42 Fri Aug 9 2019 Mirek Jahoda Version for 7.7 GA publication. Revision 1-41 Sat Oct 20 2018 Mirek Jahoda Version for 7.6 GA publication. Revision 1-32 Wed Apr 4 2018 Mirek Jahoda Version for 7.5 GA publication. Revision 1-30 Thu Jul 27 2017 Mirek Jahoda Version for 7.4 GA publication. Revision 1-24 Mon Feb 6 2017 Mirek Jahoda Async release with misc. updates, especially in the firewalld section. Revision 1-23 Tue Nov 1 2016 Mirek Jahoda Version for 7.3 GA publication. Revision 1-19 Mon Jul 18 2016 Mirek Jahoda The Smart Cards section added. Revision 1-18 Mon Jun 27 2016 Mirek Jahoda The OpenSCAP-daemon and Atomic Scan section added. Revision 1-17 Fri Jun 3 2016 Mirek Jahoda Async release with misc. updates. Revision 1-16 Tue Jan 5 2016 Robert Kratky Post 7.2 GA fixes. Revision 1-15 Tue Nov 10 2015 Robert Kratky Version for 7.2 GA release. Revision 1-14.18 Mon Nov 09 2015 Robert Kratky Async release with misc. updates. Revision 1-14.17 Wed Feb 18 2015 Robert Kratky Version for 7.1 GA release. Revision 1-14.15 Fri Dec 06 2014 Robert Kratky Update to sort order on the Red Hat Customer Portal. Revision 1-14.13 Thu Nov 27 2014 Robert Kratky Updates reflecting the POODLE vuln. Revision 1-14.12 Tue Jun 03 2014 Tomas Capek Version for 7.0 GA release. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/app-security_guide-revision_history |
Chapter 3. New features | Chapter 3. New features This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage. The main features added by this release are: Containerized Cluster Red Hat Ceph Storage 5 supports only containerized daemons. It does not support non-containerized storage clusters. If you are upgrading a non-containerized storage cluster from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, the upgrade process includes the conversion to a containerized deployment. For more information, see the Upgrading a Red Hat Ceph Storage cluster from RHCS 4 to RHCS 5 section in the Red Hat Ceph Storage Installation Guide for more details. Cephadm Cephadm is a new containerized deployment tool that deploys and manages a Red Hat Ceph Storage 5.0 cluster by connecting to hosts from the manager daemon. The cephadm utility replaces ceph-ansible for Red Hat Ceph Storage deployment. The goal of Cephadm is to provide a fully-featured, robust, and well installed management layer for running Red Hat Ceph Storage. The cephadm command manages the full lifecycle of a Red Hat Ceph Storage cluster. Starting with Red Hat Ceph Storage 5.0, ceph-ansible is no longer supported and is incompatible with the product. Once you have migrated to Red Hat Ceph Storage 5.0, you must use cephadm and cephadm-ansible to perform updates. The cephadm command can perform the following operations: Bootstrap a new Ceph storage cluster. Launch a containerized shell that works with the Ceph command-line interface (CLI). Aid in debugging containerized daemons. The cephadm command uses ssh to communicate with the nodes in the storage cluster and add, remove, or update Ceph daemon containers. This allows you to add, remove, or update Red Hat Ceph Storage containers without using external tools. The cephadm command has two main components: The cephadm shell launches a bash shell within a container. This enables you to run storage cluster installation and setup tasks, as well as to run ceph commands in the container. The cephadm orchestrator commands enable you to provision Ceph daemons and services, and to expand the storage cluster. For more information, see the Red Hat Ceph Storage Installation Guide . Management API The management API creates management scripts that are applicable for Red Hat Ceph Storage 5.0 and continues to operate unchanged for the version lifecycle. The incompatible versioning of the API would only happen across major release lines. For more information, see the Red Hat Ceph Storage Developer Guide . Disconnected installation of Red Hat Ceph Storage Red Hat Ceph Storage 5.0 supports the disconnected installation and bootstrapping of storage clusters on private networks. A disconnected installation uses custom images and configuration files and local hosts, instead of downloading files from the network. You can install container images that you have downloaded from a proxy host that has access to the Red Hat registry, or by copying a container image to your local registry. The bootstrapping process requires a specification file that identifies the hosts to be added by name and IP address. Once the initial monitor host has been bootstrapped, you can use Ceph Orchestrator commands to expand and configure the storage cluster. See the Red Hat Ceph Storage Installation Guide for more details. 
Ceph File System geo-replication Starting with the Red Hat Ceph Storage 5 release, you can replicate Ceph File Systems (CephFS) across geographical locations or between different sites. The new cephfs-mirror daemon does asynchronous replication of snapshots to a remote CephFS. See the Ceph File System mirrors section in the Red Hat Ceph Storage File System Guide for more details. A new Ceph File System client performance tool Starting with the Red Hat Ceph Storage 5 release, the Ceph File System (CephFS) provides a top -like utility to display metrics on Ceph File Systems in realtime. The cephfs-top utility is a curses -based Python script that uses the Ceph Manager stats module to fetch and display client performance metrics. See the Using the cephfs-top utility section in the Red Hat Ceph Storage File System Guide for more details. Monitoring the Ceph object gateway multisite using the Red Hat Ceph Storage Dashboard The Red Hat Ceph Storage dashboard can now be used to monitor an Ceph object gateway multisite configuration. After the multi-zones are set-up using the cephadm utility, the buckets of one zone is visible to other zones and other sites. You can also create, edit, delete buckets on the dashboard. See the Management of buckets of a multisite object configuration on the Ceph dashboard chapter in the Red Hat Ceph Storage Dashboard Guide for more details. Improved BlueStore space utilization The Ceph Object Gateway and the Ceph file system (CephFS) stores small objects and files as individual objects in RADOS. With this release, the default value of BlueStore's min_alloc_size for SSDs and HDDs is 4 KB. This enables better use of space with no impact on performance. See the OSD BlueStore chapter in the Red Hat Ceph Storage Administration Guide for more details. 3.1. The Cephadm utility Red Hat Ceph Storage can now automatically tune the Ceph OSD memory target With this release, osd_memory_target_autotune option is fixed, and works as expected. Users can enable Red Hat Ceph Storage to automatically tune the Ceph OSD memory target for the Ceph OSDs in the storage cluster for improved performance without explicitly setting the memory target for the Ceph OSDs. Red Hat Ceph Storage sets the Ceph OSD memory target on a per-node basis by evaluating the total memory available, and the daemons running on the node. Users can enable the memory auto-tuning feature for the Ceph OSD by running the following command: 3.2. Ceph Dashboard A new Grafana Dashboard to display graphs for Ceph Object Gateway multi-site setup With this release, a new Grafana dashboard is now available and displays graphs for Ceph Object Gateway multisite sync performance including two-way replication throughput, polling latency, and unsuccessful replications. See the Monitoring Ceph object gateway daemons on the dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. The Prometheus Alertmanager rule triggers an alert for different MTU settings on the Red Hat Ceph Storage Dashboard Previously, mismatch in MTU settings, which is a well-known cause of networking issues, had to be identified and managed using the command-line interface. With this release, when a node or a minority of them have an MTU setting that differs from the majority of nodes, an alert is triggered on the Red Hat Ceph Storage Dashboard. The user can either mute the alert or fix the MTU mismatched settings. 
See the Management of Alerts on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. User and role management on the Red Hat Ceph Storage Dashboard With this release, user and role management is now available. It allows administrators to define fine-grained role-based access control (RBAC) policies for users to create, update, list, and remove OSDs in a Ceph cluster. See the Management of roles on the Ceph dashboard in the Red Hat Ceph Storage Dashboard Guide for more information. The Red Hat Ceph Storage Dashboard now supports RBD v1 images Previously, the Red Hat Ceph Storage Dashboard displayed and supported RBD v2 format images only. With this release, users can now manage and migrate their v1 RBD images to v2 RBD images by setting the RBD_FORCE_ALLOW_V1 to 1 . See the Management of block devices using the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. Users can replace the failed OSD on the Red Hat Ceph Storage Dashboard With this release, users can identify and replace the failed OSD by preserving the OSD_ID of the OSDs on the Red Hat Ceph Storage Dashboard. See Replacing the failed OSDs on the Ceph dashboard in the Red Hat Ceph Storage Dashboard Guide for more information. Specify placement target when creating a Ceph Object Gateway bucket on the Red Hat Ceph Storage Dashboard With this release, users can now specify a placement target when creating a Ceph Object Gateway bucket on the Red Hat Ceph Storage Dashboard. See the Creating Ceph object gateway buckets on the dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. The Multi-Factor Authentication deletes feature is enabled on the Red Hat Ceph Storage Dashboard With this release, users can now enable Multi-Factor Authentication deletes (MFA) for a specific bucket from the Ceph cluster on the Red Hat Ceph Storage Dashboard. See the Editing Ceph object gateway buckets on the dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. The bucket versioning feature for a specific bucket is enabled on the Red Hat Ceph Storage Dashboard With this release, users can now enable bucket versioning for a specific bucket on the Red Hat Ceph Storage Dashboard. See the Editing Ceph object gateway buckets on the dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. The object locking feature for Ceph Object Gateway buckets is enabled on the Red Hat Ceph Storage Dashboard With this release, users can now enable object locking for Ceph Object Gateway buckets on the Red Hat Ceph Storage Dashboard. See the Creating Ceph object gateway buckets on the dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. The Red Hat Ceph Storage Dashboard has the vertical navigation bar With this release, the vertical navigation bar is now available. The heartbeat icon on the Red Hat Ceph Storage Dashboard menu changes color based on the cluster status that is green, yellow, and red. Other menus for example Cluster>Monitoring and Block>Mirroring display a colored numbered icon that shows the number of warnings in that specific component. The "box" page of the Red Hat Ceph Storage dashboard displays detailed information With this release, the "box" page of Red Hat Ceph Storage Dashboard displays information about the Ceph version, the hostname where the ceph-mgr is running, username,roles, and the browser details. 
Browser favicon displays the Red Hat logo with an icon for a change in the cluster health status With this release, the browser favicon now displays the Red Hat logo with an icon that changes color based on cluster health status that is green, yellow, or red. The error page of the Red Hat Ceph Storage Dashboard works as expected With this release, the error page of the Red Hat Ceph Storage Dashboard is fixed and works as expected. Users can view Cephadm workflows on the Red Hat Ceph Storage Dashboard With this release, the Red Hat Ceph Storage displays more information on inventory such as nodes defined in the Ceph Orchestrator and services such as information on containers. The Red Hat Ceph Storage dashboard also allows the users to manage the hosts on the Ceph cluster. See the Monitoring hosts of the Ceph cluster on the dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. Users can modify the object count and size quota on the Red Hat Ceph Storage Dashboard With this release, the users can now set and modify the object count and size quota for a given pool on the Red Hat Ceph Storage Dashboard. See the Creating pools on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information. Users can manage Ceph File system snapshots on the Red Hat Ceph Storage Dashboard With this release, the users can now create and delete Ceph File System (CephFS) snapshots, and set and modify per-directory quotas on the Red Hat Ceph Storage Dashboard. Enhanced account and password policies for the Red Hat Ceph Storage Dashboard With this release, to comply with the best security standards, strict password and account policies are implemented. The user passwords need to comply with some configurable rules. User accounts can also be set to expire after a given amount of time, or be locked out after a number of unsuccessful log-in attempts. Users can manage users and buckets on any realm, zonegroup or zone With this release, users can now manage users and buckets not only on the default zone but any realm, zone group, or zone that they configure. To manage multiple daemons on the Red Hat Ceph Storage Dashboard, see the Management of buckets of a multi-site object gateway configuration on the Ceph dashboard in the Red Hat Ceph Storage Dashboard Guide . Users can create a tenanted S3 user intuitively on the Red Hat Ceph Storage Dashboard Previously, a tenanted S3 user could be created using a user friendly syntax that is "tenantUSDuser" instead of the intuitive separate input fields for each one. With this release, users can now create a tenanted S3 user intuitively without using "tenantUSDuser" on the Red Hat Ceph Storage Dashboard. The Red Hat Ceph Storage Dashboard now supports host management Previously, the command-line interface was used to manage hosts in a Red Hat Ceph Storage cluster. With this release, users can enable or disable the hosts by using the maintenance mode feature on the Red Hat Ceph Storage Dashboard. Nested tables can be expanded or collapsed on the Red Hat Ceph Storage Dashboard With this release, rows that contain nested tables can be expanded or collapsed by clicking on the row on the Red Hat Ceph Storage Dashboard. 3.3. Ceph File System CephFS clients can now reconnect after being blocklisted by Metadata Servers (MDS) Previously, Ceph File System (CephFS) clients were blocklisted by MDS because of network partitions or other transient errors. 
With this release, the CephFS client can reconnect to the mount with the appropriate configurations turned ON for each client as manual remount is not needed. Users can now use the ephemeral pinning policies for automated distribution of subtrees among MDS With this release, the export pins are improved by introducing efficient strategies to pin subtrees, thereby enabling automated distribution of subtrees among Metadata Servers (MDS) and eliminating user intervention for manual pinning. See the Ephemeral pinning policies section in the Red Hat Ceph Storage File System Guide for more information. mount.ceph has an additional option of recover_session=clean With this release, an additional option of recover_session=clean is added to mount.ceph . With this option, the client reconnects to the Red Hat Ceph Storage cluster automatically when it detects that it is blocklisted by Metadata servers (MDS) and the mounts are recovered automatically. See the Removing a Ceph File System client from the blocklist section in the Red Hat Ceph Storage File System Guide for more information. Asynchronous creation and removal of metadata operations in the Ceph File System With this release, Red Hat Enterprise Linux 8.4 kernel mounts now asynchronously execute file creation and removal on Red Hat Ceph Storage clusters. This improves performance of some workloads by avoiding round-trip latency for these system calls without impacting consistency. Use the new -o nowsync mount option to enable asynchronous file creation and deletion. Ceph File System (CephFS) now provides a configuration option for MDS called mds_join_fs With this release, when failing over metadata server (MDS) daemons, the cluster's monitors prefer standby daemons with mds_join_fs equal to the file system name with the failed rank . If no standby exists with mds_join_fs equal to the file system name , it chooses an unqualified standby for the replacement, or any other available standby, as a last resort. See the File system affinity section in the Red Hat Ceph Storage File System Guide for more information. Asynchronous replication of snapshots between Ceph Filesystems With this release, the mirroring module, that is the manager plugin, provides interfaces for managing directory snapshot mirroring. The mirroring module is responsible for assigning directories to the mirror daemons for the synchronization. Currently, a single mirror daemon is supported and can be deployed using cephadm . Ceph File System (CephFS) supports asynchronous replication of snapshots to a remote CephFS through the cephfs-mirror tool. A mirror daemon can handle snapshot synchronization for multiple file systems in a Red Hat Ceph Storage cluster. Snapshots are synchronized by mirroring snapshot data followed by creating a snapshot with the same name for a given directory on the remote file system, as the snapshot being synchronized. See the Ceph File System mirrors section in the Red Hat Ceph Storage File System Guide for more information. The cephfs-top tool is supported With this release, the cephfs-top tool is introduced. Ceph provides a top(1) like utility to display the various Ceph File System(CephFS) metrics in realtime. The cephfs-top is a curses based python script that uses the stats plugin in the Ceph Manager to fetch and display the metrics. CephFS clients periodically forward various metrics to the Ceph Metadata Servers (MDSs), which then forward these metrics to MDS rank zero for aggregation. 
These aggregated metrics are then forwarded to the Ceph Manager for consumption. Metrics are divided into two categories; global and per-mds. Global metrics represent a set of metrics for the file system as a whole for example client read latency, whereas per-mds metrics are for a specific MDS rank for example the number of subtrees handled by an MDS. Currently, global metrics are tracked and displayed. The cephfs-top command does not work reliably with multiple Ceph File Systems. See the Using the cephfs-top utility section in the Red Hat Ceph Storage File System Guide for more information. MDS daemons can be deployed with mds_autoscaler plugin With this release, a new ceph-mgr plugin, mds_autoscaler is available which deploys metadata server (MDS) daemons in response to the Ceph File System (CephFS) requirements. Once enabled, mds_autoscaler automatically deploys the required standbys and actives according to the setting of max_mds . For more information, see the Using the MDS autoscaler module section in Red Hat Ceph Storage File System Guide . Ceph File System (CephFS) scrub now works with multiple active MDS Previously, users had to set the parameter max_mds=1 and wait for only one active metadata server (MDS) to run Ceph File System (CephFS) scrub operations. With this release, irrespective of the value of mds_max , users can execute scrub on rank 0 with multiple active MDS. See the Configuring multiple active Metadata Server daemons section in the Red Hat Ceph Storage File System Guide for more information. Ceph File System snapshots can now be scheduled with snap_schedule plugin With this release, a new ceph-mgr plugin, snap_schedule is now available for scheduling snapshots of the Ceph File System (CephFS). The snapshots can be created, retained, and automatically garbage collected. 3.4. Containers The cephfs-mirror package is included in the ceph-container ubi8 image With this release, the cephfs-mirror package is now included in the ceph-container ubi8 image to support the mirroring Ceph File System (CephFS) snapshots to a remote CephFS. The command to configure CephFS-mirror is now available. See the Ceph File System mirrors section in the Red Hat Ceph Storage File System Guide for more information. 3.5. Ceph Object Gateway Bucket name or ID is supported in the radosgw-admin bucket stats command. With this release, the bucket name or ID can be used as an argument in the radosgw-admin bucket stats command. Bucket stats reports the non-current bucket instances which can be used in debugging a class of large OMAP object warnings that is the Ceph OSD log. Six new performance counters added to the Ceph Object Gateway's perfcounters With this release, six performance counters are now available in the Ceph Object Gateway. These counters report on the object expiration and lifecycle transition activity through the foreground and background processing of the Ceph Object Gateway lifecycle system. The lc_abort_mpu , lc_expire_current , lc_expire_noncurrent and lc_expire_dm counters permit the estimation of object expiration. The lc_transition_current and lc_transition_noncurrent counters provide information for lifecycle transitions. Users can now use object lock to implement WORM-like functionality in S3 object storage The S3 Object lock is the key mechanism supporting write-once-read-many (WORM) functionality in S3 Object storage. 
With this release, Red Hat Ceph Storage 5 supports Amazon Web Services (AWS) S3 Object lock data management API and the users can use Object lock concepts like retention period, legal hold, and bucket configuration to implement WORM-like functionality as part of the custom workflow overriding data deletion permissions. 3.6. RADOS The Red Hat Ceph Storage recovers with fewer OSDs available in an erasure coded (EC) pool Previously, erasure coded (EC) pools of size k+m required at least k+1 copies for recovery to function. If only k copies were available, recovery would be incomplete. With this release, Red Hat Ceph Storage cluster now recovers with k or more copies available in an EC pool. For more information on erasure coded pools, see the Erasure coded pools chapter in the Red Hat Ceph Storage Storage Strategies Guide . Sharding of RocksDB database using column families is supported With the BlueStore admin tool, the goal is to achieve less read and write amplification, decrease DB (Database) expansion during compaction, and also improve IOPS performance. With this release, you can reshard the database with the BlueStore admin tool. The data in RocksDB (DB) database is split into multiple Column Families (CF). Each CF has its own options and the split is performed according to type of data such as omap, object data, delayed cached writes, and PGlog. For more information on resharding, see the Resharding the RocksDB database using the BlueStore admin tool section in the Red Hat Ceph Storage Administration Guide . The mon_allow_pool_size_one configuration option can be enabled for Ceph monitors With this release, users can now enable the configuration option mon_allow_pool_size_one . Once enabled, users have to pass the flag --yes-i-really-mean-it for osd pool set size 1 , if they want to configure the pool size to 1 . The osd_client_message_cap option has been added back Previously, the osd_client_message_cap option was removed. With this release, the osd_client_message_cap option has been re-introduced. This option helps control the maximum number of in-flight client requests by throttling those requests. Doing this can be helpful when a Ceph OSD flaps due to an overwhelming amount of client-based traffic. Ceph messenger protocol is now updated to msgr v2.1. With this release, a new version of Ceph messenger protocol, msgr v2.1, is implemented, which addresses several security, integrity and potential performance issues with the version, msgr v2.0. All Ceph entities, both daemons and clients, now default to msgr v2.1. The new default osd_client_message_cap value is 256 Previously, the osd_client_message_cap had a default value of 0 . The default value of 0 disables the flow control feature for the Ceph OSD and does not prevent Ceph OSDs from flapping during periods of heavy client traffic. With this release, the default value of 256 for osd_client_message_cap provides better flow control by limiting the maximum number of inflight client requests. The set_new_tiebreaker command has been added With this release, storage administrators can set a new tiebreak Ceph Monitor when running in a storage cluster in stretch mode. This command can be helpful if the tiebreaker fails and cannot be recovered. 3.7. RADOS Block Devices (RBD) Improved librbd small I/O performance Previously, in an NVMe based Ceph cluster, there were limitations in the internal threading architecture resulting in a single librbd client struggling to achieve more than 20K 4KiB IOPS. 
With this release, librbd is switched to an asynchronous reactor model on top of the new ASIO-based neorados API thereby increasing the small I/O throughput potentially by several folds and reducing latency. Built in schedule for purging expired RBD images Previously, the storage administrator could set up a cron-like job for the rbd trash purge command. With this release, the built-in schedule is now available for purging expired RBD images. The rbd trash purge schedule add and the related commands can be used to configure the RBD trash to automatically purge expired images based on a defined schedule. See the Defining an automatic trash purge schedule section in the Red Hat Ceph Storage Block Device Guide for more information. Servicing reads of immutable objects with the new ceph-immutable-object-cache daemon With this release, the new ceph-immutable-object-cache daemon can be deployed on a hypervisor node to service the reads of immutable objects, for example a parent image snapshot. The new parent_cache librbd plugin coordinates with the daemon on every read from the parent image, adding the result to the cache wherever necessary. This reduces latency in scenarios where multiple virtual machines are concurrently sharing a golden image. For more information, see the Management of `ceph-immutable-object-cache`daemons chapter in the Red Hat Ceph Storage Block device guide . Support for sending compressible or incompressible hints in librbd-based clients Previously, there was no way to hint to the underlying OSD object store backend whether data is compressible or incompressible. With this release, the rbd_compression_hint configuration option can be used to hint whether data is compressible or incompressible, to the underlying OSD object store backend. This can be done per-image, per-pool or globally. See the Block device input and output options section in the Red Hat Ceph Storage Block Device Guide for more information. Overriding read-from-replica policy in librbd clients is supported Previously there was no way to limit the inter-DC/AZ network traffic, as when a cluster is stretched across data centers, the primary OSD may be on a higher latency and cost link in comparison with other OSDs in the PG. With this release, the rbd_read_from_replica_policy configuration option is now available and can be used to send reads to a random OSD or to the closest OSD in the PG, as defined by the CRUSH map and the client location in the CRUSH hierarchy. This can be done per-image, per-pool or globally. See the Block device input and output options section in the Red Hat Ceph Storage Block Device Guide for more information. Online re-sparsification of RBD images Previously, reclaiming space for image extents that are zeroed and yet fully allocated in the underlying OSD object store was highly cumbersome and error prone. With this release, the new rbd sparsify command can now be used to scan the image for chunks of zero data and deallocate the corresponding ranges in the underlying OSD object store. ocf:ceph:rbd cluster resource agent supports namespaces Previously, it was not possible to use ocf:ceph:rbd cluster resource agent for images that exist within a namespace. With this release, the new pool_namespace resource agent parameter can be used to handle images within the namespace. RBD images can be imported instantaneously With the rbd import command, the new image becomes available for use only after it is fully populated. 
With this release, the image live-migration feature is extended to support external data sources and can be used as an alternative to rbd import . The new image can be linked to local files, remote files served over HTTP(S) or remote Amazon S3-compatible buckets in raw , qcow or qcow2 formats and becomes available for use immediately. The image is populated as a background operation which can be run while it is in active use. LUKS encryption inside librbd is supported Layering QEMU LUKS encryption or dm-crypt kernel module on top of librbd suffers a major limitation that a copy-on-write clone image must use the same encryption key as its parent image. With this release, support for LUKS encryption has been incorporated within librbd. The new "rbd encryption format" command can now be used to format an image to a luks1 or luks2 encrypted format. 3.8. RBD Mirroring Snapshot-based mirroring of RBD images The journal-based mirroring provides fine-grained crash-consistent replication at the cost of double-write penalty where every update to the image is first recorded to the associated journal before modifying the actual image. With this release, in addition to journal-based mirroring, snapshot-based mirroring is supported. It provides coarse-grained crash-consistent replication where the image is mirrored using the mirror snapshots which can be created manually or periodically with a defined schedule. This is supported by all clients and requires a less stringent recovery point objective (RPO). 3.9. iSCSI Gateway Improved tcmu-runner section in the ceph status output Previously, each iSCSI LUN was listed individually resulting in cluttering the ceph status output. With this release, the ceph status command summarizes the report and shows only the number of active portals and the number of hosts. 3.10. The Ceph Ansible utility The cephadm-adopt.yml playbook is idempotent With this release, the cephadm-adopt.yml playbook is idempotent, that is the playbook can be run multiple times. If the playbook fails for any reason in the first attempt, you can rerun the playbook and it works as expected. For more information, see the Upgrading from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5 using `ceph-ansible` section in the Red Hat Ceph Storage Installation Guide . The pg_autoscaler and balancer modules are now disabled during upgrades Previously Red Hat Ceph Storage did not support disabling the pg_autoscaler and balancer modules during the upgrade process. This can result in the placement group check failing during the upgrade process, because the pg_autoscaler continues adjusting the placement group numbers. With this release, ceph-ansible disables the pg_autoscaler and balancer modules before upgrading a Ceph OSD node, and then re-enables them after the upgrade completes. Improvement to the Ceph Ansible rolling_update.yml playbook Previously, the Ceph Ansible rolling_update.yml playbook checked the Ceph version requirement of a container image later during the upgrade process. This resulted in the playbook failing in the middle of the upgrade process. With this release, the rolling_update.yml playbook will fail early, if the container image does not meet the Ceph version requirement. | [
"ceph config set osd osd_memory_target_autotune true"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/release_notes/enhancements |
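A few of the features listed above are driven by single commands. The following sketch shows what using them might look like; the pool and image names are placeholders, and the interval syntax for the trash purge schedule should be checked against the rbd help output for your release.

    # Allow a pool with a single replica (see the RADOS notes above)
    ceph config set mon mon_allow_pool_size_one true
    ceph osd pool set mypool size 1 --yes-i-really-mean-it

    # Schedule automatic purging of expired images in the RBD trash
    rbd trash purge schedule add --pool mypool 1d

    # Reclaim zeroed-but-allocated extents of an image
    rbd sparsify mypool/myimage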
Chapter 4. Specifics of Individual Software Collections | Chapter 4. Specifics of Individual Software Collections This chapter is focused on the specifics of certain Software Collections and provides additional details concerning these components. 4.1. Red Hat Developer Toolset Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. Red Hat Developer Toolset provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. Similarly to other Software Collections, an additional set of tools is installed into the /opt/ directory. These tools are enabled by the user on demand using the supplied scl utility. Similarly to other Software Collections, these do not replace the Red Hat Enterprise Linux system versions of these tools, nor will they be used in preference to those system versions unless explicitly invoked using the scl utility. For an overview of features, refer to the Features section of the Red Hat Developer Toolset Release Notes . For detailed information regarding usage and changes in 10.0, see the Red Hat Developer Toolset User Guide . 4.2. MongoDB 3.6 The rh-mongodb36 Software Collection is available only for Red Hat Enterprise Linux 7. To install the rh-mongodb36 collection, type the following command as root : yum install rh-mongodb36 To run the MongoDB shell utility, type the following command: scl enable rh-mongodb36 'mongo' Note The rh-mongodb36-mongo-cxx-driver package has been built with the -std=gnu++14 option using GCC from Red Hat Developer Toolset 6. Binaries using the shared library for the MongoDB C++ Driver that use C++11 (or later) features have to be built also with Red Hat Developer Toolset 6 or later. See C++ compatibility details in the Red Hat Developer Toolset 6 User Guide . To start the MongoDB daemon, type the following command as root : systemctl start rh-mongodb36-mongod.service To start the MongoDB daemon on boot, type this command as root : systemctl enable rh-mongodb36-mongod.service To start the MongoDB sharding server, type the following command as root : systemctl start rh-mongodb36-mongos.service To start the MongoDB sharding server on boot, type this command as root : systemctl enable rh-mongodb36-mongos.service Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. 4.3. Maven The rh-maven36 Software Collection, available only for Red Hat Enterprise Linux 7, provides a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting, and documentation from a central piece of information. To install the rh-maven36 Collection, type the following command as root : yum install rh-maven36 To enable this collection, type the following command at a shell prompt: scl enable rh-maven36 bash Global Maven settings, such as remote repositories or mirrors, can be customized by editing the /opt/rh/rh-maven36/root/etc/maven/settings.xml file. For more information about using Maven, refer to the Maven documentation . Usage of plug-ins is described in this section ; to find documentation regarding individual plug-ins, see the index of plug-ins . 4.4. Database Connectors Database connector packages provide the database client functionality, which is necessary for local or remote connection to a database server. 
Table 4.1, "Interoperability Between Languages and Databases" lists Software Collections with language runtimes that include connectors for certain database servers: yes - the combination is supported no - the combination is not supported Table 4.1. Interoperability Between Languages and Databases Database Language (Software Collection) MariaDB MongoDB MySQL PostgreSQL Redis SQLite3 rh-nodejs4 no no no no no no rh-nodejs6 no no no no no no rh-nodejs8 no no no no no no rh-nodejs10 no no no no no no rh-nodejs12 no no no no no no rh-nodejs14 no no no no no no rh-perl520 yes no yes yes no no rh-perl524 yes no yes yes no no rh-perl526 yes no yes yes no no rh-perl530 yes no yes yes no yes rh-php56 yes yes yes yes no yes rh-php70 yes no yes yes no yes rh-php71 yes no yes yes no yes rh-php72 yes no yes yes no yes rh-php73 yes no yes yes no yes python27 yes yes yes yes no yes rh-python34 no yes no yes no yes rh-python35 yes yes yes yes no yes rh-python36 yes yes yes yes no yes rh-python38 yes no yes yes no yes rh-ror41 yes yes yes yes no yes rh-ror42 yes yes yes yes no yes rh-ror50 yes yes yes yes no yes rh-ruby25 yes yes yes yes no no rh-ruby26 yes yes yes yes no no rh-ruby27 yes yes yes yes no no | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.6_release_notes/chap-individual_collections |
Preface | Preface The dynamic plugin support is based on the backend plugin manager package, which is a service that scans a configured root directory ( dynamicPlugins.rootDirectory in the app config) for dynamic plugin packages and loads them dynamically. You can use the dynamic plugins that come preinstalled with Red Hat Developer Hub or install external dynamic plugins from a public NPM registry. | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/dynamic_plugins_reference/pr01 |
20.3. Quota Accounting | 20.3. Quota Accounting When a quota is assigned to a consumer or a resource, each action by that consumer or on the resource involving storage, vCPU, or memory results in quota consumption or quota release. Since the quota acts as an upper bound that limits the user's access to resources, the quota calculations may differ from the user's actual current usage. The quota is calculated on the maximum growth potential, not the current usage. Example 20.1. Accounting example A user runs a virtual machine with 1 vCPU and 1024 MB of memory. The action consumes 1 vCPU and 1024 MB of the quota assigned to that user. When the virtual machine is stopped, the 1 vCPU and 1024 MB of RAM are released back to the quota assigned to that user. Run-time quota consumption is accounted for only during the actual run time of the consumer. A user creates a thin-provisioned virtual disk of 10 GB. The actual disk usage may indicate that only 3 GB of that disk are in use. The quota consumption, however, is 10 GB, the maximum growth potential of that disk. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/quota_accounting |
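The following sketch only restates the accounting rule from Example 20.1 in executable form; the numbers are taken from the example above and the script itself is purely illustrative.

# Quota is charged for the provisioned (maximum) size, not for the data actually written
provisioned_gb=10   # size of the thin-provisioned disk
written_gb=3        # data actually stored on the disk
echo "Quota consumed: ${provisioned_gb} GB (actual usage: ${written_gb} GB)"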
Chapter 105. KafkaUserTlsExternalClientAuthentication schema reference | Chapter 105. KafkaUserTlsExternalClientAuthentication schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserTlsExternalClientAuthentication type from KafkaUserTlsClientAuthentication , KafkaUserScramSha512ClientAuthentication . It must have the value tls-external for the type KafkaUserTlsExternalClientAuthentication . Property Property type Description type string Must be tls-external . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkausertlsexternalclientauthentication-reference |
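As a hedged illustration of where this type is used, the sketch below creates a KafkaUser whose client certificates are managed externally. The user name and cluster label are assumptions added to make the snippet concrete; only the type: tls-external value comes from the schema above.

# Sketch: a KafkaUser that uses externally managed TLS client certificates
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-tls-external-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls-external
EOF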
Chapter 40. Compiler and Tools | Chapter 40. Compiler and Tools Shenandoah garbage collector The new low-pause-time Shenandoah garbage collector is now available as a Technology Preview for OpenJDK on the Intel 64, AMD64, and 64-bit ARM architectures. Shenandoah performs concurrent evacuation, which allows users to run with large heaps without long pause times. For more information, see https://wiki.openjdk.java.net/display/shenandoah/Main . (BZ#1400306) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/technology_previews_compiler_and_tools |
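As an illustrative example only, an application could be started with the Shenandoah collector as shown below. The application JAR name is a placeholder, and the exact flags should be verified against the OpenJDK build in use; Technology Preview builds typically gate the collector behind the experimental-options flag.

# Enable Shenandoah for a Java application (illustrative; verify flags for your JDK build)
java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -verbose:gc -jar myapp.jar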
Release notes | Release notes Red Hat Service Interconnect 1.8 Latest information about features and issues in this release | null | https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/release_notes/index |
Chapter 65. SFTP Source | Chapter 65. SFTP Source Receive data from an SFTP Server. 65.1. Configuration Options The following table summarizes the configuration options available for the sftp-source Kamelet: Property Name Description Type Default Example connectionHost * Connection Host Hostname of the SFTP server string connectionPort * Connection Port Port of the FTP server string 22 directoryName * Directory Name The starting directory string password * Password The password to access the SFTP server string username * Username The username to access the SFTP server string idempotent Idempotency Skip already processed files. boolean true passiveMode Passive Mode Sets passive mode connection boolean false recursive Recursive If a directory, will look for files in all the sub-directories as well. boolean false Note Fields marked with an asterisk (*) are mandatory. 65.2. Dependencies At runtime, the sftp-source Kamelet relies upon the presence of the following dependencies: camel:ftp camel:core camel:kamelet 65.3. Usage This section describes how you can use the sftp-source . 65.3.1. Knative Source You can use the sftp-source Kamelet as a Knative source by binding it to a Knative object. sftp-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sftp-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sftp-source properties: connectionHost: "The Connection Host" directoryName: "The Directory Name" password: "The Password" username: "The Username" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 65.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 65.3.1.2. Procedure for using the cluster CLI Save the sftp-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f sftp-source-binding.yaml 65.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind sftp-source -p "source.connectionHost=The Connection Host" -p "source.directoryName=The Directory Name" -p "source.password=The Password" -p "source.username=The Username" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 65.3.2. Kafka Source You can use the sftp-source Kamelet as a Kafka source by binding it to a Kafka topic. sftp-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sftp-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sftp-source properties: connectionHost: "The Connection Host" directoryName: "The Directory Name" password: "The Password" username: "The Username" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 65.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 65.3.2.2. Procedure for using the cluster CLI Save the sftp-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f sftp-source-binding.yaml 65.3.2.3. 
Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind sftp-source -p "source.connectionHost=The Connection Host" -p "source.directoryName=The Directory Name" -p "source.password=The Password" -p "source.username=The Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 65.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/sftp-source.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sftp-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sftp-source properties: connectionHost: \"The Connection Host\" directoryName: \"The Directory Name\" password: \"The Password\" username: \"The Username\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f sftp-source-binding.yaml",
"kamel bind sftp-source -p \"source.connectionHost=The Connection Host\" -p \"source.directoryName=The Directory Name\" -p \"source.password=The Password\" -p \"source.username=The Username\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sftp-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sftp-source properties: connectionHost: \"The Connection Host\" directoryName: \"The Directory Name\" password: \"The Password\" username: \"The Username\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f sftp-source-binding.yaml",
"kamel bind sftp-source -p \"source.connectionHost=The Connection Host\" -p \"source.directoryName=The Directory Name\" -p \"source.password=The Password\" -p \"source.username=The Username\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/sftp-source |
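After applying either binding above, the resources it creates can be inspected directly. This is a sketch only: it assumes the Camel K operator has registered the KameletBinding and Integration CRDs, and that the generated deployment keeps the binding name.

# Confirm the binding exists and watch the integration it generates
oc get kameletbinding sftp-source-binding
oc get integration

# Follow the logs of the generated deployment (name assumed to match the binding)
oc logs -f deployment/sftp-source-binding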
5.2.14. /proc/kcore | 5.2.14. /proc/kcore This file represents the physical memory of the system and is stored in the core file format. Unlike most /proc/ files, kcore displays a size. This value is given in bytes and is equal to the size of the physical memory (RAM) used plus 4 KB. The contents of this file are designed to be examined by a debugger, such as gdb , and are not human-readable. Warning Do not view the /proc/kcore virtual file. Viewing the file scrambles text output on the terminal. If this file is accidentally viewed, press Ctrl + C to stop the process and then type reset to bring back the command-line prompt. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-kcore |
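A safe way to work with the file is shown below; reported sizes differ per machine, and the debugger example assumes the matching kernel debuginfo package is installed, so treat the vmlinux path as an assumption.

# Inspect the reported size without dumping the contents to the terminal
ls -lh /proc/kcore

# Examine the file with a debugger instead of viewing it directly
gdb /usr/lib/debug/lib/modules/$(uname -r)/vmlinux /proc/kcore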
2.5. Considerations for Using Quorum Disk | 2.5. Considerations for Using Quorum Disk Quorum Disk is a disk-based quorum daemon, qdiskd , that provides supplemental heuristics to determine node fitness. With heuristics you can determine factors that are important to the operation of the node in the event of a network partition. For example, in a four-node cluster with a 3:1 split, ordinarily, the three nodes automatically "win" because of the three-to-one majority. Under those circumstances, the one node is fenced. With qdiskd however, you can set up heuristics that allow the one node to win based on access to a critical resource (for example, a critical network path). If your cluster requires additional methods of determining node health, then you should configure qdiskd to meet those needs. Note Configuring qdiskd is not required unless you have special requirements for node health. An example of a special requirement is an "all-but-one" configuration. In an all-but-one configuration, qdiskd is configured to provide enough quorum votes to maintain quorum even though only one node is working. Important Overall, heuristics and other qdiskd parameters for your Red Hat Cluster depend on the site environment and special requirements needed. To understand the use of heuristics and other qdiskd parameters, refer to the qdisk (5) man page. If you require assistance understanding and using qdiskd for your site, contact an authorized Red Hat support representative. If you need to use qdiskd , you should take into account the following considerations: Cluster node votes Each cluster node should have the same number of votes. CMAN membership timeout value The CMAN membership timeout value (the time a node needs to be unresponsive before CMAN considers that node to be dead, and not a member) should be at least two times that of the qdiskd membership timeout value. The reason is because the quorum daemon must detect failed nodes on its own, and can take much longer to do so than CMAN. The default value for CMAN membership timeout is 10 seconds. Other site-specific conditions may affect the relationship between the membership timeout values of CMAN and qdiskd . For assistance with adjusting the CMAN membership timeout value, contact an authorized Red Hat support representative. Fencing To ensure reliable fencing when using qdiskd , use power fencing. While other types of fencing (such as watchdog timers and software-based solutions to reboot a node internally) can be reliable for clusters not configured with qdiskd , they are not reliable for a cluster configured with qdiskd . Maximum nodes A cluster configured with qdiskd supports a maximum of 16 nodes. The reason for the limit is because of scalability; increasing the node count increases the amount of synchronous I/O contention on the shared quorum disk device. Quorum disk device A quorum disk device should be a shared block device with concurrent read/write access by all nodes in a cluster. The minimum size of the block device is 10 Megabytes. Examples of shared block devices that can be used by qdiskd are a multi-port SCSI RAID array, a Fibre Channel RAID SAN, or a RAID-configured iSCSI target. You can create a quorum disk device with mkqdisk , the Cluster Quorum Disk Utility. For information about using the utility refer to the mkqdisk(8) man page. Note Using JBOD as a quorum disk is not recommended. A JBOD cannot provide dependable performance and therefore may not allow a node to write to it quickly enough. 
If a node cannot write to the quorum disk device quickly enough, the node is falsely evicted from the cluster. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-qdisk-considerations-ca |
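As a sketch of the mkqdisk step mentioned above, the device path and label below are placeholders; consult the mkqdisk(8) man page for the authoritative options.

# Initialize a shared block device as a quorum disk with a cluster-visible label (run as root)
mkqdisk -c /dev/sdX -l myquorumdisk

# List the quorum disks visible to this node
mkqdisk -L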
Chapter 80. BuildConfigTemplate schema reference | Chapter 80. BuildConfigTemplate schema reference Used in: KafkaConnectTemplate Property Description metadata Metadata to apply to the PodDisruptionBudgetTemplate resource. MetadataTemplate pullSecret Container Registry Secret with the credentials for pulling the base image. string | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-buildconfigtemplate-reference |
Chapter 20. Integrating with Microsoft Sentinel notifier | Chapter 20. Integrating with Microsoft Sentinel notifier Microsoft Sentinel is a security information and event management (SIEM) solution which acts on Red Hat Advanced Cluster Security for Kubernetes (RHACS) alerts and audit logs. 20.1. Viewing the log analytics to detect threats By creating a Microsoft Sentinel integration, you can view the log analytics to detect threats. Prerequisites You have created a data collection rule and a log analytics workspace on Microsoft Azure. You have configured a service principal with a client secret, client certificate, or managed identity. The service principal or managed identity requires the Monitoring Metrics Publisher over a scope that includes all Sentinel resources. You have created a log analytics schema by using the TimeGenerated and msg fields in JSON format. Important You need to create separate log analytics tables for audit logs and alerts, and both data sources use the same schema. To create a schema, upload the following content to Microsoft Sentinel: Example JSON { "TimeGenerated": "2024-09-03T10:56:58.5010069Z", 1 "msg": { 2 "id": "1abe30d1-fa3a-xxxx-xxxx-781f0a12228a", 3 "policy" : {} } } 1 The timestamp for the alert. 2 Contains the message details. 3 The payload of the message, either alert or audit log. Procedure In the RHACS portal, click Platform Configuration Integrations . Scroll down to the Notifier Integrations section, and then click Microsoft Sentinel . To create a new integration, click New integration . In the Create integration page, provide the following information: Integration name : Specify a name for the integration. Log ingestion endpoint : Enter the data collection endpoint. You can find the endpoint in the Microsoft Azure portal. For more information, see Data collection rules (DCRs) in Azure Monitor (Microsoft Azure documentation). Directory tenant ID : Enter your unique tenant ID within the Microsoft Azure cloud infrastructure. You can find the tenant ID in the Microsoft Azure portal. For more information, see Find tenant name and tenant ID in Azure Active Directory B2C (Microsoft Azure documentation). Application client ID : Enter the client ID which uniquely identifies the specific application registered within your AAD that needs access to resources. You can find the client ID in the Microsoft Entra portal for the service principal you have created. For more information, see Register applications (Microsoft Azure documentation). Choose the appropriate authentication method: If you want to use a secret, enter the secret value. You can find the secret in the Microsoft Azure portal. If you want to use a client certificate, enter the client certificate and private key. You can find the certificate ID and private key in the Microsoft Azure portal. If you want to use an Azure managed identity, select the Use workload identity checkbox. For more information, see The new App registrations experience for Azure Active Directory B2C (Microsoft Azure documentation). Optional: Choose the appropriate method to configure the data collection rule configuration: Select the Enable alert DCR checkbox, if you want to enable the alert data collection rule configuration. To create an alert data collection rule, enter the alert data collection rule stream name and ID. You can find the stream name and ID in the Microsoft Azure portal. Select the Enable audit log DCR checkbox, if you want to enable audit data collection rule configuration. 
To create an audit data collection rule, enter the stream name and ID. You can find the stream name and ID in the Microsoft Azure portal. For more information, see Data collection rules (DCRs) in Azure Monitor (Microsoft Azure documentation). Optional: To test the new integration, click Test . To save the new integration, click Save . Verification In the RHACS portal, click Platform Configuration Integrations . Scroll down to the Notifier Integrations section, and then click Microsoft Sentinel . In the Integrations Microsoft Sentinel page, verify that the new integration has been created. Verify that the messages arrive in the correct log tables in your log analytics workspace. | [
"{ \"TimeGenerated\": \"2024-09-03T10:56:58.5010069Z\", 1 \"msg\": { 2 \"id\": \"1abe30d1-fa3a-xxxx-xxxx-781f0a12228a\", 3 \"policy\" : {} } }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/integrating/integrating-with-microsoft-sentinel-notifier |
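Before uploading the schema sample above, it can be sanity-checked locally. The file name below is a placeholder for wherever the JSON sample is saved; jq is only used to confirm that the two fields Sentinel expects are present.

# Confirm the sample contains the TimeGenerated and msg fields
jq '.TimeGenerated, .msg.id' sentinel-schema-sample.json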
Chapter 34. Preparing your environment for managing IdM using Ansible playbooks | Chapter 34. Preparing your environment for managing IdM using Ansible playbooks As a system administrator managing Identity Management (IdM), when working with Red Hat Ansible Engine, it is good practice to do the following: Create a subdirectory dedicated to Ansible playbooks in your home directory, for example ~/MyPlaybooks . Copy and adapt sample Ansible playbooks from the /usr/share/doc/ansible-freeipa/* and /usr/share/doc/rhel-system-roles/* directories and subdirectories into your ~/MyPlaybooks directory. Include your inventory file in your ~/MyPlaybooks directory. Using this practice, you can find all your playbooks in one place and you can run your playbooks without invoking root privileges. Note You only need root privileges on the managed nodes to execute the ipaserver , ipareplica , ipaclient and ipabackup ansible-freeipa roles. These roles require privileged access to directories and the dnf software package manager. Follow this procedure to create the ~/MyPlaybooks directory and configure it so that you can use it to store and run Ansible playbooks. Prerequisites You have installed an IdM server on your managed nodes, server.idm.example.com and replica.idm.example.com . You have configured DNS and networking so you can log in to the managed nodes, server.idm.example.com and replica.idm.example.com , directly from the control node. You know the IdM admin password. Procedure Create a directory for your Ansible configuration and playbooks in your home directory: Change into the ~/MyPlaybooks/ directory: Create the ~/MyPlaybooks/ansible.cfg file with the following content: Create the ~/MyPlaybooks/inventory file with the following content: This configuration defines two host groups, eu and us , for hosts in these locations. Additionally, this configuration defines the ipaserver host group, which contains all hosts from the eu and us groups. Optional: Create an SSH public and private key. To simplify access in your test environment, do not set a password on the private key: Copy the SSH public key to the IdM admin account on each managed node: These commands require that you enter the IdM admin password. Additional resources Installing an Identity Management server using an Ansible playbook How to build your inventory | [
"mkdir ~/MyPlaybooks/",
"cd ~/MyPlaybooks",
"[defaults] inventory = /home/ your_username /MyPlaybooks/inventory [privilege_escalation] become=True",
"[eu] server.idm.example.com [us] replica.idm.example.com [ipaserver:children] eu us",
"ssh-keygen",
"ssh-copy-id [email protected] ssh-copy-id [email protected]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/preparing-your-environment-for-managing-idm-using-ansible-playbooks_managing-users-groups-hosts |
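With this layout in place, playbooks copied into ~/MyPlaybooks run without extra flags, because ansible.cfg already points at the inventory and enables privilege escalation. In the sketch below, install-server.yml is a placeholder name for a playbook copied from /usr/share/doc/ansible-freeipa/.

# Run a playbook from the directory that holds ansible.cfg and the inventory
cd ~/MyPlaybooks
ansible-playbook install-server.yml

# Limit a run to one of the host groups defined in the inventory
ansible-playbook --limit eu install-server.yml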
Appendix A. Reference: Settings in Administration Portal and VM Portal Windows | Appendix A. Reference: Settings in Administration Portal and VM Portal Windows A.1. Explanation of Settings in the New Virtual Machine and Edit Virtual Machine Windows A.1.1. Virtual Machine General Settings Explained The following table details the options available on the General tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.1. Virtual Machine: General Settings Field Name Description Power cycle required? Cluster The name of the host cluster to which the virtual machine is attached. Virtual machines are hosted on any physical machine in that cluster in accordance with policy rules. Yes. Cross-cluster migration is for emergency use only. Moving clusters requires the virtual machine to be down. Template The template on which the virtual machine is based. This field is set to Blank by default, which allows you to create a virtual machine on which an operating system has not yet been installed. Templates are displayed as Name | Sub-version name (Sub-version number) . Each new version is displayed with a number in brackets that indicates the relative order of the version, with a higher number indicating a more recent version. The version name is displayed as base version if it is the root template of the template version chain. When the virtual machine is stateless, there is an option to select the latest version of the template. This option means that anytime a new version of this template is created, the virtual machine is automatically recreated on restart based on the latest template. Not applicable. This setting is for provisioning new virtual machines only. Operating System The operating system. Valid values include a range of Red Hat Enterprise Linux and Windows variants. Yes. Potentially changes the virtual hardware. Instance Type The instance type on which the virtual machine's hardware configuration can be based. This field is set to Custom by default, which means the virtual machine is not connected to an instance type. The other options available from this drop down menu are Large , Medium , Small , Tiny , XLarge , and any custom instance types that the Administrator has created. Other settings that have a chain link icon to them are pre-filled by the selected instance type. If one of these values is changed, the virtual machine will be detached from the instance type and the chain icon will appear broken. However, if the changed setting is restored to its original value, the virtual machine will be reattached to the instance type and the links in the chain icon will rejoin. Yes. Optimized for The type of system for which the virtual machine is to be optimized. There are three options: Server , Desktop , and High Performance ; by default, the field is set to Server . Virtual machines optimized to act as servers have no sound card, use a cloned disk image, and are not stateless. Virtual machines optimized to act as desktop machines do have a sound card, use an image (thin allocation), and are stateless. Virtual machines optimized for high performance have a number of configuration changes. See Section 4.10, "Configuring High Performance Virtual Machines, Templates, and Pools" . Yes. Name The name of the virtual machine. The name must be a unique name within the data center and must not contain any spaces, and must contain at least one character from A-Z or 0-9. The maximum length of a virtual machine name is 255 characters. 
The name can be reused in different data centers in the environment. Yes. VM ID The virtual machine ID. The virtual machine's creator can set a custom ID for that virtual machine. The custom ID must contain only numbers, in the format, 00000000-0000-0000-0000-00000000 . If no ID is specified during creation a UUID will be automatically assigned. For both custom and automatically-generated IDs, changes are not possible after virtual machine creation. Yes. Description A meaningful description of the new virtual machine. No. Comment A field for adding plain text human-readable comments regarding the virtual machine. No. Affinity Labels Add or remove a selected Affinity Label . No. Stateless Select this check box to run the virtual machine in stateless mode. This mode is used primarily for desktop virtual machines. Running a stateless desktop or server creates a new COW layer on the virtual machine hard disk image where new and changed data is stored. Shutting down the stateless virtual machine deletes the new COW layer which includes all data and configuration changes, and returns the virtual machine to its original state. Stateless virtual machines are useful when creating machines that need to be used for a short time, or by temporary staff. Not applicable. Start in Pause Mode Select this check box to always start the virtual machine in pause mode. This option is suitable for virtual machines which require a long time to establish a SPICE connection; for example, virtual machines in remote locations. Not applicable. Delete Protection Select this check box to make it impossible to delete the virtual machine. It is only possible to delete the virtual machine if this check box is not selected. No. Instance Images Click Attach to attach a floating disk to the virtual machine, or click Create to add a new virtual disk. Use the plus and minus buttons to add or remove additional virtual disks. Click Edit to change the configuration of a virtual disk that has already been attached or created. No. Instantiate VM network interfaces by picking a vNIC profile. Add a network interface to the virtual machine by selecting a vNIC profile from the nic1 drop-down list. Use the plus and minus buttons to add or remove additional network interfaces. No. A.1.2. Virtual Machine System Settings Explained CPU Considerations For non-CPU-intensive workloads , you can run virtual machines with a total number of processor cores greater than the number of cores in the host. Doing so enables the following: You can run a greater number of virtual machines, which reduces hardware requirements. You can configure virtual machines with CPU topologies that are otherwise not possible, such as when the number of virtual cores is between the number of host cores and the number of host threads. For best performance, and especially for CPU-intensive workloads , you should use the same topology in the virtual machine as in the host, so the host and the virtual machine expect the same cache usage. When the host has hyperthreading enabled, QEMU treats the host's hyperthreads as cores, so the virtual machine is not aware that it is running on a single core with multiple threads. This behavior might impact the performance of a virtual machine, because a virtual core that actually corresponds to a hyperthread in the host core might share a single cache with another hyperthread in the same host core, while the virtual machine treats it as a separate core. 
The following table details the options available on the System tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.2. Virtual Machine: System Settings Field Name Description Power cycle required? Memory Size The amount of memory assigned to the virtual machine. When allocating memory, consider the processing and storage needs of the applications that are intended to run on the virtual machine. If OS supports hotplugging, no. Otherwise, yes. Maximum Memory The maximum amount of memory that can be assigned to the virtual machine. Maximum guest memory is also constrained by the selected guest architecture and the cluster compatibility level. If OS supports hotplugging, no. Otherwise, yes. Total Virtual CPUs The processing power allocated to the virtual machine as CPU Cores. For high performance, do not assign more cores to a virtual machine than are present on the physical host. If OS supports hotplugging, no. Otherwise, yes. Virtual Sockets The number of CPU sockets for the virtual machine. Do not assign more sockets to a virtual machine than are present on the physical host. If OS supports hotplugging, no. Otherwise, yes. Cores per Virtual Socket The number of cores assigned to each virtual socket. If OS supports hotplugging, no. Otherwise, yes. Threads per Core The number of threads assigned to each core. Increasing the value enables simultaneous multi-threading (SMT). IBM POWER8 supports up to 8 threads per core. For x86 and x86_64 (Intel and AMD) CPU types, the recommended value is 1, unless you want to replicate the exact host topology, which you can do using CPU pinning. For more information, see Section 4.10.2.2, "Pinning CPUs" . If OS supports hotplugging, no. Otherwise, yes. Custom Emulated Machine This option allows you to specify the machine type. If changed, the virtual machine will only run on hosts that support this machine type. Defaults to the cluster's default machine type. Yes. Custom CPU Type This option allows you to specify a CPU type. If changed, the virtual machine will only run on hosts that support this CPU type. Defaults to the cluster's default CPU type. Yes. Hardware Clock Time Offset This option sets the time zone offset of the guest hardware clock. For Windows, this should correspond to the time zone set in the guest. Most default Linux installations expect the hardware clock to be GMT+00:00. Yes. Custom Compatibility Version The compatibility version determines which features are supported by the cluster, as well as, the values of some properties and the emulated machine type. By default, the virtual machine is configured to run in the same compatibility mode as the cluster as the default is inherited from the cluster. In some situations the default compatibility mode needs to be changed. An example of this is if the cluster has been updated to a later compatibility version but the virtual machines have not been restarted. These virtual machines can be set to use a custom compatibility mode that is older than that of the cluster. See Changing the Cluster Compatibility Version in the Administration Guide for more information. Yes. Provide custom serial number policy This check box allows you to specify a serial number for the virtual machine. Select either: Host ID : Sets the host's UUID as the virtual machine's serial number. Vm ID : Sets the virtual machine's UUID as its serial number. Custom serial number : Allows you to specify a custom serial number. Yes. A.1.3. 
Virtual Machine Initial Run Settings Explained The following table details the options available on the Initial Run tab of the New Virtual Machine and Edit Virtual Machine windows. The settings in this table are only visible if the Use Cloud-Init/Sysprep check box is selected, and certain options are only visible when either a Linux-based or Windows-based option has been selected in the Operating System list in the General tab, as outlined below. Note This table does not include information on whether a power cycle is required because the settings apply to the virtual machine's initial run; the virtual machine is not running when you configure these settings. Table A.3. Virtual Machine: Initial Run Settings Field Name Operating System Description Use Cloud-Init/Sysprep Linux, Windows This check box toggles whether Cloud-Init or Sysprep will be used to initialize the virtual machine. VM Hostname Linux, Windows The host name of the virtual machine. Domain Windows The Active Directory domain to which the virtual machine belongs. Organization Name Windows The name of the organization to which the virtual machine belongs. This option corresponds to the text field for setting the organization name displayed when a machine running Windows is started for the first time. Active Directory OU Windows The organizational unit in the Active Directory domain to which the virtual machine belongs. Configure Time Zone Linux, Windows The time zone for the virtual machine. Select this check box and select a time zone from the Time Zone list. Admin Password Windows The administrative user password for the virtual machine. Click the disclosure arrow to display the settings for this option. Use already configured password : This check box is automatically selected after you specify an initial administrative user password. You must clear this check box to enable the Admin Password and Verify Admin Password fields and specify a new password. Admin Password : The administrative user password for the virtual machine. Enter the password in this text field and the Verify Admin Password text field to verify the password. Authentication Linux The authentication details for the virtual machine. Click the disclosure arrow to display the settings for this option. Use already configured password : This check box is automatically selected after you specify an initial root password. You must clear this check box to enable the Password and Verify Password fields and specify a new password. Password : The root password for the virtual machine. Enter the password in this text field and the Verify Password text field to verify the password. SSH Authorized Keys : SSH keys to be added to the authorized keys file of the virtual machine. You can specify multiple SSH keys by entering each SSH key on a new line. Regenerate SSH Keys : Regenerates SSH keys for the virtual machine. Custom Locale Windows Custom locale options for the virtual machine. Locales must be in a format such as en-US . Click the disclosure arrow to display the settings for this option. Input Locale : The locale for user input. UI Language : The language used for user interface elements such as buttons and menus. System Locale : The locale for the overall system. User Locale : The locale for users. Networks Linux Network-related settings for the virtual machine. Click the disclosure arrow to display the settings for this option. DNS Servers : The DNS servers to be used by the virtual machine. 
DNS Search Domains : The DNS search domains to be used by the virtual machine. Network : Configures network interfaces for the virtual machine. Select this check box and click + or - to add or remove network interfaces to or from the virtual machine. When you click + , a set of fields becomes visible that can specify whether to use DHCP, and configure an IP address, netmask, and gateway, and specify whether the network interface will start on boot. Custom Script Linux Custom scripts that will be run on the virtual machine when it starts. The scripts entered in this field are custom YAML sections that are added to those produced by the Manager, and allow you to automate tasks such as creating users and files, configuring yum repositories and running commands. For more information on the format of scripts that can be entered in this field, see the Custom Script documentation. Sysprep Windows A custom Sysprep definition. The definition must be in the format of a complete unattended installation answer file. You can copy and paste the default answer files in the /usr/share/ovirt-engine/conf/sysprep/ directory on the machine on which the Red Hat Virtualization Manager is installed and alter the fields as required. See Chapter 7, Templates for more information. A.1.4. Virtual Machine Console Settings Explained The following table details the options available on the Console tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.4. Virtual Machine: Console Settings Field Name Description Power cycle required? Graphical Console Section A group of settings. Yes. Headless Mode Select this check box if you do not a require a graphical console for the virtual machine. When selected, all other fields in the Graphical Console section are disabled. In the VM Portal, the Console icon in the virtual machine's details view is also disabled. Important See Section 4.9, "Configuring Headless Virtual Machines" for more details and prerequisites for using headless mode. Yes. Video Type Defines the graphics device. QXL is the default and supports both graphic protocols. VGA supports only the VNC protocol. Yes. Graphics protocol Defines which display protocol to use. SPICE is the default protocol. VNC is an alternative option. To allow both protocols select SPICE + VNC . Yes. VNC Keyboard Layout Defines the keyboard layout for the virtual machine. This option is only available when using the VNC protocol. Yes. USB Support Defines SPICE USB redirection. This option is only available for virtual machines using the SPICE protocol. Select either: Disabled - USB controller devices are added according to the devices.usb.controller value in the osinfo-defaults.properties configuration file. The default for all x86 and x86_64 operating systems is piix3-uhci . For ppc64 systems, the default is nec-xhci . Enabled - Enables native KVM/SPICE USB redirection for Linux and Windows virtual machines. Virtual machines do not require any in-guest agents or drivers for native USB. Yes. Console Disconnect Action Defines what happens when the console is disconnected. This is only relevant with SPICE and VNC console connections. This setting can be changed while the virtual machine is running but will not take effect until a new console connection is established. Select either: No action - No action is taken. Lock screen - This is the default option. For all Linux machines and for Windows desktops this locks the currently active user session. For Windows servers, this locks the desktop and the currently active user. 
Logout user - For all Linux machines and Windows desktops, this logs out the currently active user session. For Windows servers, the desktop and the currently active user are logged out. Shutdown virtual machine - Initiates a graceful virtual machine shutdown. Reboot virtual machine - Initiates a graceful virtual machine reboot. No. Monitors The number of monitors for the virtual machine. This option is only available for virtual desktops using the SPICE display protocol. You can choose 1 , 2 or 4 . Note that multiple monitors are not supported for Windows 8 and Windows Server 2012 virtual machines. Yes. Smartcard Enabled Smart cards are an external hardware security feature, most commonly seen in credit cards, but also used by many businesses as authentication tokens. Smart cards can be used to protect Red Hat Virtualization virtual machines. Tick or untick the check box to activate and deactivate Smart card authentication for individual virtual machines. Yes. Single Sign On method Enabling Single Sign On allows users to sign into the guest operating system when connecting to a virtual machine from the VM Portal using the Guest Agent. Disable Single Sign On - Select this option if you do not want the Guest Agent to attempt to sign into the virtual machine. Use Guest Agent - Enables Single Sign On to allow the Guest Agent to sign you into the virtual machine. If you select Use Guest Agent, no. Otherwise, yes. Disable strict user checking Click the Advanced Parameters arrow and select the check box to use this option. With this option selected, the virtual machine does not need to be rebooted when a different user connects to it. By default, strict checking is enabled so that only one user can connect to the console of a virtual machine. No other user is able to open a console to the same virtual machine until it has been rebooted. The exception is that a SuperUser can connect at any time and replace a existing connection. When a SuperUser has connected, no normal user can connect again until the virtual machine is rebooted. Disable strict checking with caution, because you can expose the user's session to the new user. No. Soundcard Enabled A sound card device is not necessary for all virtual machine use cases. If it is for yours, enable a sound card here. Yes. Enable SPICE file transfer Defines whether a user is able to drag and drop files from an external host into the virtual machine's SPICE console. This option is only available for virtual machines using the SPICE protocol. This check box is selected by default. No. Enable SPICE clipboard copy and paste Defines whether a user is able to copy and paste content from an external host into the virtual machine's SPICE console. This option is only available for virtual machines using the SPICE protocol. This check box is selected by default. No. Serial Console Section A group of settings. Enable VirtIO serial console The VirtIO serial console is emulated through VirtIO channels, using SSH and key pairs, and allows you to access a virtual machine's serial console directly from a client machine's command line, instead of opening a console from the Administration Portal or the VM Portal. The serial console requires direct access to the Manager, since the Manager acts as a proxy for the connection, provides information about virtual machine placement, and stores the authentication keys. Select the check box to enable the VirtIO console on the virtual machine. Requires a firewall rule. See Opening a Serial Console to a Virtual Machine . Yes. 
A.1.5. Virtual Machine Host Settings Explained The following table details the options available on the Host tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.5. Virtual Machine: Host Settings Field Name Sub-element Description Power cycle required? Start Running On Defines the preferred host on which the virtual machine is to run. Select either: Any Host in Cluster - The virtual machine can start and run on any available host in the cluster. Specific Host(s) - The virtual machine will start running on a particular host in the cluster. However, the Manager or an administrator can migrate the virtual machine to a different host in the cluster depending on the migration and high-availability settings of the virtual machine. Select the specific host or group of hosts from the list of available hosts. No. The virtual machine can migrate to that host while running. Migration Options Migration mode Defines options to run and migrate the virtual machine. If the options here are not used, the virtual machine will run or migrate according to its cluster's policy. Allow manual and automatic migration - The virtual machine can be automatically migrated from one host to another in accordance with the status of the environment, or manually by an administrator. Allow manual migration only - The virtual machine can only be migrated from one host to another manually by an administrator. Do not allow migration - The virtual machine cannot be migrated, either automatically or manually. No. Use custom migration policy Defines the migration convergence policy. If the check box is left unselected, the host determines the policy. Legacy - Legacy behavior of 3.6 version. Overrides in vdsm.conf are still applied. The guest agent hook mechanism is disabled. Minimal downtime - Allows the virtual machine to migrate in typical situations. Virtual machines should not experience any significant downtime. The migration will be aborted if virtual machine migration does not converge after a long time (dependent on QEMU iterations, with a maximum of 500 milliseconds). The guest agent hook mechanism is enabled. Suspend workload if needed - Allows the virtual machine to migrate in most situations, including when the virtual machine is running a heavy workload. Because of this, virtual machines may experience a more significant downtime than with some other settings. The migration may still be aborted for extreme workloads. The guest agent hook mechanism is enabled. No. Use custom migration downtime This check box allows you to specify the maximum number of milliseconds the virtual machine can be down during live migration. Configure different maximum downtimes for each virtual machine according to its workload and SLA requirements. Enter 0 to use the VDSM default value. No. Auto Converge migrations Only activated with the Legacy migration policy. Allows you to set whether auto-convergence is used during live migration of the virtual machine. Large virtual machines with high workloads can dirty memory more quickly than the transfer rate achieved during live migration, and prevent the migration from converging. Auto-convergence capabilities in QEMU allow you to force convergence of virtual machine migrations. QEMU automatically detects a lack of convergence and triggers a throttle-down of the vCPUs on the virtual machine. Auto-convergence is disabled globally by default. Select Inherit from cluster setting to use the auto-convergence setting that is set at the cluster level. 
This option is selected by default. Select Auto Converge to override the cluster setting or global setting and allow auto-convergence for the virtual machine. Select Don't Auto Converge to override the cluster setting or global setting and prevent auto-convergence for the virtual machine. No. Enable migration compression Only activated with the Legacy migration policy. The option allows you to set whether migration compression is used during live migration of the virtual machine. This feature uses Xor Binary Zero Run-Length-Encoding to reduce virtual machine downtime and total live migration time for virtual machines running memory write-intensive workloads or for any application with a sparse memory update pattern. Migration compression is disabled globally by default. Select Inherit from cluster setting to use the compression setting that is set at the cluster level. This option is selected by default. Select Compress to override the cluster setting or global setting and allow compression for the virtual machine. Select Don't compress to override the cluster setting or global setting and prevent compression for the virtual machine. No. Pass-Through Host CPU This check box allows virtual machines to use the host's CPU flags. When selected, Migration Options is set to Allow manual migration only . Yes. Configure NUMA NUMA Node Count The number of virtual NUMA nodes to assign to the virtual machine. If the Tune Mode is Preferred , this value must be set to 1 . Yes. Tune Mode The method used to allocate memory. Strict : Memory allocation will fail if the memory cannot be allocated on the target node. Preferred : Memory is allocated from a single preferred node. If sufficient memory is not available, memory can be allocated from other nodes. Interleave : Memory is allocated across nodes in a round-robin algorithm. Yes. NUMA Pinning Opens the NUMA Topology window. This window shows the host's total CPUs, memory, and NUMA nodes, and the virtual machine's virtual NUMA nodes. Pin virtual NUMA nodes to host NUMA nodes by clicking and dragging each vNUMA from the box on the right to a NUMA node on the left. If you define NUMA pinning, Migration Options is set to Allow manual migration only . Yes. A.1.6. Virtual Machine High Availability Settings Explained The following table details the options available on the High Availability tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.6. Virtual Machine: High Availability Settings Field Name Description Power cycle required? Highly Available Select this check box if the virtual machine is to be highly available. For example, in cases of host maintenance, all virtual machines are automatically live migrated to another host. If the host crashes and is in a non-responsive state, only virtual machines with high availability are restarted on another host. If the host is manually shut down by the system administrator, the virtual machine is not automatically live migrated to another host. Note that this option is unavailable for virtual machines defined as Server or Desktop if the Migration Options setting in the Hosts tab is set to Do not allow migration . For a virtual machine to be highly available, it must be possible for the Manager to migrate the virtual machine to other available hosts as necessary. However, for virtual machines defined as High Performance , you can define high availability regardless of the Migration Options setting. Yes. 
Target Storage Domain for VM Lease Select the storage domain to hold a virtual machine lease, or select No VM Lease to disable the functionality. When a storage domain is selected, it will hold a virtual machine lease on a special volume that allows the virtual machine to be started on another host if the original host loses power or becomes unresponsive. This functionality is only available on storage domain V4 or later. Note If you define a lease, the only Resume Behavior available is KILL. Yes. Resume Behavior Defines the desired behavior of a virtual machine that is paused due to storage I/O errors, once a connection with the storage is reestablished. You can define the desired resume behavior even if the virtual machine is not highly available. The following options are available: AUTO_RESUME - The virtual machine is automatically resumed, without requiring user intervention. This is recommended for virtual machines that are not highly available and that do not require user intervention after being in the paused state. LEAVE_PAUSED - The virtual machine remains in pause mode until it is manually resumed or restarted. KILL - The virtual machine is automatically resumed if the I/O error is remedied within 80 seconds. However, if more than 80 seconds pass, the virtual machine is ungracefully shut down. This is recommended for highly available virtual machines, to allow the Manager to restart them on another host that is not experiencing the storage I/O error. KILL is the only option available when using virtual machine leases. No. Priority for Run/Migration queue Sets the priority level for the virtual machine to be migrated or restarted on another host. No. Watchdog Allows users to attach a watchdog card to a virtual machine. A watchdog is a timer that is used to automatically detect and recover from failures. Once set, a watchdog timer continually counts down to zero while the system is in operation, and is periodically restarted by the system to prevent it from reaching zero. If the timer reaches zero, it signifies that the system has been unable to reset the timer and is therefore experiencing a failure. Corrective actions are then taken to address the failure. This functionality is especially useful for servers that demand high availability. Watchdog Model : The model of watchdog card to assign to the virtual machine. At current, the only supported model is i6300esb . Watchdog Action : The action to take if the watchdog timer reaches zero. The following actions are available: none - No action is taken. However, the watchdog event is recorded in the audit log. reset - The virtual machine is reset and the Manager is notified of the reset action. poweroff - The virtual machine is immediately shut down. dump - A dump is performed and the virtual machine is paused. pause - The virtual machine is paused, and can be resumed by users. Yes. A.1.7. Virtual Machine Resource Allocation Settings Explained The following table details the options available on the Resource Allocation tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.7. Virtual Machine: Resource Allocation Settings Field Name Sub-element Description Power cycle required? CPU Allocation CPU Profile The CPU profile assigned to the virtual machine. CPU profiles define the maximum amount of processing capability a virtual machine can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. 
CPU profiles are defined on the cluster level based on quality of service entries created for data centers. No. CPU Shares Allows users to set the level of CPU resources a virtual machine can demand relative to other virtual machines. Low - 512 Medium - 1024 High - 2048 Custom - A custom level of CPU shares defined by the user. No. CPU Pinning topology Enables the virtual machine's virtual CPU (vCPU) to run on a specific physical CPU (pCPU) in a specific host. The syntax of CPU pinning is v#p[_v#p] , for example: 0#0 - Pins vCPU 0 to pCPU 0. 0#0_1#3 - Pins vCPU 0 to pCPU 0, and pins vCPU 1 to pCPU 3. 1#1-4,^2 - Pins vCPU 1 to one of the pCPUs in the range of 1 to 4, excluding pCPU 2. In order to pin a virtual machine to a host, you must also select the following on the Host tab: Start Running On: Specific Pass-Through Host CPU If CPU pinning is set and you change Start Running On: Specific a CPU pinning topology will be lost window appears when you click OK . When defined, Migration Options in the Hosts tab is set to Allow manual migration only . Yes. Memory Allocation Physical Memory Guaranteed The amount of physical memory guaranteed for this virtual machine. Should be any number between 0 and the defined memory for this virtual machine. If lowered, yes. Otherwise, no. Memory Balloon Device Enabled Enables the memory balloon device for this virtual machine. Enable this setting to allow memory overcommitment in a cluster. Enable this setting for applications that allocate large amounts of memory suddenly but set the guaranteed memory to the same value as the defined memory.Use ballooning for applications and loads that slowly consume memory, occasionally release memory, or stay dormant for long periods of time, such as virtual desktops. See Optimization Settings Explained in the Administration Guide for more information. Yes. IO Threads IO Threads Enabled Enables IO threads. Select this check box to improve the speed of disks that have a VirtIO interface by pinning them to a thread separate from the virtual machine's other functions. Improved disk performance increases a virtual machine's overall performance. Disks with VirtIO interfaces are pinned to an IO thread using a round-robin algorithm. Yes. Queues Multi Queues Enabled Enables multiple queues. This check box is selected by default. It creates up to four queues per vNIC, depending on how many vCPUs are available. It is possible to define a different number of queues per vNIC by creating a custom property as follows: engine-config -s "CustomDeviceProperties={type=interface;prop={ other-nic-properties ;queues=[1-9][0-9]*}}" where other-nic-properties is a semicolon-separated list of pre-existing NIC custom properties. Yes. Storage Allocation The Storage Allocation option is only available when the virtual machine is created from a template. Not applicable. Thin Provides optimized usage of storage capacity. Disk space is allocated only as it is required. When selected, the format of the disks will be marked as QCOW2 and you will not be able to change it. Not applicable. Clone Optimized for the speed of guest read and write operations. All disk space requested in the template is allocated at the time of the clone operation. Possible disk formats are QCOW2 or Raw . Not applicable. VirtIO-SCSI Enabled Allows users to enable or disable the use of VirtIO-SCSI on the virtual machines. Not applicable. Disk Allocation The Disk Allocation option is only available when you are creating a virtual machine from a template. Not applicable. 
Alias An alias for the virtual disk. By default, the alias is set to the same value as that of the template. Not applicable. Virtual Size The total amount of disk space that the virtual machine based on the template can use. This value cannot be edited, and is provided for reference only. Not applicable. Format The format of the virtual disk. The available options are QCOW2 and Raw . When Storage Allocation is Thin , the disk format is QCOW2 . When Storage Allocation is Clone , select QCOW2 or Raw . Not applicable. Target The storage domain on which the virtual disk is stored. By default, the storage domain is set to the same value as that of the template. Not applicable. Disk Profile The disk profile to assign to the virtual disk. Disk profiles are created based on storage profiles defined in the data centers. For more information, see Creating a Disk Profile . Not applicable. A.1.8. Virtual Machine Boot Options Settings Explained The following table details the options available on the Boot Options tab of the New Virtual Machine and Edit Virtual Machine windows Table A.8. Virtual Machine: Boot Options Settings Field Name Description Power cycle required? First Device After installing a new virtual machine, the new virtual machine must go into Boot mode before powering up. Select the first device that the virtual machine must try to boot: Hard Disk CD-ROM Network (PXE) Yes. Second Device Select the second device for the virtual machine to use to boot if the first device is not available. The first device selected in the option does not appear in the options. Yes. Attach CD If you have selected CD-ROM as a boot device, tick this check box and select a CD-ROM image from the drop-down menu. The images must be available in the ISO domain. Yes. Enable menu to select boot device Enables a menu to select the boot device. After the virtual machine starts and connects to the console, but before the virtual machine starts booting, a menu displays that allows you to select the boot device. This option should be enabled before the initial boot to allow you to select the required installation media. Yes. A.1.9. Virtual Machine Random Generator Settings Explained The following table details the options available on the Random Generator tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.9. Virtual Machine: Random Generator Settings Field Name Description Power cycle required? Random Generator enabled Selecting this check box enables a paravirtualized Random Number Generator PCI device (virtio-rng). This device allows entropy to be passed from the host to the virtual machine in order to generate a more sophisticated random number. Note that this check box can only be selected if the RNG device exists on the host and is enabled in the host's cluster. Yes. Period duration (ms) Specifies the duration of the RNG's "full cycle" or "full period" in milliseconds. If omitted, the libvirt default of 1000 milliseconds (1 second) is used. If this field is filled, Bytes per period must be filled also. Yes. Bytes per period Specifies how many bytes are permitted to be consumed per period. Yes. Device source: The source of the random number generator. This is automatically selected depending on the source supported by the host's cluster. /dev/urandom source - The Linux-provided random number generator. /dev/hwrng source - An external hardware generator. Yes. A.1.10. 
Virtual Machine Custom Properties Settings Explained The following table details the options available on the Custom Properties tab of the New Virtual Machine and Edit Virtual Machine windows. Table A.10. Virtual Machine Custom Properties Settings Field Name Description Recommendations and Limitations Power cycle required? sndbuf Enter the size of the buffer for sending the virtual machine's outgoing data over the socket. Default value is 0. - Yes hugepages Enter the huge page size in KB. Set the huge page size to the largest size supported by the pinned host. The recommended size for x86_64 is 1 GB. The virtual machine's huge page size must be the same size as the pinned host's huge page size. The virtual machine's memory size must fit into the selected size of the pinned host's free huge pages. The NUMA node size must be a multiple of the huge page's selected size. Yes sap_agent Enables SAP monitoring on the virtual machine. Set to true or false . - Yes vhost Disables vhost-net, which is the kernel-based virtio network driver on virtual network interface cards attached to the virtual machine. To disable vhost, the format for this property is LogicalNetworkName : false . This will explicitly start the virtual machine without the vhost-net setting on the virtual NIC attached to LogicalNetworkName . vhost-net provides better performance than virtio-net, and if it is present, it is enabled on all virtual machine NICs by default. Disabling this property makes it easier to isolate and diagnose performance issues, or to debug vhost-net errors; for example, if migration fails for virtual machines on which vhost does not exist. Yes mdev_type Enter the name of the mediated device, my_GPU, supported by the host's kernel to enable the host to work with the device. - viodiskcache Caching mode for the virtio disk. writethrough writes data to the cache and the disk in parallel, writeback does not copy modifications from the cache to the disk, and none disables caching. See https://access.redhat.com/solutions/2361311 for more information about the limitations of the viodiskcache custom property. In order to ensure data integrity in the event of a fault in storage, in the network, or in a host during migration, do not migrate virtual machines with viodiskcache enabled, unless virtual machine clustering or application-level clustering is also enabled. Yes Warning Increasing the value of the sndbuf custom property results in increased occurrences of communication failure between hosts and unresponsive virtual machines. A.1.11. Virtual Machine Icon Settings Explained You can add custom icons to virtual machines and templates. Custom icons can help to differentiate virtual machines in the VM Portal. The following table details the options available on the Icon tab of the New Virtual Machine and Edit Virtual Machine windows. Note This table does not include information on whether a power cycle is required because these settings apply to the virtual machine's appearance in the Administration portal , not to its configuration. Table A.11. Virtual Machine: Icon Settings Button Name Description Upload Click this button to select a custom image to use as the virtual machine's icon. The following limitations apply: Supported formats: jpg, png, gif Maximum size: 24 KB Maximum dimensions: 150px width, 120px height Power cycle required? Use default A.1.12. 
Virtual Machine Foreman/Satellite Settings Explained The following table details the options available on the Foreman/Satellite tab of the New Virtual Machine and Edit Virtual Machine windows Table A.12. Virtual Machine:Foreman/Satellite Settings Field Name Description Power cycle required? Provider If the virtual machine is running Red Hat Enterprise Linux and the system is configured to work with a Satellite server, select the name of the Satellite from the list. This enables you to use Satellite's content management feature to display the relevant Errata for this virtual machine. See Section 4.8, "Configuring Red Hat Satellite Errata Management for a Virtual Machine" for more details. Yes. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/appe-Reference_Settings_in_Administration_Portal_and_User_Portal_Windows |
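As an informal illustration of the custom properties described in the Custom Properties table above, the following property = value pairs show the general form of the values; the logical network name ovirtmgmt is only a placeholder, and 1048576 KB corresponds to the recommended 1 GB huge page size for x86_64:
hugepages = 1048576
sap_agent = true
viodiskcache = writethrough
vhost = ovirtmgmt:false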
A.2. PCP Deployment | A.2. PCP Deployment To monitor an entire cluster, the recommended approach is to install and configure PCP so that the GFS2 PMDA is enabled and loaded on each node of the cluster along with any other PCP services. You will then be able to monitor nodes either locally or remotely on a machine that has PCP installed with the corresponding PMDAs loaded in monitor mode. You may also install the optional pcp-gui package to allow graphical representation of trace data through the pmchart tool. For additional information, see the pcp-doc package, which is installed to /usr/share/doc/pcp-doc by default. PCP also provides a man page for every tool. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/s1-pcpdeployment
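A minimal sketch of the deployment described above, assuming the standard PCP package names and PMDA layout (package names and paths can differ between releases):
yum install pcp pcp-gui pcp-doc
systemctl start pmcd && systemctl enable pmcd
cd /var/lib/pcp/pmdas/gfs2 && ./Install    # enable and load the GFS2 PMDA on each cluster node
pmchart                                    # graphical representation of trace data, from the optional pcp-gui package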
13.2.14. Configuring Domains: Active Directory as an LDAP Provider (Alternative) | 13.2.14. Configuring Domains: Active Directory as an LDAP Provider (Alternative) While Active Directory can be configured as a type-specific identity provider, it can also be configured as a pure LDAP provider with a Kerberos authentication provider. Procedure 13.7. Configuring Active Directory as an LDAP Provider It is recommended that SSSD connect to the Active Directory server using SASL, which means that the local host must have a service keytab for the Windows domain on the Linux host. This keytab can be created using Samba. Configure the /etc/krb5.conf file to use the Active Directory realm. Set the Samba configuration file, /etc/samba/smb.conf , to point to the Windows Kerberos realm. To initialize Kerberos, type the following command as root : Then, run the net ads command to log in as an administrator principal. This administrator account must have sufficient rights to add a machine to the Windows domain, but it does not require domain administrator privileges. Run net ads again to add the host machine to the domain. This can be done with the host principal ( host/FQDN ) or, optionally, with the NFS service ( nfs/FQDN ). Make sure that the Services for Unix package is installed on the Windows server. Set up the Windows domain which will be used with SSSD. On the Windows machine, open Server Manager . Create the Active Directory Domain Services role. Create a new domain, such as ad.example.com . Add the Identity Management for UNIX service to the Active Directory Domain Services role. Use the Unix NIS domain as the domain name in the configuration. On the Active Directory server, create a group for the Linux users. Open Administrative Tools and select Active Directory Users and Computers . Select the Active Directory domain, ad.example.com . In the Users tab, right-click and select Create a New Group . Name the new group unixusers , and save. Double-click the unixusers group entry, and open the Users tab. Open the Unix Attributes tab. Set the NIS domain to the NIS domain that was configured for ad.example.com and, optionally, set a group ID (GID) number. Configure a user to be part of the Unix group. Open Administrative Tools and select Active Directory Users and Computers . Select the Active Directory domain, ad.example.com . In the Users tab, right-click and select Create a New User . Name the new user aduser , and make sure that the User must change password at logon and Lock account check boxes are not selected. Then save the user. Double-click the aduser user entry, and open the Unix Attributes tab. Make sure that the Unix configuration matches that of the Active Directory domain and the unixusers group: The NIS domain, as created for the Active Directory domain The UID The login shell, to /bin/bash The home directory, to /home/aduser The primary group name, to unixusers Note Password lookups on large directories can take several seconds per request. The initial user lookup is a call to the LDAP server. Unindexed searches are much more resource-intensive, and therefore take longer, than indexed searches because the server checks every entry in the directory for a match. To speed up user lookups, index the attributes that are searched for by SSSD: uid uidNumber gidNumber gecos On the Linux system, configure the SSSD domain. For a complete list of LDAP provider parameters, see the sssd-ldap(5) man pages. Example 13.9. An Active Directory 2008 R2 Domain with Services for Unix Restart SSSD. | [
"[logging] default = FILE:/var/log/krb5libs.log [libdefaults] default_realm = AD.EXAMPLE.COM dns_lookup_realm = true dns_lookup_kdc = true ticket_lifetime = 24h renew_lifetime = 7d rdns = false forwardable = false Define only if DNS lookups are not working AD.EXAMPLE.COM = { kdc = server.ad.example.com admin_server = server.ad.example.com master_kdc = server.ad.example.com } Define only if DNS lookups are not working .ad.example.com = AD.EXAMPLE.COM ad.example.com = AD.EXAMPLE.COM",
"[global] workgroup = EXAMPLE client signing = yes client use spnego = yes kerberos method = secrets and keytab log file = /var/log/samba/%m.log password server = AD.EXAMPLE.COM realm = EXAMPLE.COM security = ads",
"~]# kinit [email protected]",
"~]# net ads join -U Administrator",
"~]# net ads join createupn=\"host/[email protected]\" -U Administrator",
"~]# vim /etc/sssd/sssd.conf",
"[sssd] config_file_version = 2 domains = ad.example.com services = nss, pam [domain/ad.example.com] cache_credentials = true for performance ldap_referrals = false id_provider = ldap auth_provider = krb5 chpass_provider = krb5 access_provider = ldap ldap_schema = rfc2307bis ldap_sasl_mech = GSSAPI ldap_sasl_authid = host/[email protected] #provide the schema for services for unix ldap_schema = rfc2307bis ldap_user_search_base = ou=user accounts,dc=ad,dc=example,dc=com ldap_user_object_class = user ldap_user_home_directory = unixHomeDirectory ldap_user_principal = userPrincipalName optional - set schema mapping parameters are listed in sssd-ldap ldap_user_object_class = user ldap_user_name = sAMAccountName ldap_group_search_base = ou=groups,dc=ad,dc=example,dc=com ldap_group_object_class = group ldap_access_order = expire ldap_account_expire_policy = ad ldap_force_upper_case_realm = true ldap_referrals = false krb5_realm = AD-REALM.EXAMPLE.COM required krb5_canonicalize = false",
"~]# service sssd restart"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-sssd-ad-ldap-proc |
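After restarting SSSD, a quick way to confirm that the Active Directory domain is being served is to look up the example user created above. These verification commands are not part of the documented procedure, and the exact output depends on the SSSD and domain configuration:
getent passwd aduser
id aduser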
Chapter 1. Creating the Red Hat automation hub API token | Chapter 1. Creating the Red Hat automation hub API token Before you can interact with automation hub by uploading or downloading collections, you must create an API token. The automation hub API token authenticates your ansible-galaxy client to the Red Hat automation hub server. You can create an API token by using Token management in automation hub or API Token Management in private automation hub (PAH). 1.1. Creating the Red Hat automation hub API token Before you can interact with automation hub by uploading or downloading collections, you need to create an API token. The automation hub API token authenticates your ansible-galaxy client to the Red Hat automation hub server. You can create an API token using automation hub Token management . Prerequisites Valid subscription credentials for Red Hat Ansible Automation Platform. Procedure Navigate to https://cloud.redhat.com/ansible/automation-hub/token/ . Click Load Token . Click copy icon to copy the API token to the clipboard. Paste the API token into a file and store in a secure location. Important The API token is a secret token used to protect your content. Store your API token in a secure location. The API token is now available for configuring automation hub as your default collections server or uploading collections using the ansible-galaxy command line tool. 1.2. Creating the API token in private automation hub You can create an API token by using API Token Management in private automation hub. Prerequisites Valid subscription credentials for Red Hat Ansible Automation Platform. Procedure Navigate to your PAH. From the sidebar, navigate to Collections API Token Management . Click Load Token . Click the copy icon to copy the API token to the clipboard. Paste the API token into a file and store in a secure location. Important The API token is a secret token used to protect your content. Store your API token in a secure location. The API token is now available for configuring automation hub as your default collections server or uploading collections using the ansible-galaxy command line tool. 1.3. Keeping your offline token active Keeping an offline token active is useful when an application needs to perform action on behalf of the user, even when the user is offline. For example, a routine data backup. Offline tokens expire after 30 days of inactivity. You can keep your offline token from expiring by periodically refreshing your offline token. Note Once your offline token expires, you must request a new one. Run the following command periodically to prevent your token from expiring: curl https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token -d grant_type=refresh_token -d client_id="cloud-services" -d refresh_token="{{ user_token }}" --fail --silent --show-error --output /dev/null | [
"curl https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token -d grant_type=refresh_token -d client_id=\"cloud-services\" -d refresh_token=\"{{ user_token }}\" --fail --silent --show-error --output /dev/null"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/getting_started_with_automation_hub/con-create-api-token |
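For reference, a common way to use the API token as the default collections server is a galaxy server entry in ansible.cfg similar to the sketch below. The server and authentication URLs are illustrative; use the values provided by your automation hub or private automation hub instance:
[galaxy]
server_list = automation_hub

[galaxy_server.automation_hub]
url=https://cloud.redhat.com/api/automation-hub/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
token=<your_api_token>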
Release notes for Red Hat build of OpenJDK 17.0.4 | Release notes for Red Hat build of OpenJDK 17.0.4 Red Hat build of OpenJDK 17 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.4/index |
Chapter 2. Accessing the web console | Chapter 2. Accessing the web console The OpenShift Container Platform web console is a user interface accessible from a web browser. Developers can use the web console to visualize, browse, and manage the contents of projects. 2.1. Prerequisites JavaScript must be enabled to use the web console. For the best experience, use a web browser that supports WebSockets . Review the OpenShift Container Platform 4.x Tested Integrations page before you create the supporting infrastructure for your cluster. 2.2. Understanding and accessing the web console The web console runs as a pod on the master. The static assets required to run the web console are served by the pod. After OpenShift Container Platform is successfully installed using openshift-install create cluster , find the URL for the web console and login credentials for your installed cluster in the CLI output of the installation program. For example: Example output INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided> Use those details to log in and access the web console. For existing clusters that you did not install, you can use oc whoami --show-console to see the web console URL. Additional resources Enabling feature sets using the web console | [
"INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/web_console/web-console |
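For an existing cluster, the commands mentioned above can be used together; the console URL below reuses the illustrative value from the installer output:
oc login -u kubeadmin -p <provided>     # log in with the credentials reported by the installation program
oc whoami --show-console                # prints, for example, https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com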
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_go_1.22_toolset/proc_providing-feedback-on-red-hat-documentation_using-go-toolset |
Chapter 4. LVM Administration with CLI Commands | Chapter 4. LVM Administration with CLI Commands This chapter summarizes the individual administrative tasks you can perform with the LVM Command Line Interface (CLI) commands to create and maintain logical volumes. In addition to the LVM Command Line Interface (CLI), you can use System Storage Manager (SSM) to configure LVM logical volumes. For information on using SSM with LVM, see the Storage Administration Guide . 4.1. Using CLI Commands There are several general features of all LVM CLI commands. When sizes are required in a command line argument, units can always be specified explicitly. If you do not specify a unit, then a default is assumed, usually KB or MB. LVM CLI commands do not accept fractions. When specifying units in a command line argument, LVM is case-insensitive; specifying M or m is equivalent, for example, and powers of 2 (multiples of 1024) are used. However, when specifying the --units argument in a command, lower-case indicates that units are in multiples of 1024 while upper-case indicates that units are in multiples of 1000. Where commands take volume group or logical volume names as arguments, the full path name is optional. A logical volume called lvol0 in a volume group called vg0 can be specified as vg0/lvol0 . Where a list of volume groups is required but is left empty, a list of all volume groups will be substituted. Where a list of logical volumes is required but a volume group is given, a list of all the logical volumes in that volume group will be substituted. For example, the lvdisplay vg0 command will display all the logical volumes in volume group vg0 . All LVM commands accept a -v argument, which can be entered multiple times to increase the output verbosity. For example, the following examples shows the default output of the lvcreate command. The following command shows the output of the lvcreate command with the -v argument. You could also have used the -vv , -vvv or the -vvvv argument to display increasingly more details about the command execution. The -vvvv argument provides the maximum amount of information at this time. The following example shows only the first few lines of output for the lvcreate command with the -vvvv argument specified. You can display help for any of the LVM CLI commands with the --help argument of the command. To display the man page for a command, execute the man command: The man lvm command provides general online information about LVM. All LVM objects are referenced internally by a UUID, which is assigned when you create the object. This can be useful in a situation where you remove a physical volume called /dev/sdf which is part of a volume group and, when you plug it back in, you find that it is now /dev/sdk . LVM will still find the physical volume because it identifies the physical volume by its UUID and not its device name. For information on specifying the UUID of a physical volume when creating a physical volume, see Section 6.3, "Recovering Physical Volume Metadata" . | [
"lvcreate -L 50MB new_vg Rounding up size to full physical extent 52.00 MB Logical volume \"lvol0\" created",
"lvcreate -v -L 50MB new_vg Finding volume group \"new_vg\" Rounding up size to full physical extent 52.00 MB Archiving volume group \"new_vg\" metadata (seqno 4). Creating logical volume lvol0 Creating volume group backup \"/etc/lvm/backup/new_vg\" (seqno 5). Found volume group \"new_vg\" Creating new_vg-lvol0 Loading new_vg-lvol0 table Resuming new_vg-lvol0 (253:2) Clearing start of logical volume \"lvol0\" Creating volume group backup \"/etc/lvm/backup/new_vg\" (seqno 5). Logical volume \"lvol0\" created",
"lvcreate -vvvv -L 50MB new_vg #lvmcmdline.c:913 Processing: lvcreate -vvvv -L 50MB new_vg #lvmcmdline.c:916 O_DIRECT will be used #config/config.c:864 Setting global/locking_type to 1 #locking/locking.c:138 File-based locking selected. #config/config.c:841 Setting global/locking_dir to /var/lock/lvm #activate/activate.c:358 Getting target version for linear #ioctl/libdm-iface.c:1569 dm version OF [16384] #ioctl/libdm-iface.c:1569 dm versions OF [16384] #activate/activate.c:358 Getting target version for striped #ioctl/libdm-iface.c:1569 dm versions OF [16384] #config/config.c:864 Setting activation/mirror_region_size to 512",
"commandname --help",
"man commandname"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/LVM_CLI |
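A few short examples of the conventions described above, using the same illustrative vg0 and lvol0 names from the text:
lvcreate -L 500M -n lvol0 vg0    # 500m is equivalent; size units on the command line are case-insensitive
lvdisplay vg0/lvol0              # the optional full path name of a logical volume
vgs --units m vg0                # lower-case --units reports sizes in multiples of 1024
vgs --units M vg0                # upper-case --units reports sizes in multiples of 1000
lvcreate --help                  # display help for any LVM CLI command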
Chapter 6. Customizing the web console in OpenShift Container Platform | Chapter 6. Customizing the web console in OpenShift Container Platform You can customize the OpenShift Container Platform web console to set a custom logo, product name, links, notifications, and command line downloads. This is especially helpful if you need to tailor the web console to meet specific corporate or government requirements. 6.1. Adding a custom logo and product name You can create custom branding by adding a custom logo or custom product name. You can set both or one without the other, as these settings are independent of each other. Prerequisites You must have administrator privileges. Create a file of the logo that you want to use. The logo can be a file in any common image format, including GIF, JPG, PNG, or SVG, and is constrained to a max-height of 60px . Procedure Import your logo file into a config map in the openshift-config namespace: USD oc create configmap console-custom-logo --from-file /path/to/console-custom-logo.png -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: console-custom-logo namespace: openshift-config binaryData: console-custom-logo.png: <base64-encoded_logo> ... 1 1 Provide a valid base64-encoded logo. Edit the web console's Operator configuration to include customLogoFile and customProductName : USD oc edit consoles.operator.openshift.io cluster apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: customLogoFile: key: console-custom-logo.png name: console-custom-logo customProductName: My Console Once the Operator configuration is updated, it will sync the custom logo config map into the console namespace, mount it to the console pod, and redeploy. Check for success. If there are any issues, the console cluster Operator will report a Degraded status, and the console Operator configuration will also report a CustomLogoDegraded status, but with reasons like KeyOrFilenameInvalid or NoImageProvided . To check the clusteroperator , run: USD oc get clusteroperator console -o yaml To check the console Operator configuration, run: USD oc get consoles.operator.openshift.io -o yaml 6.2. Creating custom links in the web console Prerequisites You must have administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleLink . Select Instances tab Click Create Console Link and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: example spec: href: 'https://www.example.com' location: HelpMenu 1 text: Link 1 1 Valid location settings are HelpMenu , UserMenu , ApplicationMenu , and NamespaceDashboard . 
To make the custom link appear in all namespaces, follow this example: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-link-for-all-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard text: This appears in all namespaces To make the custom link appear in only some namespaces, follow this example: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-for-some-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard # This text will appear in a box called "Launcher" under "namespace" or "project" in the web console text: Custom Link Text namespaceDashboard: namespaces: # for these specific namespaces - my-namespace - your-namespace - other-namespace To make the custom link appear in the application menu, follow this example: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: application-menu-link-1 spec: href: 'https://www.example.com' location: ApplicationMenu text: Link 1 applicationMenu: section: My New Section # image that is 24x24 in size imageURL: https://via.placeholder.com/24 Click Save to apply your changes. 6.3. Customizing console routes For console and downloads routes, custom routes functionality uses the ingress config route configuration API. If the console custom route is set up in both the ingress config and console-operator config, then the new ingress config custom route configuration takes precedent. The route configuration with the console-operator config is deprecated. 6.3.1. Customizing the console route You can customize the console route by setting the custom hostname and TLS certificate in the spec.componentRoutes field of the cluster Ingress configuration. Prerequisites You have logged in to the cluster as a user with administrative privileges. You have created a secret in the openshift-config namespace containing the TLS certificate and key. This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Tip You can create a TLS secret by using the oc create secret tls command. Procedure Edit the cluster Ingress configuration: USD oc edit ingress.config.openshift.io cluster Set the custom hostname and optionally the serving certificate and key: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: console namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2 1 The custom hostname. 2 Reference to a secret in the openshift-config namespace that contains a TLS certificate ( tls.crt ) and key ( tls.key ). This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Save the file to apply the changes. 6.3.2. Customizing the download route You can customize the download route by setting the custom hostname and TLS certificate in the spec.componentRoutes field of the cluster Ingress configuration. Prerequisites You have logged in to the cluster as a user with administrative privileges. You have created a secret in the openshift-config namespace containing the TLS certificate and key. This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Tip You can create a TLS secret by using the oc create secret tls command. 
Procedure Edit the cluster Ingress configuration: USD oc edit ingress.config.openshift.io cluster Set the custom hostname and optionally the serving certificate and key: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: downloads namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2 1 The custom hostname. 2 Reference to a secret in the openshift-config namespace that contains a TLS certificate ( tls.crt ) and key ( tls.key ). This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Save the file to apply the changes. 6.4. Customizing the login page Create Terms of Service information with custom login pages. Custom login pages can also be helpful if you use a third-party login provider, such as GitHub or Google, to show users a branded page that they trust and expect before being redirected to the authentication provider. You can also render custom error pages during the authentication process. Note Customizing the error template is limited to identity providers (IDPs) that use redirects, such as request header and OIDC-based IDPs. It does not have an effect on IDPs that use direct password authentication, such as LDAP and htpasswd. Prerequisites You must have administrator privileges. Procedure Run the following commands to create templates you can modify: USD oc adm create-login-template > login.html USD oc adm create-provider-selection-template > providers.html USD oc adm create-error-template > errors.html Create the secrets: USD oc create secret generic login-template --from-file=login.html -n openshift-config USD oc create secret generic providers-template --from-file=providers.html -n openshift-config USD oc create secret generic error-template --from-file=errors.html -n openshift-config Run: USD oc edit oauths cluster Update the specification: spec: templates: error: name: error-template login: name: login-template providerSelection: name: providers-template Run oc explain oauths.spec.templates to understand the options. 6.5. Defining a template for an external log link If you are connected to a service that helps you browse your logs, but you need to generate URLs in a particular way, then you can define a template for your link. Prerequisites You must have administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleExternalLogLink . Select Instances tab Click Create Console External Log Link and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleExternalLogLink metadata: name: example spec: hrefTemplate: >- https://example.com/logs?resourceName=USD{resourceName}&containerName=USD{containerName}&resourceNamespace=USD{resourceNamespace}&podLabels=USD{podLabels} text: Example Logs 6.6. Creating custom notification banners Prerequisites You must have administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleNotification . Select Instances tab Click Create Console Notification and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleNotification metadata: name: example spec: text: This is an example notification message with an optional link. location: BannerTop 1 link: href: 'https://www.example.com' text: Optional link text color: '#fff' backgroundColor: '#0088ce' 1 Valid location settings are BannerTop , BannerBottom , and BannerTopBottom . Click Create to apply your changes. 
6.7. Customizing CLI downloads You can configure links for downloading the CLI with custom link text and URLs, which can point directly to file packages or to an external page that provides the packages. Prerequisites You must have administrator privileges. Procedure Navigate to Administration Custom Resource Definitions . Select ConsoleCLIDownload from the list of Custom Resource Definitions (CRDs). Click the YAML tab, and then make your edits: apiVersion: console.openshift.io/v1 kind: ConsoleCLIDownload metadata: name: example-cli-download-links-for-foo spec: description: | This is an example of download links for foo displayName: example-foo links: - href: 'https://www.example.com/public/foo.tar' text: foo for linux - href: 'https://www.example.com/public/foo.mac.zip' text: foo for mac - href: 'https://www.example.com/public/foo.win.zip' text: foo for windows Click the Save button. 6.8. Adding YAML examples to Kubernetes resources You can dynamically add YAML examples to any Kubernetes resources at any time. Prerequisites You must have cluster administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleYAMLSample . Click YAML and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleYAMLSample metadata: name: example spec: targetResource: apiVersion: batch/v1 kind: Job title: Example Job description: An example Job YAML sample yaml: | apiVersion: batch/v1 kind: Job metadata: name: countdown spec: template: metadata: name: countdown spec: containers: - name: counter image: centos:7 command: - "bin/bash" - "-c" - "for i in 9 8 7 6 5 4 3 2 1 ; do echo USDi ; done" restartPolicy: Never Use spec.snippet to indicate that the YAML sample is not the full YAML resource definition, but a fragment that can be inserted into the existing YAML document at the user's cursor. Click Save . | [
"oc create configmap console-custom-logo --from-file /path/to/console-custom-logo.png -n openshift-config",
"apiVersion: v1 kind: ConfigMap metadata: name: console-custom-logo namespace: openshift-config binaryData: console-custom-logo.png: <base64-encoded_logo> ... 1",
"oc edit consoles.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: customLogoFile: key: console-custom-logo.png name: console-custom-logo customProductName: My Console",
"oc get clusteroperator console -o yaml",
"oc get consoles.operator.openshift.io -o yaml",
"apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: example spec: href: 'https://www.example.com' location: HelpMenu 1 text: Link 1",
"apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-link-for-all-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard text: This appears in all namespaces",
"apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-for-some-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard # This text will appear in a box called \"Launcher\" under \"namespace\" or \"project\" in the web console text: Custom Link Text namespaceDashboard: namespaces: # for these specific namespaces - my-namespace - your-namespace - other-namespace",
"apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: application-menu-link-1 spec: href: 'https://www.example.com' location: ApplicationMenu text: Link 1 applicationMenu: section: My New Section # image that is 24x24 in size imageURL: https://via.placeholder.com/24",
"oc edit ingress.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: console namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2",
"oc edit ingress.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: downloads namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2",
"oc adm create-login-template > login.html",
"oc adm create-provider-selection-template > providers.html",
"oc adm create-error-template > errors.html",
"oc create secret generic login-template --from-file=login.html -n openshift-config",
"oc create secret generic providers-template --from-file=providers.html -n openshift-config",
"oc create secret generic error-template --from-file=errors.html -n openshift-config",
"oc edit oauths cluster",
"spec: templates: error: name: error-template login: name: login-template providerSelection: name: providers-template",
"apiVersion: console.openshift.io/v1 kind: ConsoleExternalLogLink metadata: name: example spec: hrefTemplate: >- https://example.com/logs?resourceName=USD{resourceName}&containerName=USD{containerName}&resourceNamespace=USD{resourceNamespace}&podLabels=USD{podLabels} text: Example Logs",
"apiVersion: console.openshift.io/v1 kind: ConsoleNotification metadata: name: example spec: text: This is an example notification message with an optional link. location: BannerTop 1 link: href: 'https://www.example.com' text: Optional link text color: '#fff' backgroundColor: '#0088ce'",
"apiVersion: console.openshift.io/v1 kind: ConsoleCLIDownload metadata: name: example-cli-download-links-for-foo spec: description: | This is an example of download links for foo displayName: example-foo links: - href: 'https://www.example.com/public/foo.tar' text: foo for linux - href: 'https://www.example.com/public/foo.mac.zip' text: foo for mac - href: 'https://www.example.com/public/foo.win.zip' text: foo for windows",
"apiVersion: console.openshift.io/v1 kind: ConsoleYAMLSample metadata: name: example spec: targetResource: apiVersion: batch/v1 kind: Job title: Example Job description: An example Job YAML sample yaml: | apiVersion: batch/v1 kind: Job metadata: name: countdown spec: template: metadata: name: countdown spec: containers: - name: counter image: centos:7 command: - \"bin/bash\" - \"-c\" - \"for i in 9 8 7 6 5 4 3 2 1 ; do echo USDi ; done\" restartPolicy: Never"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/web_console/customizing-web-console |
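The custom resources shown above can also be created from saved YAML files rather than through the console forms; a brief sketch (file names are placeholders):
oc apply -f console-link.yaml
oc get consolelink
oc get clusteroperator console    # confirm the console cluster Operator does not report a Degraded status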
Specialized hardware and driver enablement | Specialized hardware and driver enablement OpenShift Container Platform 4.10 Learn about hardware enablement on OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc adm release info 4.10.0 --image-for=driver-toolkit",
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fd84aee79606178b6561ac71f8540f404d518ae5deff45f6d6ac8f02636c7f4",
"podman pull --authfile=path/to/pullsecret.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:<SHA>",
"oc new-project simple-kmod-demo",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: labels: app: simple-kmod-driver-container name: simple-kmod-driver-container namespace: simple-kmod-demo spec: {} --- apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: simple-kmod-driver-build name: simple-kmod-driver-build namespace: simple-kmod-demo spec: nodeSelector: node-role.kubernetes.io/worker: \"\" runPolicy: \"Serial\" triggers: - type: \"ConfigChange\" - type: \"ImageChange\" source: git: ref: \"master\" uri: \"https://github.com/openshift-psap/kvc-simple-kmod.git\" type: Git dockerfile: | FROM DRIVER_TOOLKIT_IMAGE WORKDIR /build/ # Expecting kmod software version as an input to the build ARG KMODVER # Grab the software from upstream RUN git clone https://github.com/openshift-psap/simple-kmod.git WORKDIR simple-kmod # Build and install the module RUN make all KVER=USD(rpm -q --qf \"%{VERSION}-%{RELEASE}.%{ARCH}\" kernel-core) KMODVER=USD{KMODVER} && make install KVER=USD(rpm -q --qf \"%{VERSION}-%{RELEASE}.%{ARCH}\" kernel-core) KMODVER=USD{KMODVER} # Add the helper tools WORKDIR /root/kvc-simple-kmod ADD Makefile . ADD simple-kmod-lib.sh . ADD simple-kmod-wrapper.sh . ADD simple-kmod.conf . RUN mkdir -p /usr/lib/kvc/ && mkdir -p /etc/kvc/ && make install RUN systemctl enable kmods-via-containers@simple-kmod strategy: dockerStrategy: buildArgs: - name: KMODVER value: DEMO output: to: kind: ImageStreamTag name: simple-kmod-driver-container:demo",
"OCP_VERSION=USD(oc get clusterversion/version -ojsonpath={.status.desired.version})",
"DRIVER_TOOLKIT_IMAGE=USD(oc adm release info USDOCP_VERSION --image-for=driver-toolkit)",
"sed \"s#DRIVER_TOOLKIT_IMAGE#USD{DRIVER_TOOLKIT_IMAGE}#\" 0000-buildconfig.yaml.template > 0000-buildconfig.yaml",
"oc create -f 0000-buildconfig.yaml",
"apiVersion: v1 kind: ServiceAccount metadata: name: simple-kmod-driver-container --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: simple-kmod-driver-container rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: simple-kmod-driver-container roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: simple-kmod-driver-container subjects: - kind: ServiceAccount name: simple-kmod-driver-container userNames: - system:serviceaccount:simple-kmod-demo:simple-kmod-driver-container --- apiVersion: apps/v1 kind: DaemonSet metadata: name: simple-kmod-driver-container spec: selector: matchLabels: app: simple-kmod-driver-container template: metadata: labels: app: simple-kmod-driver-container spec: serviceAccount: simple-kmod-driver-container serviceAccountName: simple-kmod-driver-container containers: - image: image-registry.openshift-image-registry.svc:5000/simple-kmod-demo/simple-kmod-driver-container:demo name: simple-kmod-driver-container imagePullPolicy: Always command: [\"/sbin/init\"] lifecycle: preStop: exec: command: [\"/bin/sh\", \"-c\", \"systemctl stop kmods-via-containers@simple-kmod\"] securityContext: privileged: true nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc create -f 1000-drivercontainer.yaml",
"oc get pod -n simple-kmod-demo",
"NAME READY STATUS RESTARTS AGE simple-kmod-driver-build-1-build 0/1 Completed 0 6m simple-kmod-driver-container-b22fd 1/1 Running 0 40s simple-kmod-driver-container-jz9vn 1/1 Running 0 40s simple-kmod-driver-container-p45cc 1/1 Running 0 40s",
"oc exec -it pod/simple-kmod-driver-container-p45cc -- lsmod | grep simple",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-special-resource-operator namespace: openshift-operators spec: channel: \"stable\" installPlanApproval: Automatic name: openshift-special-resource-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f sro-sub.yaml",
"oc project openshift-operators",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f4c5f5778-4lvvk 2/2 Running 0 89s special-resource-controller-manager-6dbf7d4f6f-9kl8h 2/2 Running 0 81s",
"mkdir -p chart/simple-kmod-0.0.1/templates",
"cd chart/simple-kmod-0.0.1/templates",
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: labels: app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} 1 name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} 2 spec: {} --- apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: labels: app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}} 3 name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}} 4 annotations: specialresource.openshift.io/wait: \"true\" specialresource.openshift.io/driver-container-vendor: simple-kmod specialresource.openshift.io/kernel-affine: \"true\" spec: nodeSelector: node-role.kubernetes.io/worker: \"\" runPolicy: \"Serial\" triggers: - type: \"ConfigChange\" - type: \"ImageChange\" source: git: ref: {{.Values.specialresource.spec.driverContainer.source.git.ref}} uri: {{.Values.specialresource.spec.driverContainer.source.git.uri}} type: Git strategy: dockerStrategy: dockerfilePath: Dockerfile.SRO buildArgs: - name: \"IMAGE\" value: {{ .Values.driverToolkitImage }} {{- range USDarg := .Values.buildArgs }} - name: {{ USDarg.name }} value: {{ USDarg.value }} {{- end }} - name: KVER value: {{ .Values.kernelFullVersion }} output: to: kind: ImageStreamTag name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}} 5",
"apiVersion: v1 kind: ServiceAccount metadata: name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints verbs: - use resourceNames: - privileged --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} subjects: - kind: ServiceAccount name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} namespace: {{.Values.specialresource.spec.namespace}} --- apiVersion: apps/v1 kind: DaemonSet metadata: labels: app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} annotations: specialresource.openshift.io/wait: \"true\" specialresource.openshift.io/state: \"driver-container\" specialresource.openshift.io/driver-container-vendor: simple-kmod specialresource.openshift.io/kernel-affine: \"true\" specialresource.openshift.io/from-configmap: \"true\" spec: updateStrategy: type: OnDelete selector: matchLabels: app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} template: metadata: labels: app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} spec: priorityClassName: system-node-critical serviceAccount: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} serviceAccountName: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} containers: - image: image-registry.openshift-image-registry.svc:5000/{{.Values.specialresource.spec.namespace}}/{{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}} name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} imagePullPolicy: Always command: [\"/sbin/init\"] lifecycle: preStop: exec: command: [\"/bin/sh\", \"-c\", \"systemctl stop kmods-via-containers@{{.Values.specialresource.metadata.name}}\"] securityContext: privileged: true nodeSelector: node-role.kubernetes.io/worker: \"\" feature.node.kubernetes.io/kernel-version.full: \"{{.Values.KernelFullVersion}}\"",
"cd ..",
"apiVersion: v2 name: simple-kmod description: Simple kmod will deploy a simple kmod driver-container icon: https://avatars.githubusercontent.com/u/55542927 type: application version: 0.0.1 appVersion: 1.0.0",
"helm package simple-kmod-0.0.1/",
"Successfully packaged chart and saved it to: /data/<username>/git/<github_username>/special-resource-operator/yaml-for-docs/chart/simple-kmod-0.0.1/simple-kmod-0.0.1.tgz",
"mkdir cm",
"cp simple-kmod-0.0.1.tgz cm/simple-kmod-0.0.1.tgz",
"helm repo index cm --url=cm://simple-kmod/simple-kmod-chart",
"oc create namespace simple-kmod",
"oc create cm simple-kmod-chart --from-file=cm/index.yaml --from-file=cm/simple-kmod-0.0.1.tgz -n simple-kmod",
"apiVersion: sro.openshift.io/v1beta1 kind: SpecialResource metadata: name: simple-kmod spec: #debug: true 1 namespace: simple-kmod chart: name: simple-kmod version: 0.0.1 repository: name: example url: cm://simple-kmod/simple-kmod-chart 2 set: kind: Values apiVersion: sro.openshift.io/v1beta1 kmodNames: [\"simple-kmod\", \"simple-procfs-kmod\"] buildArgs: - name: \"KMODVER\" value: \"SRO\" driverContainer: source: git: ref: \"master\" uri: \"https://github.com/openshift-psap/kvc-simple-kmod.git\"",
"oc create -f simple-kmod-configmap.yaml",
"oc get pods -n simple-kmod",
"NAME READY STATUS RESTARTS AGE simple-kmod-driver-build-12813789169ac0ee-1-build 0/1 Completed 0 7m12s simple-kmod-driver-container-12813789169ac0ee-mjsnh 1/1 Running 0 8m2s simple-kmod-driver-container-12813789169ac0ee-qtkff 1/1 Running 0 8m2s",
"oc logs pod/simple-kmod-driver-build-12813789169ac0ee-1-build -n simple-kmod",
"oc exec -n simple-kmod -it pod/simple-kmod-driver-container-12813789169ac0ee-mjsnh -- lsmod | grep simple",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"apiVersion: v1 kind: Namespace metadata: name: openshift-nfd",
"oc create -f nfd-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd",
"oc create -f nfd-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: \"stable\" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f nfd-sub.yaml",
"oc project openshift-nfd",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m",
"apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: \"\" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery:v4.10 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]",
"oc create -f NodeFeatureDiscovery.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s",
"core: sleepInterval: 60s 1",
"core: sources: - system - custom",
"core: labelWhiteList: '^cpu-cpuid'",
"core: noPublish: true 1",
"sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT]",
"sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL]",
"sources: kernel: kconfigFile: \"/path/to/kconfig\"",
"sources: kernel: configOpts: [NO_HZ, X86, DMI]",
"sources: pci: deviceClassWhitelist: [\"0200\", \"03\"]",
"sources: pci: deviceLabelFields: [class, vendor, device]",
"sources: usb: deviceClassWhitelist: [\"ef\", \"ff\"]",
"sources: pci: deviceLabelFields: [class, vendor]",
"source: custom: - name: \"my.custom.feature\" matchOn: - loadedKMod: [\"e1000e\"] - pciId: class: [\"0200\"] vendor: [\"8086\"]",
"apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: [\"SingleNUMANodeContainerLevel\"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3",
"podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help",
"nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key",
"nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt",
"nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml",
"nfd-topology-updater -no-publish",
"nfd-topology-updater -oneshot -no-publish",
"nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock",
"nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443",
"nfd-topology-updater -server-name-override=localhost",
"nfd-topology-updater -sleep-interval=1h",
"nfd-topology-updater -watch-namespace=rte"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/specialized_hardware_and_driver_enablement/index |
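Once the NodeFeatureDiscovery instance above is running, one way to spot-check that feature labels are being published is to inspect a worker node's labels; the node name is a placeholder, and feature.node.kubernetes.io/ is the conventional NFD label prefix:
oc describe node <node_name> | grep feature.node.kubernetes.io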
Chapter 3. Overview of Bare Metal certification | Chapter 3. Overview of Bare Metal certification The bare-metal certification overview provides details about product publication in the catalog, product release, certification duration, and recertification. 3.1. Publication on the catalog When you certify your server for bare metal on Red Hat OpenShift Container Platform, the following features might appear as certified component of your server depending on the certification tests the server passed: Installer Provisioned Infrastructure Assisted Installer Service Names may differ corresponding to the language of the products. 3.2. Red Hat product releases You have access to and are encouraged to test with pre-released Red Hat software. You can begin your engagement with the Red Hat Certification team before Red Hat software is generally available (GA) to customers to expedite the certification process for your product. However, conduct official certification testing only on the GA releases of Red Hat OpenShift Container Platform bare-metal hardware. 3.3. Certification duration Certifications are valid starting with the specific major and minor releases of Red Hat OpenShift Container Platform software as tested and listed on the Red Hat Ecosystem Catalog. They continue to be valid through the last minor release of the major release. This allows customers to count on certifications from the moment they are listed until the end of the product's lifecycle. 3.4. Recertification workflow You do not need to recertify after a new major or minor release of RHOCP if you have not made changes to your product. However, it is your responsibility to certify your product again any time you make significant changes to it. Red Hat recommends that you run the certification tests on your product periodically to ensure its quality, functionality, and performance with the supported versions of RHOCP. To recertify your product, open a supplemental certification. | null | https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_openshift_container_platform_hardware_bare_metal_certification_policy_guide/assembly-overview-of-the-bare-metal-certification-life-cycle_rhosp-bm-pol-prerequisites |
Replacing devices | Replacing devices Red Hat OpenShift Data Foundation 4.18 Instructions for safely replacing operational or failed devices Red Hat Storage Documentation Team Abstract This document explains how to safely replace storage devices for Red Hat OpenShift Data Foundation. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/replacing_devices/index |
Chapter 6. ironic-inspector | Chapter 6. ironic-inspector The following chapter contains information about the configuration options in the ironic-inspector service. 6.1. inspector.conf This section contains options for the /etc/ironic-inspector/inspector.conf file. 6.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/ironic-inspector/inspector.conf file. . Configuration option = Default value Type Description api_max_limit = 1000 integer value Limit the number of elements an API list-call returns auth_strategy = keystone string value Authentication method used on the ironic-inspector API. "noauth", "keystone" or "http_basic" are valid options. "noauth" will disable all authentication. can_manage_boot = True boolean value Whether the current installation of ironic-inspector can manage PXE booting of nodes. If set to False, the API will reject introspection requests with manage_boot missing or set to True. clean_up_period = 60 integer value Amount of time in seconds, after which repeat clean up of timed out nodes and old nodes status information. WARNING: If set to a value of 0, then the periodic task is disabled and inspector will not sync with ironic to complete the internal clean-up process. Not advisable if the deployment uses a PXE filter, and will result in the ironic-inspector ceasing periodic cleanup activities. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['sqlalchemy=WARNING', 'iso8601=WARNING', 'requests=WARNING', 'urllib3.connectionpool=WARNING', 'keystonemiddleware=WARNING', 'keystoneauth=WARNING', 'ironicclient=WARNING', 'amqp=WARNING', 'amqplib=WARNING', 'oslo.messaging=WARNING', 'oslo_messaging=WARNING'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. enable_mdns = False boolean value Whether to enable publishing the ironic-inspector API endpoint via multicast DNS. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. host = <based on operating system> string value Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. However, the node name must be valid within an AMQP key, and if using ZeroMQ, a valid hostname, FQDN, or IP address. http_basic_auth_user_file = /etc/ironic-inspector/htpasswd string value Path to Apache format user authentication file used when auth_strategy=http_basic `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. introspection_delay = 5 integer value Delay (in seconds) between two introspections. Only applies when boot is managed by ironic-inspector (i.e. manage_boot==True). ipmi_address_fields = ['redfish_address', 'ilo_address', 'drac_host', 'drac_address', 'ibmc_address'] list value Ironic driver_info fields that are equivalent to ipmi_address. leader_election_interval = 10 integer value Interval (in seconds) between leader elections. listen_address = :: string value IP to listen on. listen_port = 5050 port value Port to listen on. listen_unix_socket = None string value Unix socket to listen on. Disables listen_address and listen_port. listen_unix_socket_mode = None integer value File mode (an octal number) of the unix socket to listen on. 
Ignored if listen_unix_socket is not set. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_concurrency = 1000 integer value The green thread pool size. max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. 
rootwrap_config = /etc/ironic-inspector/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root standalone = True boolean value Whether to run ironic-inspector as a standalone service. It's EXPERIMENTAL to set to False. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. timeout = 3600 integer value Timeout after which introspection is considered failed, set to 0 to disable. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_ssl = False boolean value SSL Enabled/Disabled use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 6.1.2. capabilities The following table outlines the options available under the [capabilities] group in the /etc/ironic-inspector/inspector.conf file. Table 6.1. capabilities Configuration option = Default value Type Description boot_mode = False boolean value Whether to store the boot mode (BIOS or UEFI). cpu_flags = {'aes': 'cpu_aes', 'pdpe1gb': 'cpu_hugepages_1g', 'pse': 'cpu_hugepages', 'smx': 'cpu_txt', 'svm': 'cpu_vt', 'vmx': 'cpu_vt'} dict value Mapping between a CPU flag and a capability to set if this flag is present. 6.1.3. coordination The following table outlines the options available under the [coordination] group in the /etc/ironic-inspector/inspector.conf file. Table 6.2. coordination Configuration option = Default value Type Description backend_url = memcached://localhost:11211 string value The backend URL to use for distributed coordination. EXPERIMENTAL. 6.1.4. cors The following table outlines the options available under the [cors] group in the /etc/ironic-inspector/inspector.conf file. Table 6.3. cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['X-Auth-Token', 'X-OpenStack-Ironic-Inspector-API-Minimum-Version', 'X-OpenStack-Ironic-Inspector-API-Maximum-Version', 'X-OpenStack-Ironic-Inspector-API-Version'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'POST', 'PUT', 'HEAD', 'PATCH', 'DELETE', 'OPTIONS'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. 
Example: https://horizon.example.com expose_headers = [] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 6.1.5. database The following table outlines the options available under the [database] group in the /etc/ironic-inspector/inspector.conf file. Table 6.4. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1¶m2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). Deprecated since: 12.1.0 *Reason:*Support for the MySQL NDB Cluster storage engine has been deprecated and will be removed in a future release. mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= mysql_wsrep_sync_wait = None integer value For Galera only, configure wsrep_sync_wait causality checks on new connections. Default is None, meaning don't configure any setting. pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. 6.1.6. discovery The following table outlines the options available under the [discovery] group in the /etc/ironic-inspector/inspector.conf file. Table 6.5. 
discovery Configuration option = Default value Type Description enabled_bmc_address_version = ['4', '6'] list value IP version of BMC address that will be used when enrolling a new node in Ironic. Defaults to "4,6". Could be "4" (use v4 address only), "4,6" (v4 address have higher priority and if both addresses found v6 version is ignored), "6,4" (v6 is desired but fall back to v4 address for BMCs having v4 address, opposite to "4,6"), "6" (use v6 address only and ignore v4 version). enroll_node_driver = fake-hardware string value The name of the Ironic driver used by the enroll hook when creating a new node in Ironic. enroll_node_fields = {} dict value Additional fields to set on newly discovered nodes. 6.1.7. dnsmasq_pxe_filter The following table outlines the options available under the [dnsmasq_pxe_filter] group in the /etc/ironic-inspector/inspector.conf file. Table 6.6. dnsmasq_pxe_filter Configuration option = Default value Type Description dhcp_hostsdir = /var/lib/ironic-inspector/dhcp-hostsdir string value The MAC address cache directory, exposed to dnsmasq.This directory is expected to be in exclusive control of the driver. `dnsmasq_start_command = ` string value A (shell) command line to start the dnsmasq service upon filter initialization. Default: don't start. `dnsmasq_stop_command = ` string value A (shell) command line to stop the dnsmasq service upon inspector (error) exit. Default: don't stop. purge_dhcp_hostsdir = True boolean value Purge the hostsdir upon driver initialization. Setting to false should only be performed when the deployment of inspector is such that there are multiple processes executing inside of the same host and namespace. In this case, the Operator is responsible for setting up a custom cleaning facility. 6.1.8. extra_hardware The following table outlines the options available under the [extra_hardware] group in the /etc/ironic-inspector/inspector.conf file. Table 6.7. extra_hardware Configuration option = Default value Type Description strict = False boolean value If True, refuse to parse extra data if at least one record is too short. Additionally, remove the incoming "data" even if parsing failed. 6.1.9. healthcheck The following table outlines the options available under the [healthcheck] group in the /etc/ironic-inspector/inspector.conf file. Table 6.8. healthcheck Configuration option = Default value Type Description enabled = False boolean value Enable the health check endpoint at /healthcheck. Note that this is unauthenticated. More information is available at https://docs.openstack.org/oslo.middleware/latest/reference/healthcheck_plugins.html . 6.1.10. iptables The following table outlines the options available under the [iptables] group in the /etc/ironic-inspector/inspector.conf file. Table 6.9. iptables Configuration option = Default value Type Description dnsmasq_interface = br-ctlplane string value Interface on which dnsmasq listens, the default is for VM's. ethoib_interfaces = [] list value List of Ethernet Over InfiniBand interfaces on the Inspector host which are used for physical access to the DHCP network. Multiple interfaces would be attached to a bond or bridge specified in dnsmasq_interface. The MACs of the InfiniBand nodes which are not in desired state are going to be blocked based on the list of neighbor MACs on these interfaces. firewall_chain = ironic-inspector string value iptables chain name to use. ip_version = 4 string value The IP version that will be used for iptables filter. Defaults to 4. 6.1.11. 
ironic The following table outlines the options available under the [ironic] group in the /etc/ironic-inspector/inspector.conf file. Table 6.10. ironic Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. max_retries = 30 integer value Maximum number of retries in case of conflict error (HTTP 409). min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. retry_interval = 2 integer value Interval between retries in case of conflict error (HTTP 409). service-name = None string value The default service_name for endpoint URL discovery. service-type = baremetal string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. 
system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 6.1.12. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/ironic-inspector/inspector.conf file. Table 6.11. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens *Reason:*The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. 
If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = internal string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" (default) or "admin". keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = True boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. 
If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 6.1.13. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/ironic-inspector/inspector.conf file. Table 6.12. oslo_policy Configuration option = Default value Type Description enforce_new_defaults = False boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together. If False , the deprecated policy check string is logically OR'd with the new policy check string, allowing for a graceful upgrade experience between releases with new policies, which is the default behavior. enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 6.1.14. pci_devices The following table outlines the options available under the [pci_devices] group in the /etc/ironic-inspector/inspector.conf file. Table 6.13. pci_devices Configuration option = Default value Type Description alias = [] multi valued An alias for PCI device identified by vendor_id and product_id fields. Format: {"vendor_id": "1234", "product_id": "5678", "name": "pci_dev1"} 6.1.15. port_physnet The following table outlines the options available under the [port_physnet] group in the /etc/ironic-inspector/inspector.conf file. Table 6.14. port_physnet Configuration option = Default value Type Description cidr_map = [] list value Mapping of IP subnet CIDR to physical network. 
When the physnet_cidr_map processing hook is enabled the physical_network property of baremetal ports is populated based on this mapping. 6.1.16. processing The following table outlines the options available under the [processing] group in the /etc/ironic-inspector/inspector.conf file. Table 6.15. processing Configuration option = Default value Type Description add_ports = pxe string value Which MAC addresses to add as ports during introspection. Possible values: all (all MAC addresses), active (MAC addresses of NIC with IP addresses), pxe (only MAC address of NIC node PXE booted from, falls back to "active" if PXE MAC is not supplied by the ramdisk). always_store_ramdisk_logs = False boolean value Whether to store ramdisk logs even if it did not return an error message (dependent upon "ramdisk_logs_dir" option being set). default_processing_hooks = ramdisk_error,root_disk_selection,scheduler,validate_interfaces,capabilities,pci_devices string value Comma-separated list of default hooks for processing pipeline. Hook scheduler updates the node with the minimum properties required by the Nova scheduler. Hook validate_interfaces ensures that valid NIC data was provided by the ramdisk. Do not exclude these two unless you really know what you're doing. disk_partitioning_spacing = True boolean value Whether to leave 1 GiB of disk size untouched for partitioning. Only has effect when used with the IPA as a ramdisk, for older ramdisk local_gb is calculated on the ramdisk side. keep_ports = all string value Which ports (already present on a node) to keep after introspection. Possible values: all (do not delete anything), present (keep ports which MACs were present in introspection data), added (keep only MACs that we added during introspection). node_not_found_hook = None string value The name of the hook to run when inspector receives inspection information from a node it isn't already aware of. This hook is ignored by default. overwrite_existing = True boolean value Whether to overwrite existing values in node database. Disable this option to make introspection a non-destructive operation. permit_active_introspection = False boolean value Whether to process nodes that are in running states. power_off = True boolean value Whether to power off a node after introspection. Nodes in active or rescue states which submit introspection data will be left on if the feature is enabled via the permit_active_introspection configuration option. processing_hooks = $default_processing_hooks string value Comma-separated list of enabled hooks for processing pipeline. The default for this is $default_processing_hooks, hooks can be added before or after the defaults like this: "prehook,$default_processing_hooks,posthook". ramdisk_logs_dir = None string value If set, logs from ramdisk will be stored in this directory. ramdisk_logs_filename_format = {uuid}_{dt:%Y%m%d-%H%M%S.%f}.tar.gz string value File name template for storing ramdisk logs. The following replacements can be used: {uuid} - node UUID or "unknown", {bmc} - node BMC address or "unknown", {dt} - current UTC date and time, {mac} - PXE booting MAC or "unknown". store_data = none string value The storage backend for storing introspection data. Possible values are: none , database and swift . If set to none , introspection data will not be stored. update_pxe_enabled = True boolean value Whether to update the pxe_enabled value according to the introspection data. This option has no effect if [processing]overwrite_existing is set to False 6.1.17. 
pxe_filter The following table outlines the options available under the [pxe_filter] group in the /etc/ironic-inspector/inspector.conf file. Table 6.16. pxe_filter Configuration option = Default value Type Description deny_unknown_macs = False boolean value By default inspector will open the DHCP server for any node when introspection is active. Opening DHCP for unknown MAC addresses when introspection is active allow for users to add nodes with no ports to ironic and have ironic-inspector enroll ports based on node introspection results. NOTE: If this option is True, nodes must have at least one enrolled port prior to introspection. driver = iptables string value PXE boot filter driver to use, possible filters are: "iptables", "dnsmasq" and "noop". Set "noop " to disable the firewall filtering. sync_period = 15 integer value Amount of time in seconds, after which repeat periodic update of the filter. 6.1.18. service_catalog The following table outlines the options available under the [service_catalog] group in the /etc/ironic-inspector/inspector.conf file. Table 6.17. service_catalog Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. 
service-name = None string value The default service_name for endpoint URL discovery. service-type = baremetal-introspection string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 6.1.19. swift The following table outlines the options available under the [swift] group in the /etc/ironic-inspector/inspector.conf file. Table 6.18. swift Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. container = ironic-inspector string value Default Swift container to use when creating objects. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. delete_after = 0 integer value Number of seconds that the Swift object will last before being deleted. (set to 0 to never delete the object). domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. 
min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = object-store string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version | null | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuration_reference/ironic_inspector |
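For orientation only, the groups documented above combine into an /etc/ironic-inspector/inspector.conf file along the following lines. Every value shown here is an illustrative assumption rather than a recommendation, and only options listed in the tables above are used:
[DEFAULT]
auth_strategy = noauth
listen_address = 192.168.24.1
listen_port = 5050
timeout = 3600
[database]
connection = sqlite:////var/lib/ironic-inspector/inspector.sqlite
[processing]
add_ports = pxe
keep_ports = all
store_data = database
ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk
[pxe_filter]
driver = iptables
[iptables]
dnsmasq_interface = br-ctlplane
A minimal sketch like this is typically validated by starting the service and checking its log output before tuning further options.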
Chapter 19. Activation and Passivation Modes | Chapter 19. Activation and Passivation Modes Activation is the process of loading an entry into memory and removing it from the cache store. Activation occurs when a thread attempts to access an entry that is in the store but not the memory (namely a passivated entry). Passivation mode allows entries to be stored in the cache store after they are evicted from memory. Passivation prevents unnecessary and potentially expensive writes to the cache store. It is used for entries that are frequently used or referenced and therefore not evicted from memory. While passivation is enabled, the cache store is used as an overflow tank, similar to virtual memory implementation in operating systems that swap memory pages to disk. The passivation flag is used to toggle passivation mode, a mode that stores entries in the cache store only after they are evicted from memory. 19.1. Passivation Mode Benefits The primary benefit of passivation mode is that it prevents unnecessary and potentially expensive writes to the cache store. This is particularly useful if an entry is frequently used or referenced and therefore is not evicted from memory. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-activation_and_passivation_modes
Chapter 16. Installing a cluster with the support for configuring multi-architecture compute machines | Chapter 16. Installing a cluster with the support for configuring multi-architecture compute machines An OpenShift Container Platform cluster with multi-architecture compute machines supports compute machines with different architectures. Note When you have nodes with multiple architectures in your cluster, the architecture of your image must be consistent with the architecture of the node. You must ensure that the pod is assigned to the node with the appropriate architecture and that it matches the image architecture. For more information on assigning pods to nodes, see Scheduling workloads on clusters with multi-architecture compute machines . You can install a Google Cloud Platform (GCP) cluster with the support for configuring multi-architecture compute machines. After installing the GCP cluster, you can add multi-architecture compute machines to the cluster in the following ways: Adding 64-bit x86 compute machines to a cluster that uses 64-bit ARM control plane machines and already includes 64-bit ARM compute machines. In this case, 64-bit x86 is considered the secondary architecture. Adding 64-bit ARM compute machines to a cluster that uses 64-bit x86 control plane machines and already includes 64-bit x86 compute machines. In this case, 64-bit ARM is considered the secondary architecture. Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator and deploy a ClusterPodPlacementConfig custom resource. For more information, see "Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator". 16.1. Installing a cluster with multi-architecture support You can install a cluster with the support for configuring multi-architecture compute machines. Prerequisites You installed the OpenShift CLI ( oc ). You have the OpenShift Container Platform installation program. You downloaded the pull secret for your cluster. Procedure Check that the openshift-install binary is using the multi payload by running the following command: $ ./openshift-install version Example output ./openshift-install 4.17.0 built from commit abc123etc release image quay.io/openshift-release-dev/ocp-release@sha256:abc123wxyzetc release architecture multi default architecture amd64 The output must contain release architecture multi to indicate that the openshift-install binary is using the multi payload. Update the install-config.yaml file to configure the architecture for the nodes. Sample install-config.yaml file with multi-architecture configuration apiVersion: v1 baseDomain: example.openshift.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: arm64 2 name: master platform: {} replicas: 3 # ... 1 Specify the architecture of the worker node. You can set this field to either arm64 or amd64 . 2 Specify the control plane node architecture. You can set this field to either arm64 or amd64 . Next steps Deploying the cluster Additional resources Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator | [
"./openshift-install version",
"./openshift-install 4.17.0 built from commit abc123etc release image quay.io/openshift-release-dev/ocp-release@sha256:abc123wxyzetc release architecture multi default architecture amd64",
"apiVersion: v1 baseDomain: example.openshift.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: arm64 2 name: master platform: {} replicas: 3"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_gcp/installing-gcp-multiarch-support |
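After the installer completes, it can be useful to confirm which architecture each node reports before adding compute machines of the secondary architecture. A quick check, assuming the oc client is logged in to the new cluster:
oc get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture
The ARCH column should show amd64 or arm64 for each node, matching the architecture fields set in install-config.yaml.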
13.5. Locking Down Printing | 13.5. Locking Down Printing You can disable the print dialog from being shown to users. This can be useful if you are giving temporary access to a user or you do not want the user to print to network printers. Important This feature will only work in applications which support it. Not all GNOME and third party applications have this feature enabled. These changes will have no effect on applications which do not support this feature. You can prevent applications from printing by locking down the org.gnome.desktop.lockdown.disable-printing key. Follow the procedure below. Procedure 13.5. Locking Down the org.gnome.desktop.lockdown.disable-printing Key Create the user profile if it does not already exist ( /etc/dconf/profile/user ): Create a local database for machine-wide settings in /etc/dconf/db/local.d/00-lockdown : Override the user's setting and prevent the user from changing it in /etc/dconf/db/local.d/locks/lockdown : Update the system databases by running: Once you have followed these steps, applications supporting this lockdown key will have printing disabled. Among such applications are Evolution , Evince , Eye of GNOME , Epiphany , and Gedit . | [
"user-db:user system-db:local",
"Prevent applications from printing disable-printing=true",
"List the keys used to configure lockdown /org/gnome/desktop/lockdown/disable-printing",
"dconf update"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/lockdown-printing |
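Note that a dconf keyfile groups keys under a path header, so the 00-lockdown fragment shown above is assumed to sit under the group that corresponds to the key path used in the lock file. A complete /etc/dconf/db/local.d/00-lockdown sketch would therefore look roughly as follows:
[org/gnome/desktop/lockdown]
# Prevent applications from printing
disable-printing=true
After saving the file, run dconf update as root so the change is compiled into the local system database.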
9.3. NFS Client Configuration | 9.3. NFS Client Configuration The mount command mounts NFS shares on the client side. Its format is as follows: This command uses the following variables: options A comma-delimited list of mount options; refer to Section 9.5, "Common NFS Mount Options" for details on valid NFS mount options. server The hostname, IP address, or fully qualified domain name of the server exporting the file system you wish to mount /remote/export The file system or directory being exported from the server , that is, the directory you wish to mount /local/directory The client location where /remote/export is mounted The NFS protocol version used in Red Hat Enterprise Linux 6 is identified by the mount options nfsvers or vers . By default, mount will use NFSv4 with mount -t nfs . If the server does not support NFSv4, the client will automatically step down to a version supported by the server. If the nfsvers / vers option is used to pass a particular version not supported by the server, the mount will fail. The file system type nfs4 is also available for legacy reasons; this is equivalent to running mount -t nfs -o nfsvers=4 host : /remote/export /local/directory . Refer to man mount for more details. If an NFS share was mounted manually, the share will not be automatically mounted upon reboot. Red Hat Enterprise Linux offers two methods for mounting remote file systems automatically at boot time: the /etc/fstab file and the autofs service. Refer to Section 9.3.1, "Mounting NFS File Systems using /etc/fstab " and Section 9.4, " autofs " for more information. 9.3.1. Mounting NFS File Systems using /etc/fstab An alternate way to mount an NFS share from another machine is to add a line to the /etc/fstab file. The line must state the hostname of the NFS server, the directory on the server being exported, and the directory on the local machine where the NFS share is to be mounted. You must be root to modify the /etc/fstab file. Example 9.1. Syntax example The general syntax for the line in /etc/fstab is as follows: The mount point /pub must exist on the client machine before this command can be executed. After adding this line to /etc/fstab on the client system, use the command mount /pub , and the mount point /pub is mounted from the server. The /etc/fstab file is referenced by the netfs service at boot time, so lines referencing NFS shares have the same effect as manually typing the mount command during the boot process. A valid /etc/fstab entry to mount an NFS export should contain the following information: The variables server , /remote/export , /local/directory , and options are the same ones used when manually mounting an NFS share. Refer to Section 9.3, "NFS Client Configuration" for a definition of each variable. Note The mount point /local/directory must exist on the client before /etc/fstab is read. Otherwise, the mount will fail. For more information about /etc/fstab , refer to man fstab . | [
"mount -t nfs -o options server : /remote/export /local/directory",
"server:/usr/local/pub /pub nfs defaults 0 0",
"server : /remote/export /local/directory nfs options 0 0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/nfs-clientconfig |
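As a concrete illustration of the options placeholder, the following lines are a sketch only; the export name comes from the example above, and the option values are arbitrary choices rather than recommendations:
mount -t nfs -o nfsvers=4,rsize=32768,wsize=32768 server:/usr/local/pub /pub
The equivalent persistent entry in /etc/fstab would be:
server:/usr/local/pub    /pub    nfs    nfsvers=4,rsize=32768,wsize=32768    0 0
Here nfsvers=4 requests NFSv4 as described above, and rsize/wsize adjust the read and write transfer sizes.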
Chapter 13. TigerVNC | Chapter 13. TigerVNC TigerVNC (Tiger Virtual Network Computing) is a system for graphical desktop sharing which allows you to remotely control other computers. TigerVNC works on the client-server principle: a server shares its output ( vncserver ) and a client ( vncviewer ) connects to the server. Note Unlike in previous Red Hat Enterprise Linux distributions, TigerVNC in Red Hat Enterprise Linux 7 uses the systemd system management daemon for its configuration. The /etc/sysconfig/vncserver configuration file has been replaced by /etc/systemd/system/[email protected] . 13.1. VNC Server vncserver is a utility which starts a VNC (Virtual Network Computing) desktop. It runs Xvnc with appropriate options and starts a window manager on the VNC desktop. vncserver allows users to run separate sessions in parallel on a machine which can then be accessed by any number of clients from anywhere. 13.1.1. Installing VNC Server To install the TigerVNC server, issue the following command as root : 13.1.2. Configuring VNC Server The VNC server can be configured to start a display for one or more users, provided that accounts for the users exist on the system, with optional parameters such as for display settings, network address and port, and security settings. Configuring a VNC Display for a Single User A configuration file named /etc/systemd/system/[email protected] is required. To create this file, copy the /usr/lib/systemd/system/[email protected] file as root : There is no need to include the display number in the file name because systemd automatically creates the appropriately named instance in memory on demand, replacing '%i' in the service file with the display number. For a single user it is not necessary to rename the file. For multiple users, a uniquely named service file for each user is required, for example, by adding the user name to the file name in some way. See Section 13.1.2.1, "Configuring VNC Server for Two Users" for details. Edit /etc/systemd/system/[email protected] and in the following line, located in the [Service] section, replace USER with the user name for whom you want to set up the VNC server. Leave the remaining lines of the file unmodified. Note The default size of the VNC desktop is 1024x768. A user's VNC session can be further configured using the ~/.vnc/config file. For example, to change the VNC window size, add the following line: geometry= <WIDTH> x <HEIGHT> Save the changes. To make the changes take effect immediately, issue the following command: Set the password for the user or users defined in the configuration file. Note that you need to switch from root to USER first. Important The stored password is not encrypted; anyone who has access to the password file can find the plain-text password. Proceed to Section 13.1.3, "Starting VNC Server" . 13.1.2.1. Configuring VNC Server for Two Users If you want to configure more than one user on the same machine, create different template-type service files, one for each user. Create two service files, for example vncserver- USER_1 @.service and vncserver- USER_2 @.service . In both these files substitute USER with the correct user name. Set passwords for both users: 13.1.3. Starting VNC Server To start or enable the service, specify the display number directly in the command. The file configured above in Configuring a VNC Display for a Single User works as a template, in which %i is substituted with the display number by systemd . 
With a valid display number, execute the following command: You can also enable the service to start automatically at system start. Then, when you log in, vncserver is automatically started. As root , issue a command as follows: At this point, other users are able to use a VNC viewer program to connect to the VNC server using the display number and password defined. Provided a graphical desktop is installed, an instance of that desktop will be displayed. It will not be the same instance as that currently displayed on the target machine. 13.1.3.1. Configuring VNC Server for Two Users and Two Different Displays For the two configured VNC servers, [email protected] and [email protected], you can enable different display numbers. For example, the following commands will cause a VNC server for USER_1 to start on display 3, and a VNC server for USER_2 to start on display 5: 13.1.4. VNC setup based on xinetd with XDMCP for GDM VNC setup based on xinetd with X Display Manager Control Protocol (XDMCP) for GDM is a useful setup for client systems that consist mainly of thin clients. After the setup, clients are able to access the GDM login window and log in to any system account. The prerequisite for the setup is that the gdm , vnc , vnc-server & and xinetd packages are installed. Service xinetd must be enabled. System default target unit should be graphical.target . To get the currently set default target unit, use: The default target unit can be changed by using: Accessing the GDM login window and logging in Set up GDM to enable XDMCP by editing the /etc/gdm/custom.conf configuration file: Create a file called /etc/xinetd.d/xvncserver with the following content: In the server_args section, the -query localhost option will make each Xvnc instance query localhost for an xdmcp session. The -depth option specifies the pixel depth (in bits) of the VNC desktop to be created. Acceptable values are 8, 15, 16 and 24 - any other values are likely to cause unpredictable behavior of applications. Edit file /etc/services to have the service defined. To do this, append the following snippet to the /etc/services file: To ensure that the configuration changes take effect, reboot the machine. Alternatively, you can run the following. Change init levels to 3 and back to 5 to force gdm to reload. Verify that gdm is listening on UDP port 177. Restart the xinetd service. Verify that the xinetd service has loaded the new services. Test the setup using a vncviewer command: The command will launch a VNC session to the localhost where no password is asked. You will see a GDM login screen, and you will be able to log in to any user account on the system with a valid user name and password. Then you can run the same test on remote connections. Configure firewall for the setup. Run the firewall configuration tool and add TCP port 5950 to allow incoming connections to the system. 13.1.5. Terminating a VNC Session Similarly to enabling the vncserver service, you can disable the automatic start of the service at system start: Or, when your system is running, you can stop the service by issuing the following command as root : 13.2. Sharing an Existing Desktop By default a logged in user has a desktop provided by X Server on display 0 . A user can share their desktop using the TigerVNC server x0vncserver . 
Sharing an X Desktop To share the desktop of a logged in user, using the x0vncserver , proceed as follows: Enter the following command as root Set the VNC password for the user: Enter the following command as that user: Provided the firewall is configured to allow connections to port 5900 , the remote viewer can now connect to display 0 , and view the logged in users desktop. See Section 13.3.2.1, "Configuring the Firewall for VNC" for information on how to configure the firewall. 13.3. VNC Viewer vncviewer is a program which shows the graphical user interfaces and controls the vncserver remotely. For operating the vncviewer , there is a pop-up menu containing entries which perform various actions such as switching in and out of full-screen mode or quitting the viewer. Alternatively, you can operate vncviewer through the terminal. Enter vncviewer -h on the command line to list vncviewer 's parameters. 13.3.1. Installing VNC Viewer To install the TigerVNC client, vncviewer , issue the following command as root : 13.3.2. Connecting to VNC Server Once the VNC server is configured, you can connect to it from any VNC viewer. Connecting to a VNC Server Using a GUI Enter the vncviewer command with no arguments, the VNC Viewer: Connection Details utility appears. It prompts for a VNC server to connect to. If required, to prevent disconnecting any existing VNC connections to the same display, select the option to allow sharing of the desktop as follows: Select the Options button. Select the Misc. tab. Select the Shared button. Press OK to return to the main menu. Enter an address and display number to connect to: Press Connect to connect to the VNC server display. You will be prompted to enter the VNC password. This will be the VNC password for the user corresponding to the display number unless a global default VNC password was set. A window appears showing the VNC server desktop. Note that this is not the desktop the normal user sees, it is an Xvnc desktop. Connecting to a VNC Server Using the CLI Enter the viewer command with the address and display number as arguments: Where address is an IP address or host name. Authenticate yourself by entering the VNC password. This will be the VNC password for the user corresponding to the display number unless a global default VNC password was set. A window appears showing the VNC server desktop. Note that this is not the desktop the normal user sees, it is the Xvnc desktop. 13.3.2.1. Configuring the Firewall for VNC When using a non-encrypted connection, firewalld might block the connection. To allow firewalld to pass the VNC packets, you can open specific ports to TCP traffic. When using the -via option, traffic is redirected over SSH which is enabled by default in firewalld . Note The default port of VNC server is 5900. To reach the port through which a remote desktop will be accessible, sum the default port and the user's assigned display number. For example, for the second display: 2 + 5900 = 5902. For displays 0 to 3 , make use of firewalld 's support for the VNC service by means of the service option as described below. Note that for display numbers greater than 3 , the corresponding ports will have to be opened specifically as explained in Opening Ports in firewalld . Enabling VNC Service in firewalld Run the following command to see the information concerning firewalld settings: To allow all VNC connections from a specific address, use a command as follows: Note that these changes will not persist after the system start. 
To make permanent changes to the firewall, repeat the commands adding the --permanent option. See the Red Hat Enterprise Linux 7 Security Guide for more information on the use of firewall rich language commands. To verify the above settings, use a command as follows: To open a specific port or range of ports make use of the --add-port option to the firewall-cmd command Line tool. For example, VNC display 4 requires port 5904 to be opened for TCP traffic. Opening Ports in firewalld To open a port for TCP traffic in the public zone, issue a command as root as follows: To view the ports that are currently open for the public zone, issue a command as follows: A port can be removed using the firewall-cmd --zone= zone --remove-port= number/protocol command. Note that these changes will not persist after the system start. To make permanent changes to the firewall, repeat the commands adding the --permanent option. For more information on opening and closing ports in firewalld , see the Red Hat Enterprise Linux 7 Security Guide . 13.3.3. Connecting to VNC Server Using SSH VNC is a clear text network protocol with no security against possible attacks on the communication. To make the communication secure, you can encrypt your server-client connection by using the -via option. This will create an SSH tunnel between the VNC server and the client. The format of the command to encrypt a VNC server-client connection is as follows: Example 13.1. Using the -via Option To connect to a VNC server using SSH , enter a command as follows: When you are prompted to, type the password, and confirm by pressing Enter . A window with a remote desktop appears on your screen. Restricting VNC Access If you prefer only encrypted connections, you can prevent unencrypted connections altogether by using the -localhost option in the systemd.service file, the ExecStart line: This will stop vncserver from accepting connections from anything but the local host and port-forwarded connections sent using SSH as a result of the -via option. For more information on using SSH , see Chapter 12, OpenSSH . 13.4. Additional Resources For more information about TigerVNC, see the resources listed below. Installed Documentation vncserver(1) - The manual page for the VNC server utility. vncviewer(1) - The manual page for the VNC viewer. vncpasswd(1) - The manual page for the VNC password command. Xvnc(1) - The manual page for the Xvnc server configuration options. x0vncserver(1) - The manual page for the TigerVNC server for sharing existing X servers. | [
"~]# yum install tigervnc-server",
"~]# cp /usr/lib/systemd/system/[email protected] /etc/systemd/system/[email protected]",
"ExecStart=/usr/bin/vncserver_wrapper <USER> %i",
"~]# systemctl daemon-reload",
"~]# su - USER ~]USD vncpasswd Password: Verify:",
"~]USD su - USER_1 ~]USD vncpasswd Password: Verify: ~]USD su - USER_2 ~]USD vncpasswd Password: Verify:",
"~]# systemctl start vncserver@:display_number.service",
"~]# systemctl enable vncserver@:display_number.service",
"~]# systemctl start vncserver-USER_1@:3.service ~]# systemctl start vncserver-USER_2@:5.service",
"~]# yum install gdm tigervnc tigervnc-server xinetd",
"~]# systemctl enable xinetd.service",
"~]# systemctl get-default",
"~]# systemctl set-default target_name",
"[xdmcp] Enable=true",
"service service_name { disable = no protocol = tcp socket_type = stream wait = no user = nobody server = /usr/bin/Xvnc server_args = -inetd -query localhost -once -geometry selected_geometry -depth selected_depth securitytypes=none }",
"VNC xinetd GDM base service_name 5950/tcp",
"init 3 init 5",
"netstat -anu|grep 177 udp 0 0 0.0.0.0:177 0.0.0.0:*",
"~]# systemctl restart xinetd.service",
"netstat -anpt|grep 595 tcp 0 0 :::5950 :::* LISTEN 3119/xinetd",
"vncviewer localhost:5950",
"~]# firewall-cmd --permanent --zone=public --add-port=5950/tcp ~]# firewall-cmd --reload",
"~]# systemctl disable vncserver@:display_number.service",
"~]# systemctl stop vncserver@:display_number.service",
"~]# yum install tigervnc-server",
"~]USD vncpasswd Password: Verify:",
"~]USD x0vncserver -PasswordFile=.vnc/passwd -AlwaysShared=1",
"~]# yum install tigervnc",
"address : display_number",
"vncviewer address : display_number",
"~]USD firewall-cmd --list-all",
"~]# firewall-cmd --add-rich-rule='rule family=\"ipv4\" source address=\"192.168.122.116\" service name=vnc-server accept' success",
"~]# firewall-cmd --list-all public (default, active) interfaces: bond0 bond0.192 sources: services: dhcpv6-client ssh ports: masquerade: no forward-ports: icmp-blocks: rich rules: rule family=\"ipv4\" source address=\"192.168.122.116\" service name=\"vnc-server\" accept",
"~]# firewall-cmd --zone=public --add-port=5904/tcp success",
"~]# firewall-cmd --zone=public --list-ports 5904/tcp",
"vncviewer -via user @ host : display_number",
"~]USD vncviewer -via [email protected]:3",
"ExecStart=/usr/sbin/runuser -l user -c \"/usr/bin/vncserver -localhost %i\""
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/ch-TigerVNC |
Chapter 2. Requirements | Chapter 2. Requirements 2.1. Subscriptions and repositories It is important to keep the subscription, kernel, and patch level identical on all cluster nodes and to ensure that the correct repositories are enabled. Check out the following documentation for guidelines on how to enable the required subscriptions and repositories for running SAP NetWeaver or SAP S/4HANA application servers on RHEL 8 and have them managed by the RHEL HA Add-On: RHEL for SAP Subscriptions and Repositories . 2.2. Storage requirements The directories used by a SAP S/4HANA installation that is managed by the cluster must be set up according to the guidelines provided by SAP. See SAP Directories for more information. 2.2.1. Local directories As per SAP's guidance , the /usr/sap/ , /usr/sap/SYS/ , and /usr/sap/<SAPSID>/ directories should be created locally on each node. While /usr/sap/ will contain some additional files and directories after the installation of the SAP system that are specific to the node (for example, /usr/sap/sapservices , and /usr/sap/hostctrl ), /usr/sap/SYS/ only contains symlinks to other files and directories, and /usr/sap/<SAPSID>/ is primarily used as a mountpoint for the instance-specific directories. 2.2.2. Instance Specific Directories For the (A)SCS , ERS , and any other application server instance that is managed by the cluster, the instance-specific directory must be created on a separate SAN LUN or NFS export that can be mounted by the cluster as a local directory on the node where an instance is supposed to be running. For example: (A)SCS : /usr/sap/<SAPSID>/ASCS<Ins#>/ ERS : /usr/sap/<SAPSID>/ERS<Ins#>/ App Server: /usr/sap/<SAPSID>/D<Ins#>/ The cluster configuration must include resources for managing the filesystems for the instance directories as part of the resource group that is used to manage the instance and the virtual IP so that the cluster can automatically mount the filesystem on the node where the instance should be running. When using SAN LUNs for instance-specific directories, customers must use HA-LVM to ensure that the instance directories can only be mounted on one node at a time. The resources for managing the logical volumes (if SAN LUNS are used) and the filesystems must always be configured before the resource that is used for managing the SAP instance to ensure that the filesystem is mounted when the cluster attempts to start the instance itself. With the exception of NFS , using a shared file system (for example, GFS2) to host all the instance-specific directories and make them available on all cluster nodes at the same time is not supported for the solution described in this document. When using NFS exports for specific directories, if the directories are created on the same directory tree on an NFS file server, such as Azure NetApp Files (ANF) or Amazon EFS, the option force_unmount=safe must be used when configuring the Filesystem resource. This option will ensure that the cluster only stops the processes running on the specific NFS export instead of stopping all processes running on the directory tree where the exports have been created (see During failover of a pacemaker resource, a Filesystem resource kills processes not using the filesystem for more information). 2.2.3. 
Shared Directories The following directories must be available on all servers running SAP instances of an SAP system: /sapmnt/ /usr/sap/trans/ The /sapmnt/ directory must also be accessible on all other servers that are running services that are part of the SAP system (for example, the servers hosting the HANA DB instances or servers hosting additional application servers not managed by the cluster). To share the /sapmnt/ and /usr/sap/trans/ directories between all the servers hosting services of the same SAP system, either one of the following methods can be used: Using an external NFS server (as documented in Support Policies for RHEL High Availability Clusters - Management of Highly Available Filesystem Mounts using the same host as an NFS server and as an NFS client that mounts the same NFS exports ("loopback mounts") from this NFS server at the same time is not supported). Using the GFS2 filesystem (this requires all nodes to have Resilient Storage Add-on subscriptions, including servers that are running SAP instances not managed by the cluster). The shared directories can either be statically mounted via /etc/fstab or the mounts can be managed by the cluster (in this case, it must be ensured that the cluster mounts the /sapmnt/ directory on the cluster nodes before attempting to start any SAP instances by setting up appropriate constraints). 2.3. Fencing/STONITH As documented at Support Policies for RHEL High Availability Clusters - General Requirements for Fencing/STONITH , a working Fencing/STONITH device must be enabled on each cluster node in order for an HA cluster setup using the RHEL HA Add-on to be fully supported. Which Fencing/STONITH device to use depends on the platform the cluster is running on. Please check out the Fencing/STONITH section in the Support Policies for RHEL High Availability Clusters for recommendations on fencing agents, or consult with your hardware or cloud provider to find out which fence device to use on their platform. Note Using fence_scsi/fence_mpath as the fencing device for HA cluster setups for managing SAP NetWeaver/S/4HANA application server instances is not a supported option since, as documented in Support Policies for RHEL High Availability Clusters - fence_scsi and fence_mpath these fence devices can only be used for cluster setups that manage shared storage, which is simultaneously accessed by multiple clients for reading and writing. Since the main purpose of a HA cluster for managing SAP NetWeaver/S/4HANA is to manage the SAP application server instances and not the shared directories that are needed in such environments, using fence_scsi/fence_mpath could result in the SAP instances not being stopped in case a node needs to be fenced (since fence_scsi/fence_mpath normally only block access to the storage devices managed by the cluster). 2.4. Quorum While pacemaker provides some built-in mechanisms to determine if a cluster is quorate or not, in some situations it might be desirable to add additional "quorum devices" in the cluster setup to help the cluster determine which side of the cluster should stay up and running in case a "split-brain" situation occurs. For HA cluster setups that are used for managing SAP Application server instances, a quorum device is not required by default, but it is possible to add quorum devices to such setups if needed. The options for setting up quorum devices vary depending on the configuration. 
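Although a quorum device is optional for these setups, the following hedged sketch illustrates what adding one might look like; the arbiter host name qnetd.example.com is hypothetical, and it is assumed that the corosync-qnetd package is installed on the arbiter and corosync-qdevice on the cluster nodes.
# On the arbiter host (not a cluster member), set up and start the qnetd daemon:
pcs qdevice setup model net --enable --start
# On one cluster node, register the quorum device and verify it:
pcs quorum device add model net host=qnetd.example.com algorithm=ffsplit
pcs quorum device status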
Please review the following guidelines for more information: Design Guidance for RHEL High Availability Clusters - Considerations with qdevice Quorum Arbitration . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/configuring_ha_clusters_to_manage_sap_netweaver_or_sap_s4hana_application_server_instances_using_the_rhel_ha_add-on/asmb_req_v8-configuring-clusters-to-manage |
probe::tcpmib.ActiveOpens | probe::tcpmib.ActiveOpens Name probe::tcpmib.ActiveOpens - Count an active opening of a socket Synopsis tcpmib.ActiveOpens Values op value to be added to the counter (default value of 1) sk pointer to the struct sock being acted on Description The socket pointed to by sk is filtered by the function tcpmib_filter_key . If it passes the filter, it is counted in the global ActiveOpens (equivalent to SNMP's MIB TCP_MIB_ACTIVEOPENS). | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-tcpmib-activeopens
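For illustration, a small SystemTap sketch using this probe point follows; it is a hedged example that assumes the systemtap package and matching kernel debuginfo are installed, and the script path and 10-second interval are arbitrary.
# Count active TCP opens per executable for 10 seconds, then print the totals:
cat > /tmp/active_opens.stp <<'EOF'
global opens
probe tcpmib.ActiveOpens { opens[execname()] += op }
probe timer.s(10) {
  foreach (name in opens)
    printf("%16s : %d\n", name, opens[name])
  exit()
}
EOF
stap /tmp/active_opens.stp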
Chapter 55. SensorUpgradeService | Chapter 55. SensorUpgradeService 55.1. TriggerSensorUpgrade POST /v1/sensorupgrades/cluster/{id} 55.1.1. Description 55.1.2. Parameters 55.1.2.1. Path Parameters Name Description Required Default Pattern id X null 55.1.3. Return Type Object 55.1.4. Content Type application/json 55.1.5. Responses Table 55.1. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 55.1.6. Samples 55.1.7. Common object reference 55.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 55.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 55.1.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 55.2. GetSensorUpgradeConfig GET /v1/sensorupgrades/config 55.2.1. Description 55.2.2. Parameters 55.2.3. Return Type V1GetSensorUpgradeConfigResponse 55.2.4. Content Type application/json 55.2.5. 
Responses Table 55.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetSensorUpgradeConfigResponse 0 An unexpected error response. RuntimeError 55.2.6. Samples 55.2.7. Common object reference 55.2.7.1. GetSensorUpgradeConfigResponseSensorAutoUpgradeFeatureStatus Enum Values NOT_SUPPORTED SUPPORTED 55.2.7.2. GetSensorUpgradeConfigResponseUpgradeConfig Field Name Required Nullable Type Description Format enableAutoUpgrade Boolean autoUpgradeFeature GetSensorUpgradeConfigResponseSensorAutoUpgradeFeatureStatus NOT_SUPPORTED, SUPPORTED, 55.2.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 55.2.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 55.2.7.4. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 55.2.7.5. 
V1GetSensorUpgradeConfigResponse Field Name Required Nullable Type Description Format config GetSensorUpgradeConfigResponseUpgradeConfig 55.3. UpdateSensorUpgradeConfig POST /v1/sensorupgrades/config 55.3.1. Description 55.3.2. Parameters 55.3.2.1. Body Parameter Name Description Required Default Pattern body V1UpdateSensorUpgradeConfigRequest X 55.3.3. Return Type Object 55.3.4. Content Type application/json 55.3.5. Responses Table 55.3. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 55.3.6. Samples 55.3.7. Common object reference 55.3.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 55.3.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 55.3.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 55.3.7.3. 
StorageSensorUpgradeConfig SensorUpgradeConfig encapsulates configuration relevant to sensor auto-upgrades. Field Name Required Nullable Type Description Format enableAutoUpgrade Boolean Whether to automatically trigger upgrades for out-of-date sensors. 55.3.7.4. V1UpdateSensorUpgradeConfigRequest Field Name Required Nullable Type Description Format config StorageSensorUpgradeConfig 55.4. TriggerSensorCertRotation POST /v1/sensorupgrades/rotateclustercerts/{id} 55.4.1. Description 55.4.2. Parameters 55.4.2.1. Path Parameters Name Description Required Default Pattern id X null 55.4.3. Return Type Object 55.4.4. Content Type application/json 55.4.5. Responses Table 55.4. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 55.4.6. Samples 55.4.7. Common object reference 55.4.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 55.4.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 55.4.7.2. 
RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/sensorupgradeservice |
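As a hedged illustration of calling the endpoints documented above from the command line: the Central address central.example.com, the ROX_API_TOKEN variable, and the bearer-token header are assumptions not taken from this reference, and -k only skips TLS verification for the example.
# Read the current sensor auto-upgrade configuration (GET /v1/sensorupgrades/config):
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
     "https://central.example.com/v1/sensorupgrades/config"
# Enable auto-upgrades (POST /v1/sensorupgrades/config; the body mirrors V1UpdateSensorUpgradeConfigRequest):
curl -sk -X POST -H "Authorization: Bearer ${ROX_API_TOKEN}" -H "Content-Type: application/json" \
     -d '{"config": {"enableAutoUpgrade": true}}' \
     "https://central.example.com/v1/sensorupgrades/config"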
28.2. Managing Certificates and Certificate Authorities | 28.2. Managing Certificates and Certificate Authorities Almost every IdM topology will include an integrated Dogtag Certificate System to manage certificates for servers/replicas, hosts, users, and services within the IdM domain. The Dogtag Certificate System configuration itself may require changes as the domain and the physical machines change. Note Using more than one certificate authority (CA) signing certificate within your IdM environment is not supported in Red Hat Enterprise Linux 6. To support this configuration, upgrade your IdM systems to Red Hat Enterprise Linux 7. 28.2.1. Renewing CA Certificates Issued by External CAs All certificates issued by the IdM servers, such as host and user certificates (including subsystem and server certificates used by internal IdM services), are tracked by the certmonger utility and automatically renewed as they near expiration. There is one exception: the CA certificate itself. This certificate is not automatically renewed when it expires. Warning Make sure to always renew the CA certificate in time before it expires. Note that you must monitor the expiration date of the CA certificate yourself. IdM does not monitor the expiration date automatically in Red Hat Enterprise Linux 6. The CA certificate must be renewed through the exernal CA which issued it, and then manually updated in the certificate databases (also called NSS databases ). This is done using the certutil NSS security utility. [8] Note It is not possible to renew the CA certificate using the IdM web UI or IdM command-line utilities. There are some requirements for renewing the certificate: The external CA which issued the certificate must allow renewals. The CA's private key must not change. The new certificate should have the same subject name as the original certificate. You need the original CSR (Certificate Signing Request) in order to obtain a new certificate. You may be able to find this in one of three places: The external CA may still have a copy of it, in the /root/ipa.csr file on the first-installed IdM server, in the ca.signing.certreq section of the /etc/pki-ca/CS.cfg file on the first-installed IdM server. This will need to be converted to the PEM format. You also need to know the nickname of your CA in the NSS databases. It is usually <REALM> IPA CA . We use EXAMPLE.COM IPA CA here. You can query the Apache database to find out the current nickname by running the following command: 28.2.1.1. The Renewal Procedure The renewal must take place in the period in which your other certificates are still valid. Your CA needs to be running in order to renew its own subsystem certificates. If you try to renew the CA certificate after it has expired such that its validity dates are past the expiration date of the CA subsystem certificates, your IdM server will not work. Renew the Certificate Give the CSR to your external CA and have them issue you a new certificate. We assume that the resulting certificate is saved into the /root/ipa.crt file. We also assume that the /root/external-ca.pem file contains the external CA certificate chain in the PEM format. The renewal needs to be done on the IdM CA designated for managing renewals. One way to identify the first-installed IdM server is to see if the value for subsystem.select is New : An alternative method is to look for the renew_ca_cert post-save command in the output of the getcert list command. 
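Both identification methods can be checked quickly from a shell on each candidate server; this is a hedged sketch and the exact output will differ per deployment.
# The renewal master reports "New" for subsystem.select:
grep subsystem.select /etc/pki-ca/CS.cfg
# Only the renewal master tracks certificates with the renew_ca_cert post-save command:
getcert list | grep renew_ca_cert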
Install the new CA certificate on your first-installed IdM server The CA needs to be shut down in order to update its certificate: Update the CA certificate NSS database: Replace the value of ca.signing.cert in /etc/pki-ca/CS.cfg . This is the base64 value of the certificate. You can obtain this by removing the BEGIN/END blocks from ipa.crt and compressing it into a single line. Update the Apache NSS database: Update the LDAP server instances: Update the CA certificate in the file system: Update the shared system database: Restart the service: Update the CA certificate in LDAP. First, convert the certificate to the DER format: Add the certificate to LDAP: Install the new CA certificate on other IdM servers with a CA Copy the updated certificate to the machine and stop the service. Let's assume the file is /root/ipa.crt . Update the Apache NSS database: Replace the value of ca.signing.cert in /etc/pki-ca/CS.cfg . This is the base64 value of the certificate. You can obtain this by removing the BEGIN/END blocks from ipa.crt and compressing it into a single line. Update the Apache NSS database: Update the LDAP server instances: Update the CA certificate in the file system: Update the shared system database: Restart the service: Install the new CA certificate on other IdM masters without a CA Copy the updated certificate to the machine and stop the service. Let's assume the file is /root/ipa.crt . Update the Apache NSS database: Update the LDAP server instances: Update the CA certificate in the file system: Update the shared system database: Restart the service: Install the new CA certificate on all IdM client machines Retrieve the updated IdM CA certificate. Let's assume the file is /tmp/ipa.crt . 28.2.2. Renewing CA Certificates Issued by the IdM CA All certificates issued by the IdM servers, such as host and user certificates (including subsystem and server certificates used by internal IdM services), are tracked by the certmonger utility and automatically renewed as they near expiration. There is one exception: the CA certificate itself. This certificate is not automatically renewed when it expires. Warning Make sure to always renew the CA certificate in time before it expires. Note that you must monitor the expiration date of the CA certificate yourself. IdM does not monitor the expiration date automatically in Red Hat Enterprise Linux 6. 28.2.2.1. The Renewal Procedure The renewal must take place in the period in which your other certificates are still valid. Your CA needs to be running in order to renew its own subsystem certificates. If you try to renew the CA certificate after it has expired such that its validity dates are past the expiration date of the CA subsystem certificates, your IdM server will not work. Renew the Signing Certificate of your IdM CA and install the new CA certificate on your first-installed IdM server Ensure IPA is stopped: Ensure ntpd is not running: Start the Directory Server and ensure it is running: Start the Dogtag CA and ensure it is running: Enter the following command to attempt to renew the Dogtag CA signing certificate directly via the certmonger helper, dogtag-ipa-renew-agent-submit : Update the CA certificate NSS database: Replace the value of ca.signing.cert in /etc/pki-ca/CS.cfg . This is the base64 value of the certificate. You can obtain this by removing the BEGIN/END blocks from ipa.crt and compressing it into a single line. 
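One possible way to produce that single-line base64 value from the renewed certificate, assuming the PEM file is /root/ipa.crt as above:
# Strip the BEGIN/END markers and join the remaining base64 lines into a single line:
grep -v -- '-----' /root/ipa.crt | tr -d '\n'; echo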
Update the Apache NSS database: Update the LDAP server instances: Update the CA certificate in the file system: Update the shared system database: Restart the service: Update the CA certificate in LDAP. First, convert the certificate to the DER format: Add the certificate to LDAP: Use ipa-getcert list to list all requests tracked by certmonger: If the output shows that any of the subsystem certificates are already expired, use ipa-getcert resubmit on each of them one by one to renew the certificates. For more details, see the Dealing with expiring IDM CA certificates on Red Hat Enterprise Linux 6 and 7 Knowledgebase solution. Install the new CA certificate on other IdM servers with a CA Copy the updated certificate to the machine and stop the service. Let's assume the file is /root/ipa.crt . Update the Apache NSS database: Replace the value of ca.signing.cert in /etc/pki-ca/CS.cfg . This is the base64 value of the certificate. You can obtain this by removing the BEGIN/END blocks from ipa.crt and compressing it into a single line. Update the Apache NSS database: Update the LDAP server instances: Update the CA certificate in the file system: Update the shared system database: Restart the service: Install the new CA certificate on other IdM masters without a CA Copy the updated certificate to the machine and stop the service. Let's assume the file is /root/ipa.crt . Update the Apache NSS database: Update the LDAP server instances: Update the CA certificate in the file system: Update the shared system database: Restart the service: Install the new CA certificate on all IdM client machines Retrieve the updated IdM CA certificate. Let's assume the file is /tmp/ipa.crt . 28.2.3. Configuring Alternate Certificate Authorities IdM creates a Dogtag Certificate System certificate authority (CA) during the server installation process. To use an external CA, it is possible to create the required server certificates and then import them into the 389 Directory Server and the HTTP server, which require IdM server certificates. Note Save an ASCII copy of the CA certificate as /usr/share/ipa/html/ca.crt . This allows users to download the correct certificate when they configure their browsers. Use the ipa-server-certinstall command to install the certificate. To keep using browser autoconfiguration in Firefox, regenerate the /usr/share/ipa/html/configure.jar file. Create a directory, and then create the new security databases in that directory. Import the PKCS #12 file for the signing certificate into that directory. Make a temporary signing directory, and copy the IdM JavaScript file to that directory. Use the object signing certificate to sign the JavaScript file and to regenerate the configure.jar file. 28.2.4. Changing Which Server Generates CRLs The master CA is the authoritative CA; it has the root CA signing key and generates CRLs which are distributed among the other servers and replicas in the topology. In general, the first IdM server installed owns the master CA in the PKI hierarchy. All subsequent replica databases are cloned (or copied) directly from that master database as part of running ipa-replica-install . Note The only reason to replace the master server is if the master server is being taken offline. There has to be a root CA which can issue CRLs and ultimately validate certificate checks. As explained in Section 1.3.1, "About IdM Servers and Replicas" , all servers and replicas work together to share data. This arrangement is the server topology . 
Servers (created with ipa-server-install ) is almost always created to host certificate authority services [9] . These are the original CA services. When a replica is created (with ipa-replica-install ), it is based on the configuration of an existing server. A replica can host CA services, but this is not required. After they are created, servers and replicas are equal peers in the server topology. They are all read-write data masters and replicate information to each other through multi-master replication. Servers and replicas which host a CA are also equal peers in the topology. They can all issue certificates and keys to IdM clients, and they all replicate information amongst themselves. The only difference between a server and a replica is which IdM instance issues the CRL. When the first server is installed, it is configured to issue CRLs. In its CA configuration file ( /var/lib/pki-ca/conf/CS.cfg ), it has CRL generation enabled: All replicas point to that master CA as the source for CRL information and disable the CRL settings: There must be one instance somewhere in the IdM topology which issues CRLs. If the original server is going to be taken offline or decommissioned, a replica needs to be configured to take its place. Promoting a replica to a master server changes its configuration and enables it to issue CRLs and function as the root CA. To move CRL generation from a server to a replica, first decommission the original master CA. Identify which server instance is the master CA server. Both CRL generation and renewal operations are handled by the same CA server. So, the master CA can be identified by having the renew_ca_cert certificate being tracked by certmonger . On the original master CA , disable tracking for all of the original CA certificates. Reconfigure the original master CA to retrieve renewed certificates from a new master CA. Copy the renewal helper into the certmonger directory, and set the appropriate permissions. Update the SELinux configuration. Restart certmonger . Check that the CA is listed to retrieve certificates. This is printed in the CA configuration. Get the CA certificate database PIN. Configure certmonger track the certificates for external renewal. This requires the database PIN. Stop CRL generation on the original master CA. Stop CA service: Open the CA configuration file. Change the values of the ca.crl.MasterCRL.enableCRLCache and ca.crl.MasterCRL.enableCRLUpdates parameters to false to disable CRL generation. Start CA service: Configure Apache to redirect CRL requests to the new master. Open the CA proxy configuration. Uncomment the RewriteRule on the last line: Restart Apache: Then, set up a replica as a new master: Stop tracking the CA's certificates to change the renewal settings. As a clone, the CA was configured to retrieve its renewed certificates from the master; as the master CA, it will issue the renewed certificates. Get the PIN for the CA certificate database. Set up the certificates to be tracked in certmonger using the renewal agent profile. Configure the new master CA to generate CRLs. Stop CA service: Open the CA configuration file. Change the values of the ca.crl.MasterCRL.enableCRLCache and ca.crl.MasterCRL.enableCRLUpdates parameters to true to enable CRL generation. Start CA service: Configure Apache to disable redirect CRL requests. As a clone, all CRL requests were routed to the original master. As the new master, this instance will respond to CRL requests. Open the CA proxy configuration. 
Comment out the RewriteRule argument on the last line: Restart Apache: 28.2.5. Configuring OCSP Responders A certificate is created with a validity period, meaning it has a point where it expires and is no longer valid. The expiration date is contained in the certificate itself, so a client always checks the validity period in the certificate to see if the certificate is still valid. However, a certificate can also be revoked before its validity period is up, but this information is not contained in the certificate. A CA publishes a certificate revocation list (CRL), which contains a complete list of every certificate that was issued by that CA and subsequently revoked. A client can check the CRL to see if a certificate within its validity period has been revoked and is, therefore, invalid. Validity checks are performed using the online certificate status protocol (OCSP), which sends a request to an OCSP responder . Each CA integrated with the IdM server uses an internal OCSP responder, and any client which runs a validity check can check the IdM CA's internal OCSP responder. Every certificate issued by the IdM CA puts its OCSP responder service URL in the certificate. For example: Note For the IdM OCSP responder to be available, port 9180 needs to be open in the firewall. 28.2.5.1. Using an OSCP Responder with SELinux Clients can use the Identity Management OCSP responder to check certificate validity or to retrieve CRLs. A client can be a number of different services, but is most frequently an Apache server and the mod_revocator module (which handles CRL and OCSP operations). The Identity Management CA has an OCSP responder listening over port 9180, which is also the port available for CRL retrieval. This port is protected by default SELinux policies to prevent unauthorized access. If an Apache server attempts to connect to the OCSP port, then it may be denied access by SELinux. The Apache server, on the local machine, must be granted access to port 9180 for it to be able to connect to the Identity Management OCSP responder. There are two ways to work around this by changing the SELinux policies: Edit the SELinux policy to allow Apache servers using the mod_revocator module to connect to port 9180: Generate a new SELinux policy to allow access based on the SELinux error logs for the mod_revocator connection attempt. 28.2.5.2. Changing the CRL Update Interval The CRL file is automatically generated by the Dogtag Certificate System CA every four hours. This interval can be changed by editing the Dogtag Certificate System configuration. Stop the CA server. Open the CS.cfg file. Change the ca.crl.MasterCRL.autoUpdateInterval to the new interval setting. Restart the CA server. 28.2.5.3. Changing the OCSP Responder Location Each IdM server generates its own CRL. Likewise, each IdM server uses its own OCSP responder, with its own OCSP responder URL in the certificates it issues. A DNS CNAME can be used by IdM clients, and then from there be redirected to the appropriate IdM server OCSP responder. Open the certificate profile. Change the policyset.serverCertSet.9.default.params.crlDistPointsPointName_0 parameter to the DNS CNAME hostname. Restart the CA server. That change must be made on every IdM server, with the crlDistPointsPointName_0 parameter set to the same hostname. [8] For more information about certutil , see the Mozilla NSS developer documentation . [9] The only exception to this is if system certificates are manually loaded during the installation for a CA-less installation. 
Otherwise, a Dogtag Certificate System instance is installed and configured. | [
"certutil -L -d /etc/httpd/alias",
"grep subsystem.select /etc/pki-ca/CS.cfg subsystem.select= New",
"Number of certificates and requests being tracked: 8. Request ID '20131125153455': status: MONITORING stuck: no key pair storage: type=NSSDB,location='/var/lib/pki-ca/alias',nickname='auditSigningCert cert-pki-ca',token='NSS Certificate DB',pin='455536908955' certificate: type=NSSDB,location='/var/lib/pki-ca/alias',nickname='auditSigningCert cert-pki-ca',token='NSS Certificate DB' CA: dogtag-ipa-renew-agent issuer: CN=Certificate Authority,O=EXAMPLE.COM subject: CN=CA Audit,O=EXAMPLE.COM expires: 2015-11-15 15:34:12 UTC pre-save command: /usr/lib64/ipa/certmonger/stop_pkicad post-save command: /usr/lib64/ipa/certmonger/renew_ca_cert \"auditSigningCert cert-pki-ca\" track: yes auto-renew: yes",
"service ipa stop",
"certutil -A -d /var/lib/pki-ca/alias -n 'caSigningCert cert-pki-ca' -t CT,C,C -a -i /root/ipa.crt",
"certutil -A -d /etc/httpd/alias -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"certutil -A -d /etc/dirsrv/slapd-EXAMPLE-COM -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt certutil -A -d /etc/dirsrv/slapd-PKI-IPA -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"cp /root/ipa.crt /etc/ipa/ca.crt cat /root/ipa.crt /root/external-ca.pem >/etc/httpd/alias/cacert.asc cp /etc/httpd/alias/cacert.asc /usr/share/ipa/html/ca.crt",
"certutil -A -d /etc/pki/nssdb -n 'IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"service ipa start",
"openssl x509 -outform DER -in /root/ipa.crt -out /tmp/ipa.der",
"kinit admin ldapmodify -Y GSSAPI SASL/GSSAPI authentication started SASL username: [email protected] SASL SSF: 56 SASL data security layer installed. dn: cn=CAcert,cn=ipa,cn=etc,dc=example,dc=com changetype: modify replace: cacertificate;binary cacertificate;binary:<file:///tmp/ipa.der",
"service ipa stop",
"certutil -A -d /var/lib/pki-ca/alias -n 'caSigningCert cert-pki-ca' -t CT,C,C -a -i /root/ipa.crt",
"certutil -A -d /etc/httpd/alias -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"certutil -A -d /etc/dirsrv/slapd-EXAMPLE-COM -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt certutil -A -d /etc/dirsrv/slapd-PKI-IPA -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"cp /root/ipa.crt /etc/ipa/ca.crt cat /root/ipa.crt /root/external-ca.pem >/etc/httpd/alias/cacert.asc cp /etc/httpd/alias/cacert.asc /usr/share/ipa/html/ca.crt",
"certutil -A -d /etc/pki/nssdb -n 'IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"service ipa start",
"service ipa stop",
"certutil -A -d /etc/httpd/alias -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"certutil -A -d /etc/dirsrv/slapd-EXAMPLE-COM -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt certutil -A -d /etc/dirsrv/slapd-PKI-IPA -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"cp /root/ipa.crt /etc/ipa/ca.crt cat /root/ipa.crt /root/external-ca.pem >/etc/httpd/alias/cacert.asc cp /etc/httpd/alias/cacert.asc /usr/share/ipa/html/ca.crt",
"certutil -A -d /etc/pki/nssdb -n 'IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"service ipa start",
"certutil -A -d /etc/pki/nssdb -n 'IPA CA' -t CT,C,C -a -i /tmp/ipa.crt cp /tmp/ipa.crt /etc/ipa/ca.crt",
"ipactl status ipactl stop",
"service ntpd status service ntpd stop",
"service dirsrv start service dirsrv status",
"service pki-cad start service pki-cad status",
"/usr/libexec/certmonger/dogtag-ipa-renew-agent-submit -D 1 -T caCACert | tail -n 1 | xargs /usr/libexec/certmonger/dogtag-ipa-renew-agent-submit -d /etc/httpd/alias -n ipaCert -p /etc/httpd/alias/pwdfile.txt -v -S",
"certutil -A -d /var/lib/pki-ca/alias -n 'caSigningCert cert-pki-ca' -t CT,C,C -a -i /root/ipa.crt",
"certutil -A -d /etc/httpd/alias -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"certutil -A -d /etc/dirsrv/slapd-EXAMPLE-COM -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt # certutil -A -d /etc/dirsrv/slapd-PKI-IPA -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"cp /root/ipa.crt /etc/ipa/ca.crt # cat /root/ipa.crt /root/external-ca.pem >/etc/httpd/alias/cacert.asc # cp /etc/httpd/alias/cacert.asc /usr/share/ipa/html/ca.crt",
"certutil -A -d /etc/pki/nssdb -n 'IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"ipactl start",
"openssl x509 -outform DER -in /root/ipa.crt -out /tmp/ipa.der",
"kinit admin # ldapmodify -Y GSSAPI SASL/GSSAPI authentication started SASL username: [email protected] SASL SSF: 56 SASL data security layer installed. dn: cn=CAcert,cn=ipa,cn=etc,dc=example,dc=com changetype: modify replace: cacertificate;binary cacertificate;binary:<file:///tmp/ipa.der",
"ipa-getcert list",
"service ipa stop",
"certutil -A -d /var/lib/pki-ca/alias -n 'caSigningCert cert-pki-ca' -t CT,C,C -a -i /root/ipa.crt",
"certutil -A -d /etc/httpd/alias -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"certutil -A -d /etc/dirsrv/slapd-EXAMPLE-COM -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt # certutil -A -d /etc/dirsrv/slapd-PKI-IPA -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"cp /root/ipa.crt /etc/ipa/ca.crt # cat /root/ipa.crt /root/external-ca.pem >/etc/httpd/alias/cacert.asc # cp /etc/httpd/alias/cacert.asc /usr/share/ipa/html/ca.crt",
"certutil -A -d /etc/pki/nssdb -n 'IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"service ipa start",
"service ipa stop",
"certutil -A -d /etc/httpd/alias -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"certutil -A -d /etc/dirsrv/slapd-EXAMPLE-COM -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt # certutil -A -d /etc/dirsrv/slapd-PKI-IPA -n 'EXAMPLE.COM IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"cp /root/ipa.crt /etc/ipa/ca.crt # cat /root/ipa.crt /root/external-ca.pem >/etc/httpd/alias/cacert.asc # cp /etc/httpd/alias/cacert.asc /usr/share/ipa/html/ca.crt",
"certutil -A -d /etc/pki/nssdb -n 'IPA CA' -t CT,C,C -a -i /root/ipa.crt",
"service ipa start",
"certutil -A -d /etc/pki/nssdb -n 'IPA CA' -t CT,C,C -a -i /tmp/ipa.crt # cp /tmp/ipa.crt /etc/ipa/ca.crt",
"/usr/sbin/ipa-server-certinstall -d /path/to/pkcs12.p12",
"mkdir /tmp/signdb certutil -N -d /tmp/signdb",
"pk12util -i /path/to/ pkcs12.p12 -d /tmp/signdb",
"mkdir /tmp/sign cp /usr/share/ipa/html/preferences.html /tmp/sign",
"signtool -d /tmp/signdb -k Signing_cert_nickname -Z /usr/share/ipa/html/configure.jar -e .html /tmp/sign",
"ca.crl. issuingPointId .enableCRLCache=true ca.crl. issuingPointId .enableCRLUpdates=true ca.listenToCloneModifications=false",
"ca.crl. issuingPointId .enableCRLUpdates=false",
"getcert list -d /var/lib/pki-ca/alias -n \"subsystemCert cert-pki-ca\" | grep post-save post-save command: /usr/lib64/ipa/certmonger/renew_ca_cert \"subsystemCert cert-pki-ca\"",
"getcert stop-tracking -d /var/lib/pki-ca/alias -n \"auditSigningCert cert-pki-ca\" Request \"20131127184547\" removed. getcert stop-tracking -d /var/lib/pki-ca/alias -n \"ocspSigningCert cert-pki-ca\" Request \"20131127184548\" removed. getcert stop-tracking -d /var/lib/pki-ca/alias -n \"subsystemCert cert-pki-ca\" Request \"20131127184549\" removed. getcert stop-tracking -d /etc/httpd/alias -n ipaCert Request \"20131127184550\" removed.",
"cp /usr/share/ipa/ca_renewal /var/lib/certmonger/cas/ca_renewal chmod 0600 /var/lib/certmonger/cas/ca_renewal",
"/sbin/restorecon /var/lib/certmonger/cas/ca_renewal",
"service certmonger restart",
"getcert list-cas CA 'dogtag-ipa-retrieve-agent-submit': is-default: no ca-type: EXTERNAL helper-location: /usr/libexec/certmonger/dogtag-ipa-retrieve-agent-submit",
"grep internal= /var/lib/pki-ca/conf/password.conf",
"getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /var/lib/pki-ca/alias -n \"auditSigningCert cert-pki-ca\" -B /usr/lib64/ipa/certmonger/stop_pkicad -C '/usr/lib64/ipa/certmonger/restart_pkicad \"auditSigningCert cert-pki-ca\"' -T \"auditSigningCert cert-pki-ca\" -P database_pin New tracking request \"20131127184743\" added. getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /var/lib/pki-ca/alias -n \"ocspSigningCert cert-pki-ca\" -B /usr/lib64/ipa/certmonger/stop_pkicad -C '/usr/lib64/ipa/certmonger/restart_pkicad \"ocspSigningCert cert-pki-ca\"' -T \"ocspSigningCert cert-pki-ca\" -P database_pin New tracking request \"20131127184744\" added. getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /var/lib/pki-ca/alias -n \"subsystemCert cert-pki-ca\" -B /usr/lib64/ipa/certmonger/stop_pkicad -C '/usr/lib64/ipa/certmonger/restart_pkicad \"subsystemCert cert-pki-ca\"' -T \"subsystemCert cert-pki-ca\" -P database_pin New tracking request \"20131127184745\" added. getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /etc/httpd/alias -n ipaCert -C /usr/lib64/ipa/certmonger/restart_httpd -T ipaCert -p /etc/httpd/alias/pwdfile.txt New tracking request \"20131127184746\" added.",
"service pki-cad stop",
"vim /var/lib/pki-ca/conf/CS.cfg",
"ca.crl.MasterCRL.enableCRLCache=false ca.crl.MasterCRL.enableCRLUpdates=false",
"service pki-cad start",
"vim /etc/httpd/conf.d/ipa-pki-proxy.conf",
"RewriteRule ^/ipa/crl/MasterCRL.bin https://server.example.com/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL [L,R=301,NC]",
"service httpd restart",
"getcert stop-tracking -d /var/lib/pki-ca/alias -n \"auditSigningCert cert-pki-ca\" Request \"20131127163822\" removed. getcert stop-tracking -d /var/lib/pki-ca/alias -n \"ocspSigningCert cert-pki-ca\" Request \"20131127163823\" removed. getcert stop-tracking -d /var/lib/pki-ca/alias -n \"subsystemCert cert-pki-ca\" Request \"20131127163824\" removed. getcert stop-tracking -d /etc/httpd/alias -n ipaCert Request \"20131127164042\" removed.",
"grep internal= /var/lib/pki-ca/conf/password.conf",
"getcert start-tracking -c dogtag-ipa-renew-agent -d /var/lib/pki-ca/alias -n \"auditSigningCert cert-pki-ca\" -B /usr/lib64/ipa/certmonger/stop_pkicad -C '/usr/lib64/ipa/certmonger/renew_ca_cert \"auditSigningCert cert-pki-ca\"' -P database_pin New tracking request \"20131127185430\" added. getcert start-tracking -c dogtag-ipa-renew-agent -d /var/lib/pki-ca/alias -n \"ocspSigningCert cert-pki-ca\" -B /usr/lib64/ipa/certmonger/stop_pkicad -C '/usr/lib64/ipa/certmonger/renew_ca_cert \"ocspSigningCert cert-pki-ca\"' -P database_pin New tracking request \"20131127185431\" added. getcert start-tracking -c dogtag-ipa-renew-agent -d /var/lib/pki-ca/alias -n \"subsystemCert cert-pki-ca\" -B /usr/lib64/ipa/certmonger/stop_pkicad -C '/usr/lib64/ipa/certmonger/renew_ca_cert \"subsystemCert cert-pki-ca\"' -P database_pin New tracking request \"20131127185432\" added. getcert start-tracking -c dogtag-ipa-renew-agent -d /etc/httpd/alias -n ipaCert -C /usr/lib64/ipa/certmonger/renew_ra_cert -p /etc/httpd/alias/pwdfile.txt New tracking request \"20131127185433\" added.",
"service pki-cad stop",
"vim /var/lib/pki-ca/conf/CS.cfg",
"ca.crl.MasterCRL.enableCRLCache=true ca.crl.MasterCRL.enableCRLUpdates=true",
"service pki-cad start",
"vim /etc/httpd/conf.d/ipa-pki-proxy.conf",
"RewriteRule ^/ipa/crl/MasterCRL.bin https://server.example.com/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL [L,R=301,NC]",
"service httpd restart",
"http://ipaserver.example.com:9180/ca/ocsp",
"semodule -i revoker.pp",
"audit2allow -a -M revoker",
"service pki-ca stop",
"vim /var/lib/pki-ca/conf/CS.cfg",
"service pki-ca start",
"vim /var/lib/pki-ca/profiles/ca/caIPAserviceCert.cfg",
"service pki-ca restart"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/cas |
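The OCSP responder URL shown in the chapter above can be exercised directly from a client to confirm that the responder answers validity queries. The following check is not part of the original procedure; it is a minimal sketch that assumes the IdM CA certificate is available at /etc/ipa/ca.crt and that the certificate to test has been saved locally. Adjust the hostname and file paths to your environment.

# Query the IdM OCSP responder for the status of a locally saved certificate (paths and hostname are examples)
openssl ocsp -issuer /etc/ipa/ca.crt \
             -cert /tmp/service-to-check.pem \
             -url http://ipaserver.example.com:9180/ca/ocsp \
             -resp_text

A "good" status in the response indicates the certificate has not been revoked; a "revoked" status means the CRL published by the IdM CA lists it.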
Storage | Storage Red Hat build of MicroShift 4.18 Configuring and managing cluster storage Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/storage/index |
Chapter 2. Build and Run a Java Application on the JBoss EAP for OpenShift Image | Chapter 2. Build and Run a Java Application on the JBoss EAP for OpenShift Image The following workflow demonstrates using the Source-to-Image (S2I) process to build and run a Java application on the JBoss EAP for OpenShift image. As an example, the kitchensink quickstart is used in this procedure. It demonstrates a Jakarta EE web-enabled database application using Jakarta Server Faces, Jakarta Contexts and Dependency Injection, Jakarta Enterprise Beans, Jakarta Persistence, and Jakarta Bean Validation. See the kitchensink quickstart that ships with JBoss EAP 7 for more information. 2.1. Prerequisites You have an OpenShift instance installed and operational. For more information on installing and configuring your OpenShift instance, see the OpenShift Container Platform Getting Started guide . 2.2. Prepare OpenShift for Application Deployment Log in to your OpenShift instance using the oc login command. Create a new project in OpenShift. A project allows a group of users to organize and manage content separately from other groups. You can create a project in OpenShift using the following command. For example, for the kitchensink quickstart, create a new project named eap-demo using the following command. Optional : Create a keystore and a secret. Note Creating a keystore and a secret is required if you are using any HTTPS-enabled features in your OpenShift project. For example, if you are using the eap74-https-s2i template, you must create a keystore and secret. This workflow demonstration for the kitchensink quickstart does not use an HTTPS template, so a keystore and secret are not required. Create a keystore. Warning The following commands generate a self-signed certificate, but for production environments Red Hat recommends that you use your own SSL certificate purchased from a verified Certificate Authority (CA) for SSL-encrypted connections (HTTPS). You can use the Java keytool command to generate a keystore: For example, for the kitchensink quickstart, use the following command to generate a keystore: Create a secret from the keystore. Create a secret from the previously created keystore using the following command. For example, for the kitchensink quickstart, use the following command to create a secret. 2.3. Configure Authentication to the Red Hat Container Registry Before you can import and use the JBoss EAP for OpenShift image, you must first configure authentication to the Red Hat Container Registry. Red Hat recommends that you create an authentication token using a registry service account to configure access to the Red Hat Container Registry. This means that you don't have to use or store your Red Hat account's username and password in your OpenShift configuration. Follow the instructions on the Red Hat Customer Portal to create an authentication token using a registry service account . Download the YAML file containing the OpenShift secret for the token. You can download the YAML file from the OpenShift Secret tab on your token's Token Information page. Create the authentication token secret for your OpenShift project using the YAML file that you downloaded: Configure the secret for your OpenShift project using the following commands, replacing the secret name in the example with the name of your secret created in the previous step. See the OpenShift documentation for more information on other methods for configuring access to secured registries .
See the Red Hat Customer Portal for more information on configuring authentication to the Red Hat Container Registry . 2.4. Import the Latest JBoss EAP for OpenShift Imagestreams and Templates You must import the latest JBoss EAP for OpenShift imagestreams and templates for your JDK into the namespace of your OpenShift project. Note Log in to the Red Hat Container Registry using your Customer Portal credentials to import the JBoss EAP imagestreams and templates. For more information, see Red Hat Container Registry Authentication . Import command for JDK 8 This command imports the following imagestreams and templates. The JDK 8 builder imagestream: jboss-eap74-openjdk8-openshift The JDK 8 runtime imagestream: jboss-eap74-openjdk8-runtime-openshift Note If you use OpenShift 3 and create an EAP 7.4 ImageStream for the first time, run the following command instead of oc replace : Import command for JDK 11 This command imports the following imagestreams and templates. The JDK 11 builder imagestream: jboss-eap74-openjdk11-openshift The JDK 11 runtime imagestream: jboss-eap74-openjdk11-runtime-openshift Import command for templates This command imports all templates specified in the command. Note The JBoss EAP imagestreams and templates imported using these commands are only available within that OpenShift project. If you have administrative access to the general openshift namespace and want the imagestreams and templates to be accessible by all projects, add -n openshift to the oc replace line of the command. For example: If you use the cluster-samples-operator, refer to the OpenShift documentation on configuring the cluster samples operator. See Configuring the Samples Operator for details about configuring the cluster samples operator. 2.5. Deploy a JBoss EAP Source-to-Image (S2I) Application to OpenShift After you import the images and templates, you can deploy applications to OpenShift. Prerequisites Optional : A template can specify default values for many template parameters, and you might have to override some, or all, of the defaults. To see template information, including a list of parameters and any default values, use the command oc describe template TEMPLATE_NAME . Procedure Create a new OpenShift application that uses the JBoss EAP for OpenShift image and the source code of your Java application. You can use one of the provided JBoss EAP for OpenShift templates for S2I builds. You can also choose to provision a trimmed server. For example, to deploy the kitchensink quickstart using the JDK 8 builder image, enter the following command to use the eap74-basic-s2i template in the eap-demo project, created in Prepare OpenShift for Application Deployment , with the kitchensink source code on GitHub. This quickstart does not support the trimming capability. 1 The template to use. 2 The latest imagestreams and templates were imported into the project's namespace , so you must specify the namespace where to find the imagestream. This is usually the project's name. 3 The name of the EAP builder image stream for JDK8. 4 The name of the EAP runtime image stream for JDK8. 5 URL to the repository containing the application source code. 6 The Git repository reference to use for the source code. This can be a Git branch or tag reference. 7 The directory within the source repository to build. As another example, to deploy the helloworld-html5 quickstart using the JDK 11 runtime image and trimming JBoss EAP to include only the jaxrs-server layer, enter the following command. 
The command uses the eap74-basic-s2i template in the eap-demo project, created in Prepare OpenShift for Application Deployment , with the helloworld-html5 source code on GitHub. 1 The template to use. 2 The latest imagestreams and templates were imported into the project's namespace , so you must specify the namespace where to find the imagestream. This is usually the project's name. 3 The name of the EAP builder image stream for JDK11. 4 The name of the EAP runtime image stream for JDK11. 5 URL to the repository containing the application source code. 6 The Git repository reference to use for the source code. This can be a Git branch or tag reference. 7 Provision a trimmed server with only the jaxrs-server layer. 8 The directory within the source repository to build. Note You might also want to configure environment variables when creating your new OpenShift application. For example, if you are using an HTTPS template such as eap74-https-s2i , you must specify the required HTTPS environment variables HTTPS_NAME , HTTPS_PASSWORD , and HTTPS_KEYSTORE to match your keystore details. Note If the template uses AMQ, you must include the AMQ_IMAGE_NAME parameter with the appropriate value. If the template uses SSO, you must include the SSO_IMAGE_NAME parameter with the appropriate value. Retrieve the name of the build configuration. Use the name of the build configuration from the previous step to view the Maven progress of the build. For example, for the kitchensink quickstart, the following command shows the progress of the Maven build. Additional Resources Capability Trimming in JBoss EAP for OpenShift 2.6. Post deployment tasks Depending on your application, some tasks might need to be performed after your OpenShift application has been built and deployed. This might include exposing a service so that the application is viewable from outside of OpenShift, or scaling your application to a specific number of replicas. Get the service name of your application using the following command. Expose the main service as a route so you can access your application from outside of OpenShift. For example, for the kitchensink quickstart, use the following command to expose the required service and port. Note If you used a template to create the application, the route might already exist. If it does, continue on to the next step. Get the URL of the route. Access the application in your web browser using the URL. The URL is the value of the HOST/PORT field from the command's output. If your application does not use the JBoss EAP root context, append the context of the application to the URL. For example, for the kitchensink quickstart, the URL might be http:// HOST_PORT_VALUE /kitchensink/ . Optionally, you can also scale up the application instance by running the following command. This increases the number of replicas to 3 . For example, for the kitchensink quickstart, use the following command to scale up the application. 2.7. Chained Build Support in JBoss EAP for OpenShift JBoss EAP for OpenShift supports chained builds in OpenShift. JBoss EAP for OpenShift templates employ chained builds. When you use these templates, two builds result: An intermediate image named [application name]-build-artifacts The final image, [application name] For details about chained builds, see the OpenShift documentation. Additional Resources OpenShift Chained build documentation | [
"oc new-project <project_name>",
"oc new-project eap-demo",
"keytool -genkey -keyalg RSA -alias <alias_name> -keystore <keystore_filename.jks> -validity 360 -keysize 2048",
"keytool -genkey -keyalg RSA -alias eapdemo-selfsigned -keystore keystore.jks -validity 360 -keysize 2048",
"oc create secret generic <secret_name> --from-file= <keystore_filename.jks>",
"oc create secret generic eap7-app-secret --from-file=keystore.jks",
"create -f 1234567_myserviceaccount-secret.yaml",
"secrets link default 1234567-myserviceaccount-pull-secret --for=pull secrets link builder 1234567-myserviceaccount-pull-secret --for=pull",
"replace -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-openshift-templates/eap74/eap74-openjdk8-image-stream.json",
"create -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-openshift-templates/eap74/eap74-openjdk8-image-stream.json",
"replace -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-openshift-templates/eap74/eap74-openjdk11-image-stream.json",
"for resource in eap74-amq-persistent-s2i.json eap74-amq-s2i.json eap74-basic-s2i.json eap74-https-s2i.json eap74-sso-s2i.json do oc replace -f https://raw.githubusercontent.com/jboss-container-images/jboss-eap-openshift-templates/eap74/templates/USD{resource} done",
"replace -n openshift -f",
"new-app --template=eap74-basic-s2i \\ 1 -p IMAGE_STREAM_NAMESPACE=eap-demo \\ 2 -p EAP_IMAGE_NAME=jboss-eap74-openjdk8-openshift:7.4.0 \\ 3 -p EAP_RUNTIME_IMAGE_NAME=jboss-eap74-openjdk8-runtime-openshift:7.4.0 \\ 4 -p SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/jboss-eap-quickstarts \\ 5 -p SOURCE_REPOSITORY_REF=7.4.x \\ 6 -p CONTEXT_DIR=kitchensink 7",
"new-app --template=eap74-basic-s2i \\ 1 -p IMAGE_STREAM_NAMESPACE=eap-demo \\ 2 -p EAP_IMAGE_NAME=jboss-eap74-openjdk11-openshift:7.4.0 \\ 3 -p EAP_RUNTIME_IMAGE_NAME=jboss-eap74-openjdk11-runtime-openshift:7.4.0 \\ 4 -p SOURCE_REPOSITORY_URL=https://github.com/jboss-developer/jboss-eap-quickstarts \\ 5 -p SOURCE_REPOSITORY_REF=7.4.x \\ 6 -p GALLEON_PROVISION_LAYERS=jaxrs-server \\ 7 -p CONTEXT_DIR=helloworld-html5 8",
"oc get bc -o name",
"oc logs -f buildconfig/ BUILD_CONFIG_NAME",
"oc logs -f buildconfig/eap-app",
"oc get service",
"oc expose service/ eap-app --port= 8080",
"oc get route",
"oc scale deploymentconfig DEPLOYMENTCONFIG_NAME --replicas=3",
"oc scale deploymentconfig eap-app --replicas=3"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_with_jboss_eap_for_openshift_container_platform/build_run_java_app_s2i |
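As a quick way to confirm the kitchensink deployment described above, the template parameters can be inspected and the exposed route tested from the command line. This is an illustrative sketch rather than part of the official procedure; the template, project, and route names (eap74-basic-s2i, eap-demo, eap-app) match the examples above, but verify the actual names reported by oc in your project.

# List the parameters and default values of the template used above (assumes it was imported into eap-demo)
oc describe template eap74-basic-s2i -n eap-demo

# Retrieve the route host and probe the application context (route name eap-app is assumed)
ROUTE_HOST=$(oc get route eap-app -o jsonpath='{.spec.host}')
curl -I "http://${ROUTE_HOST}/kitchensink/"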
Chapter 8. Miscellaneous changes | Chapter 8. Miscellaneous changes This section provides an overview of the various miscellaneous changes happening in this release. 8.1. Changes to delivery of JBoss EAP Natives and Apache HTTP Server JBoss EAP 8.0 natives are delivered differently in this release than in JBoss EAP 6. Some components are now included in the Red Hat JBoss Core Services product, which is a set of supplementary software that is common to many of the Red Hat JBoss middleware products. The new product allows for faster distribution of updates and a more consistent update experience. The JBoss Core Services product is available for download in a dedicated location on the Red Hat Customer Portal. The following table lists the differences in the delivery methods between the releases. Package JBoss EAP 6 JBoss EAP 8.0 AIO Natives for Messaging Delivered with the product in a separate "Native Utilities" download Included within the JBoss EAP distribution. Apache HTTP Server Delivered with the product in a separate "Apache HTTP Server" download Delivered with the new JBoss Core Services product mod_cluster, mod_jk, isapi, and nsapi connectors Delivered with the product in a separate "Webserver Connector Natives" download Delivered with the new JBoss Core Services product JSVC Delivered with the product in a separate "Native Utilities" download Delivered with the new JBoss Core Services product OpenSSL Delivered with the product in a separate "Native Utilities" download Delivered with the new JBoss Core Services product tcnatives Delivered with the product in a separate "Native Components" download Support for tcnatives was removed in JBoss EAP 7 Additional changes for JBoss EAP Natives and Apache HTTP Server You should also be aware of the following changes: Support was dropped for mod_cluster and mod_jk connectors used with Apache HTTP Server from Red Hat Enterprise Linux RPM channels. If you run Apache HTTP Server from Red Hat Enterprise Linux RPM channels and need to configure load balancing for JBoss EAP 8.0 servers, you can do one of the following: Use the Apache HTTP Server provided by JBoss Core Services. You can configure JBoss EAP 8.0 to act as a front-end load balancer. For more information, see Configuring JBoss EAP as a Front-end Load Balancer in the JBoss EAP 7.4 Configuration Guide . You can deploy Apache HTTP Server on a machine that is supported and certified and then run the load balancer on that machine. For the list of supported configurations, see Overview of HTTP Connectors in the JBoss EAP 7.4 Configuration Guide . You can find more information about JBoss Core Services in the Apache HTTP Server Installation Guide . 8.2. Changes to deployments on Amazon EC2 Several changes have been made to the Amazon Machine Images (AMI) in JBoss EAP 7. This section briefly summarizes some of those changes. The way you start non-clustered and clustered JBoss EAP instances and domains in Amazon EC2 has changed significantly. In JBoss EAP 6, configuration depended on the User Data: field. 
In JBoss EAP 7, the AMI scripts that parsed the configuration in the User Data: field and started the servers automatically on instance startup have been removed. Red Hat JBoss Operations Network agent was installed in the JBoss EAP 6. Starting with JBoss EAP 7.0, you must install it separately. For details on deploying JBoss EAP 7 on Amazon EC2, see Deploying JBoss EAP on Amazon Web Services . 8.3. Remove applications that include shared modules Changes introduced in the JBoss EAP 7.1 server and the Maven plug-in can result in the following failure when you attempt to remove your application. This error can occur if your application contains modules that interact with or depend on each other. For example, assume you have an application that contains two Maven WAR project modules, application-A and application-B , that share data managed by the data-sharing module. When you deploy this application, you must deploy the shared data-sharing module first, and then deploy the modules that depend on it. The deployment order is specified in the <modules> element of the parent pom.xml file. This is true in JBoss EAP 6.4 through JBoss EAP 8.0. In releases prior to JBoss EAP 7.1, you could undeploy all of the archives for this application from the root of the parent project using the following command. In JBoss EAP 7.1 and later, you must first undeploy the archives that use the shared modules, and then undeploy the shared modules. Since there is no way to specify the order of undeployment in the project pom.xml file, you must undeploy the modules manually. You can accomplish this by running the following commands from the root of the parent directory. This updated undeploy behavior is more accurate and ensures that you do not end up in an unstable deployment state. 8.4. Changes to the add-user script The add-user script behavior has changed in JBoss EAP 7 due to a change in password policy. JBoss EAP 6 had a strict password policy. As a result, the add-user script rejected weak passwords that did not satisfy the minimum requirements. Starting with JBoss EAP 7, weak passwords are accepted and a warning is issued. For more information, see Setting Add-User Utility Password Restrictions in the JBoss EAP 7.4 Configuration Guide . 8.5. Removal of OSGi support When JBoss EAP 6.0 GA was first released, JBoss OSGi, an implementation of the OSGi specification, was included as a Technology Preview feature. With the release of JBoss EAP 6.1.0, JBoss OSGi was demoted from Technology Preview to Unsupported. In JBoss EAP 6.1.0, the configadmin and osgi extension modules and subsystem configuration for a standalone server were moved to a separate EAP_HOME /standalone/configuration/standalone-osgi.xml configuration file. Because you should not migrate this unsupported configuration file, the removal of JBoss OSGi support should not impact the migration of a standalone server configuration. If you modified any of the other standalone configuration files to configure osgi or configadmin , those configurations must be removed. For a managed domain, the osgi extension and subsystem configuration were removed from the EAP_HOME /domain/configuration/domain.xml file in the JBoss EAP 6.1.0 release. However, the configadmin module extension and subsystem configuration remain in the EAP_HOME /domain/configuration/domain.xml file. Starting with JBoss EAP 7, this configuration is no longer supported and must be removed. 8.6. 
Changes in SOAP with Attachments API for Java Update the user-defined SOAP handlers to comply with the SAAJ 3.0 specification when migrating to JBoss EAP 8.0. Additional resources Jakarta Soap with Attachments | [
"WFLYCTL0184: New missing/unsatisfied dependencies",
"mvn wildfly:undeploy",
"mvn wildfly:undeploy -pl application-A,application-B mvn wildfly:undeploy -pl data-shared"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/migration_guide/miscellaneous-changes |
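When undeploying the shared-module example above in the required order, it can help to confirm which archives are currently deployed before and after each undeploy step. The following check is a sketch using the standard management CLI; it is not part of the original migration steps, and EAP_HOME is a placeholder for the server installation directory.

# List the deployments known to the running server
$EAP_HOME/bin/jboss-cli.sh --connect --command=deployment-info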
Chapter 3. MTA 7.1.0 | Chapter 3. MTA 7.1.0 3.1. New features This section provides the new features and improvements of the Migration Toolkit for Applications (MTA) 7.1.0. Support for analyzing applications managed with Gradle has been added to MTA In earlier releases of Migration Toolkit for Applications (MTA), you could use MTA to analyze Java applications managed only with Maven. With this update, MTA can also extract dependencies from Gradle projects. As a result, you can now analyze applications that use Gradle instead of Maven. Important Support for analyzing applications managed with Gradle is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA. Support for analyzing .NET applications has been added to MTA In Migration Toolkit for Applications (MTA) 7.1.0, you can use MTA to analyze Windows-only .NET framework applications to aid migration from version 4.5 or later to multi-platform .NET 8.0 running on OpenShift Container Platform. This feature is only available in the command-line interface (CLI). Important Support for analyzing .NET applications is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA. Support for languages other than Java In Migration Toolkit for Applications (MTA) 7.1.0, you can use MTA to analyze .NET applications written in languages other than Java. To run analysis on .NET applications written in languages other than Java, add a custom rule set and do not specify a target language. Important Support for languages other than Java is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA. 
Assessment and review statuses are now displayed in the MTA UI In earlier releases of Migration Toolkit for Applications (MTA), the status of the assessment and the review was only displayed for applications. With this update, the status of the archetype assessment and review processes is displayed in the MTA user interface (UI): The assessment statuses are the following: Completed: All required assessments completed. InProgress: The assessment process is in progress. NotStarted: The assessment process has not been started. The review statuses are the following: Completed: A review exists. NotStarted: The review process has not been started. New Insights feature has been added Tagging rules that earlier generated tags and showed the presence of technology also generate Insights now and show the location of code. While Insights do not impact the migration, they contain useful information about the technologies used in the application and their usage in the code. Insights do not have an effort and category assigned but might have a message and tag. You can view Insights in the Static report under the Insights tab. Important Insights is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . New MTA CLI options are available to select language providers when analyzing multi-language applications In earlier releases of Migration Toolkit for Applications (MTA), you could not select specific language providers to run separately for the analysis of a multi-language application. With this update, you can use the new --provider MTA command-line interface (CLI) option to explicitly set which language provider to run. The following CLI options are also available with this update: --list-providers to list language providers supported for the analysis. --override-provider-settings to override an existing supported language provider or to run your own unsupported provider. Note You can now also configure supported language provider options in a provider's configuration file. A new Task Manager page is now available in the MTA UI In earlier releases of Migration Toolkit for Applications (MTA), tasks that were being performed and the queue of pending tasks were not displayed in the MTA user interface (UI). With this update, a new Task Manager page is available to view the following information about the tasks that are queued: ID : The ID of the task. Application : The application name associated with the task. Status : The status of the task, for example, Scheduled , Pending , In progress , Succeeded , or Failed . Kind : The type of the task, for example, analyzer or discovery . Priority : Priority of the task. The value is from zero to any positive integer. The higher the value in this column, the higher the priority of the task. Preemption : It allows the scheduler to cancel a running task and free the resources for higher priority tasks. The values are true or false . Created By : The name of the user who created the task. 
Multiple applications can now be selected to filter the applications list in Application Inventory In earlier releases of Migration Toolkit for Applications (MTA), you could select only one application to filter the results on the Application Inventory page. With this update, you can select multiple applications as a single filter to display the list of applications corresponding to this filter. Support for providing a single report when analyzing multiple applications on the CLI mta-cli was designed to analyze a single application and produce a report about that application. With this update, you can use the --bulk option of the analyze command to analyze multiple applications, one analyze command per application, but with a common output file for all of the reports. As described in the CLI Guide, this results in mta-cli generating a single analysis report for all the applications, instead of generating a separate report for each application. Important Support for providing a single report when analyzing multiple applications on the CLI is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA. New detailed reports generated during application analysis A new feature has been introduced that provides a more detailed analysis of your application. Two additional log reports, issues.yaml and deps.yaml , are now available for viewing and downloading. These reports contain details about unmatched rules. To enable the system to generate these reports, select the Enable enhanced analysis details checkbox in the Advanced options window during application analysis. 3.2. Upgrade notes This section provides upgrade notes for the Migration Toolkit for Applications (MTA) 7.1.0. PostgreSQL is migrated to version 15 In the Migration Toolkit for Applications (MTA) 7.1.0, the postgresql container image has been migrated to a new postgresql-15 version to be compatible with support. Note that this migration occurs during the upgrade to 7.1.0 and might take some time. The upgrade is considered to be completed when the status of the Tackle CR is Successful. To check the status of the Tackle, enter: 3.3. Known issues This section provides highlighted known issues in Migration Toolkit for Applications (MTA) version 7.1.0. Enabling Preemption does not work if authentication is enabled In Migration Toolkit for Applications (MTA) 7.1.0, Preemption cannot be enabled for tasks in the Task Manager page and the Task Manager drawer if MTA authentication is enabled. To work around this issue, disable authentication. ( MTA-3195 ) Dependency rule of a custom rules file in analysis is not fired A custom dependency rule is not fired, and no related migration issue is found. (MTA-3863) No multi-user access restrictions on resources There are no multi-user access restrictions on resources. For example, an analyzer task created by a user can be canceled by any other user. (MTA-3819) | [
"oc wait --for=condition=Successful --namespace=openshift-mta tackles.tackle.konveyor.io/tackle tackle.tackle.konveyor.io/tackle condition met"
] | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/release_notes/mta-7-1-0 |
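The new CLI options described in section 3.1 can be combined on the command line. The invocations below are only a sketch: the option names (--bulk, --provider, --list-providers, --input, --output) come from the release notes above, while the input paths, output directories, and provider name are placeholder values, and the exact placement of --list-providers may differ in your CLI version.

# List the language providers supported for analysis
mta-cli analyze --list-providers

# Analyze two applications into one common report directory (Developer Preview --bulk option)
mta-cli analyze --bulk --input /path/to/app-one --output /tmp/mta-report
mta-cli analyze --bulk --input /path/to/app-two --output /tmp/mta-report

# Explicitly select a language provider for a multi-language application (provider name is an example)
mta-cli analyze --provider java --input /path/to/multi-lang-app --output /tmp/mta-report-java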
Chapter 11. Certified Cloud and Service Provider certification workflow | Chapter 11. Certified Cloud and Service Provider certification workflow The Certified Cloud Provider Agreement requires that Red Hat certifies the images (templates) from which tenant instances are created to ensure a fully supported configuration for end customers. There are two methods for certifying the images for Red Hat Enterprise Linux. The preferred method is to use the Certified Cloud and Service Provider (CCSP) image certification workflow. After certifications have been reviewed by Red Hat, a pass/fail will be assigned and certification will be posted to the public Red Hat certification website at Red Hat Ecosystem Catalog . 11.1. Additional resources Red Hat Certified Cloud and Service Provider Certification Workflow Guide Product Documentation for Red Hat Certified Cloud and Service Provider Certification 7.34 | null | https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/configuring_and_managing_red_hat_update_infrastructure/assembly_cmg-certified-ccsp-certification-workflow |
22.3. Online Resources | 22.3. Online Resources For z/VM publications, refer to http://www.vm.ibm.com/library/ . For IBM Z I/O connectivity information, refer to http://www.ibm.com/systems/z/hardware/connectivity/index.html . For IBM Z cryptographic coprocessor information, refer to http://www.ibm.com/security/cryptocards/ . For IBM Z DASD storage information, refer to http://www-01.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.lgdd/lgdd_t_dasd_wrk.html . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-additional-references-online-s390 |
Chapter 117. Saga | Chapter 117. Saga Only producer is supported The Saga component provides a bridge to execute custom actions within a route using the Saga EIP. The component should be used for advanced tasks, such as deciding to complete or compensate a Saga with completionMode set to MANUAL . Refer to the Saga EIP documentation for help on using sagas in common scenarios. 117.1. Dependencies When using saga with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-saga-starter</artifactId> </dependency> 117.2. URI format 117.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 117.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 117.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 117.4. Component Options The Saga component supports 2 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 117.5. Endpoint Options The Saga endpoint is configured using URI syntax: with the following path and query parameters: 117.5.1. Path Parameters (1 parameters) Name Description Default Type action (producer) Required Action to execute (complete or compensate). Enum values: COMPLETE COMPENSATE SagaEndpointAction 117.5.2. Query Parameters (1 parameters) Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 117.6. Using camel-saga with Spring Boot and LRA coordinator This example shows how to work with Apache Camel Saga using Spring Boot and Narayana LRA Coordinator to manage long running actions. See Saga example for more information. 117.7. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.saga.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.saga.enabled Whether to enable auto configuration of the saga component. This is enabled by default. Boolean camel.component.saga.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-saga-starter</artifactId> </dependency>",
"saga:action",
"saga:action"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-saga-component-starter |
Chapter 5. PodSecurityPolicySelfSubjectReview [security.openshift.io/v1] | Chapter 5. PodSecurityPolicySelfSubjectReview [security.openshift.io/v1] Description PodSecurityPolicySelfSubjectReview checks whether this user/SA tuple can create the PodTemplateSpec Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds spec object PodSecurityPolicySelfSubjectReviewSpec contains specification for PodSecurityPolicySelfSubjectReview. status object PodSecurityPolicySubjectReviewStatus contains information/status for PodSecurityPolicySubjectReview. 5.1.1. .spec Description PodSecurityPolicySelfSubjectReviewSpec contains specification for PodSecurityPolicySelfSubjectReview. Type object Required template Property Type Description template PodTemplateSpec template is the PodTemplateSpec to check. 5.1.2. .status Description PodSecurityPolicySubjectReviewStatus contains information/status for PodSecurityPolicySubjectReview. Type object Property Type Description allowedBy ObjectReference allowedBy is a reference to the rule that allows the PodTemplateSpec. A rule can be a SecurityContextConstraint or a PodSecurityPolicy A nil , indicates that it was denied. reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. template PodTemplateSpec template is the PodTemplateSpec after the defaulting is applied. 5.2. API endpoints The following API endpoints are available: /apis/security.openshift.io/v1/namespaces/{namespace}/podsecuritypolicyselfsubjectreviews POST : create a PodSecurityPolicySelfSubjectReview 5.2.1. /apis/security.openshift.io/v1/namespaces/{namespace}/podsecuritypolicyselfsubjectreviews Table 5.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a PodSecurityPolicySelfSubjectReview Table 5.2. Body parameters Parameter Type Description body PodSecurityPolicySelfSubjectReview schema Table 5.3. HTTP responses HTTP code Response body 200 - OK PodSecurityPolicySelfSubjectReview schema 201 - Created PodSecurityPolicySelfSubjectReview schema 202 - Accepted PodSecurityPolicySelfSubjectReview schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/security_apis/podsecuritypolicyselfsubjectreview-security-openshift-io-v1
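The POST endpoint documented above can be driven from the oc client by creating a review object inline. The following request is an illustrative sketch only; the namespace, container name, and image are placeholder values, and the server is expected to return the same object with a populated status stanza describing whether the current user can create the PodTemplateSpec.

# Submit a PodSecurityPolicySelfSubjectReview for the current user in the my-project namespace
oc create -n my-project -o yaml -f - <<'EOF'
apiVersion: security.openshift.io/v1
kind: PodSecurityPolicySelfSubjectReview
spec:
  template:
    spec:
      containers:
      - name: example
        image: registry.example.com/example-app:latest
EOF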
Chapter 2. Fixed issues | Chapter 2. Fixed issues 2.1. AMQ JMS ENTMQCL-2681 - Wait() sometimes blocks forever when closing producers In earlier releases of the product, the producer wait() operation could block indefinitely when a message send failed. In this release, the wait() operation completes as expected. ENTMQCL-2784 - Do not de-duplicate failover URIs based on name resolution In earlier releases of the product, the client performed a DNS resolution step before removing duplicates in the failover list. This caused problems for servers running behind a proxy. In this release, the client removes duplicates using the names as given in the failover list. For a complete list of issues that have been fixed in the release, see AMQ Clients 2.10.x Resolved Issues . 2.2. AMQ C++ ENTMQCL-2583 - Example build fails using CMake 2.8 In earlier releases of the product, the examples failed to build when using CMake 2.8. In this release, the examples build as expected. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/amq_clients_2.10_release_notes/fixed_issues |
Chapter 1. Getting Started | Chapter 1. Getting Started You can install Red Hat Enterprise Linux with an installation utility called Anaconda . Most users can simply follow the procedure outlined in Section 4.1, "Interactive Installation" to install Red Hat Enterprise Linux using the graphical interface in Anaconda . Users with advanced requirements can also use the graphical interface to configure many aspects of the installation, and install Red Hat Enterprise Linux on a wide variety of systems. On systems without a local interface, installation can be accessed entirely remotely. Installation can also be automated by using a Kickstart file, and performed with no interaction at all. 1.1. Graphical Installation The Red Hat Enterprise Linux installer, Anaconda , provides a simple graphical method to install Red Hat Enterprise Linux. The graphical installation interface has a built-in help system which can guide you through most installations, even if you have never installed Linux before. However, Anaconda can also be used to configure advanced installation options if required. Anaconda is different from most other operating system installation programs due to its parallel nature. Most installers follow a linear path; you must choose your language first, then you configure networking, and so on. There is usually only one way to proceed at any given time. In the graphical interface in Anaconda you are at first only required to select your language and locale, and then you are presented with a central screen, where you can configure most aspects of the installation in any order you like. While certain parts require others to be completed before configuration - for example, when installing from a network location, you must configure networking before you can select which packages to install - most options in Anaconda can be configured in any order. If a background task, such as network initialization or disk detection, is blocking configuration of a certain option, you can configure unrelated options while waiting for it to complete. Additional differences appear in certain screens; notably the custom partition process is very different from other Linux distributions. These differences are described in each screen's subsection. Some screens will be automatically configured depending on your hardware and the type of media you used to start the installation. You can still change the detected settings in any screen. Screens which have not been automatically configured, and therefore require your attention before you begin the installation, are marked by an exclamation mark. You cannot start the actual installation process before you finish configuring these settings. Installation can also be performed in text mode, however certain options, notably including custom partitioning, are unavailable. See Section 8.3, "Installing in Text Mode" , or if using an IBM Power system or IBM Z, see Section 13.3, "Installing in Text Mode" , or Section 18.4, "Installing in Text Mode" , respectively, for more information. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-getting-started |
14.13.8. Changing the Memory Allocation for the Domain | 14.13.8. Changing the Memory Allocation for the Domain The virsh setmaxmem domain size --config --live --current command sets the maximum memory allocation for a guest virtual machine as shown: The size that can be given for the maximum memory is a scaled integer that by default is expressed in kibibytes, unless a supported suffix is provided. The following options can be used with this command: --config - takes effect at the next boot --live - controls the memory of the running domain, provided the hypervisor supports this action, as not all hypervisors allow live changes of the maximum memory limit. --current - controls the memory on the current domain | [
"virsh setmaxmem rhel6 1024 --current"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-displaying_per_guest_virtual_machine_information-changing_the_memory_allocation_for_the_domain |
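A minimal sketch of persisting a new maximum with the command described above, assuming a guest named rhel6 and a 2 GiB target; both the guest name and the size are illustrative:
    # raise the maximum memory allocation to 2 GiB and keep it across reboots
    virsh setmaxmem rhel6 2G --config
    # confirm the configured limit
    virsh dominfo rhel6 | grep 'Max memory'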
Chapter 2. Planning an upgrade | Chapter 2. Planning an upgrade An in-place upgrade is the recommended and supported way to upgrade your SAP HANA system to the major version of RHEL. You should consider the following before upgrading to RHEL 9: Operating system: SAP HANA is installed with a version which is supported on both the source and target RHEL minor versions. SAP HANA is installed using the default installation path of /hana/shared. Public clouds: The in-place upgrade is supported for on-demand Pay-As-You-Go (PAYG) instances on Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform with Red Hat Update Infrastructure (RHUI) . The in-place upgrade is also supported for Bring Your Own Subscription instances on all public clouds that use Red Hat Subscription Manager (RHSM) for a RHEL subscription. Additional Information: SAP HANA hosts must meet all of the following criteria: Running with e4s repositories on RHEL 8.8 or normal repositories on RHEL 8.10 Running on x86_64 or ppc64le systems which are certified by the hardware partner or CCSP for SAP HANA on the source and target OS versions Running on physical infrastructure or in a virtual environment Not using Red Hat HA Solutions for SAP HANA Using the Red Hat Enterprise Linux for SAP Solutions subscription SAP NetWeaver hosts must meet the following criteria: Using the Red Hat Enterprise Linux for SAP Solutions or Red Hat Enterprise Linux for SAP Applications subscription Note RHEL 8.10 is the final RHEL 8 release. There are no EUS or E4S repositories available in RHEL 8.10. RHEL 8.10 maintenance is defined by the Maintenance Support Phase policy. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/upgrading_sap_environments_from_rhel_8_to_rhel_9/asmb_planning-upgrade_asmb_supported-upgrade-paths |
Chapter 17. Servers and Services | Chapter 17. Servers and Services rear rebased to version 2.4 The rear packages that provide the Relax-and-Recover tool (ReaR) have been upgraded to upstream version 2.4, which provides a number of bug fixes and enhancements over the version. Notably: The default behavior when resizing partitions in migration mode has been changed. Only the size of the last partition is now changed by default; the start positions of every partition are preserved. If the behavior is needed, set the AUTORESIZE_PARTITIONS configuration variable to yes . See the description of the configuration variables AUTORESIZE_PARTITIONS , AUTORESIZE_EXCLUDE_PARTITIONS , AUTOSHRINK_DISK_SIZE_LIMIT_PERCENTAGE , and AUTOINCREASE_DISK_SIZE_THRESHOLD_PERCENTAGE in the /usr/share/rear/conf/default.conf file for more information on how to control the partition resizing. The network setup now supports teaming (with the exception of Link Aggregation Control Protocol - LACP), bridges, bonding, and VLANs. Support for Tivoli Storage Manager (TSM) has been improved. In particular, support for the password store in the TSM client versions 8.1.2 and later has been added, fixing the bug where the generated ISO image did not support restoring the OS if those TSM versions were used for backup. Support for partition names containing blank and slash characters has been fixed. SSH secrets (private keys) are no longer copied to the recovery system, which prevents their leaking. As a consequence, SSH in the recovery system cannot use the secret keys from the original system. See the description of the SSH_FILES , SSH_ROOT_PASSWORD , and SSH_UNPROTECTED_PRIVATE_KEYS variables in the /usr/share/rear/conf/default.conf file for more information on controlling this behavior. Numerous improvements to support of the IBM POWER Systems architecture have been added, such as support for including the backup in the rescue ISO image and for multiple ISOs. Multipath support has been enhanced. For example, support for software RAID on multipath devices has been added. Support for secure boot has been added. The SECURE_BOOT_BOOTLOADER variable can be used for specifying any custom-signed boot loader. Support for restoring disk layout of software RAID devices with missing components has been added. The standard error and standard output channels of programs invoked by ReaR are redirected to the log file instead of appearing on the terminal. Programs prompting for user input on the standard output or standard error channel will not work correctly. Their standard output channel should be redirected to file descriptor 7 and standard input channel from file descriptor 6 . See the Coding Style documentation on the ReaR wiki for more details. Support for recovery of systems with LVM thin pool and thin volumes has been added. (BZ# 1496518 , BZ# 1484051 , BZ# 1534646 , BZ#1498828, BZ# 1571266 , BZ# 1539063 , BZ# 1464353 , BZ# 1536023 ) The rear package now includes a user guide This update adds the user guide into the rear package, which provides the Relax-and-Recover tool (ReaR). After installation of rear , you can find the user guide in the /usr/share/doc/rear-2.4/relax-and-recover-user-guide.html file. (BZ# 1418459 ) The pcsc-lite interface now supports up to 32 devices In Red Hat Enterprise Linux 7.6, the number of devices the pcsc-lite smart card interface supports has been increased from 16 to 32. 
(BZ#1516993) tuned rebased to version 2.10.0 The tuned packages have been rebased to upstream version 2.10.0, which provides a number of bug fixes and enhancements over the previous version. Notable changes include: an added mssql profile (shipped in a separate tuned-profiles-mssql subpackage) the tuned-adm tool now displays a relevant log snippet in case of error fixed verification of a CPU mask on systems with more than 32 cores (BZ# 1546598 ) The STOU FTP command has an improved algorithm for generating unique file names The STOU FTP command allows transferring files to the server and storing them with unique file names. Previously, the STOU command created the names of the files by taking the file name, supplied as an argument to the command, and adding a numerical suffix and incrementing the suffix by one. In some cases, this led to a race condition. Consequently, scripts which used STOU to upload files with the same file name could fail. This update modifies STOU to create unique file names in a way which helps to avoid the race condition and improves the functioning of scripts that use STOU . To enable the improved algorithm for generating unique file names using STOU , enable the better_stou option in the configuration file (usually /etc/vsftpd/vsftpd.conf ) by adding the following line: better_stou=YES (BZ#1479237) rsyslog imfile now supports symlinks With this update, the rsyslog imfile module delivers better performance and more configuration options. This makes it possible to use the module for more complicated file monitoring use cases. Users of rsyslog are now able to use file monitors with glob patterns anywhere along the configured path and rotate symlink targets with increased data throughput when compared to the previous version. (BZ# 1531295 ) New rsyslog module: omkafka To enable Kafka centralized data storage scenarios, you can now forward logs to the Kafka infrastructure using the new omkafka module. (BZ#1482819) New rsyslog module: mmkubernetes To enable scenarios using rsyslog in favor of other log collectors and where Kubernetes container metadata are required, a new mmkubernetes module has been added to Red Hat Enterprise Linux. (BZ# 1539193 ) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/new_features_servers_and_services
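The omkafka note above names the module but shows no configuration; a minimal rsyslog sketch follows, assuming a broker at kafka1.example.com:9092 and a topic named rsyslog-logs (both illustrative):
    # /etc/rsyslog.d/kafka.conf - load the Kafka output module and forward all messages to one topic
    module(load="omkafka")
    action(type="omkafka" broker=["kafka1.example.com:9092"] topic="rsyslog-logs")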
Release Notes for Streams for Apache Kafka 2.9 on RHEL | Release Notes for Streams for Apache Kafka 2.9 on RHEL Red Hat Streams for Apache Kafka 2.9 Highlights of what's new and what's changed with this release of Streams for Apache Kafka on Red Hat Enterprise Linux | [
"config.providers.env.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html-single/release_notes_for_streams_for_apache_kafka_2.9_on_rhel/index |
14.3.8. Revoking an SSH CA Certificate | 14.3.8. Revoking an SSH CA Certificate If a certificate is stolen, it should be revoked. Although OpenSSH does not provide a mechanism to distribute the revocation list, it is still easier to create the revocation list and distribute it by other means than to change the CA keys and all host and user certificates previously created and distributed. Keys can be revoked by adding them to the revoked_keys file and specifying the file name in the sshd_config file as follows: RevokedKeys /etc/ssh/revoked_keys Note that if this file is not readable, then public key authentication will be refused for all users. To test if a key has been revoked, query the revocation list for the presence of the key. Use a command as follows: ssh-keygen -Qf /etc/ssh/revoked_keys ~/.ssh/id_rsa.pub A user can revoke a CA certificate by changing the cert-authority directive to revoke in the known_hosts file. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-revoking_an_ssh_ca_certificate
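A minimal sketch of revoking a single user key with the mechanism described above, assuming the compromised public key is available as stolen_key.pub (an illustrative name) and that sshd_config already contains the RevokedKeys directive shown:
    # append the compromised public key to the revocation list
    cat stolen_key.pub >> /etc/ssh/revoked_keys
    # verify that the key is now reported as revoked
    ssh-keygen -Qf /etc/ssh/revoked_keys stolen_key.pub
    # reload sshd so the change takes effect
    service sshd reload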
Chapter 5. Managing DHCP by using Capsule | Chapter 5. Managing DHCP by using Capsule Satellite can integrate with a DHCP service by using your Capsule. A Capsule has multiple DHCP providers that you can use to integrate Satellite with your existing DHCP infrastructure or deploy a new one. You can use the DHCP module of Capsule to query for available IP addresses, add new, and delete existing reservations. Note that your Capsule cannot manage subnet declarations. Available DHCP providers dhcp_infoblox - For more information, see Chapter 7, Using Infoblox as DHCP and DNS providers . dhcp_isc - ISC DHCP server over OMAPI. For more information, see Section 3.6, "Configuring DNS, DHCP, and TFTP on Capsule Server" . dhcp_remote_isc - ISC DHCP server over OMAPI with leases mounted through networking. For more information, see Section 4.2, "Configuring Capsule Server with external DHCP" . 5.1. Securing the dhcpd API Capsule interacts with DHCP daemon using the dhcpd API to manage DHCP. By default, the dhcpd API listens to any host without access control. You can add an omapi_key to provide basic security. Procedure On your Capsule, install the required packages: Generate a key: Use satellite-installer to secure the dhcpd API: | [
"satellite-maintain packages install bind-utils",
"dnssec-keygen -r /dev/urandom -a HMAC-MD5 -b 512 -n HOST omapi_key cat Komapi_key.+*.private | grep ^Key|cut -d ' ' -f2-",
"satellite-installer --foreman-proxy-dhcp-key-name \" My_Name \" --foreman-proxy-dhcp-key-secret \" My_Secret \""
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_capsule_server/managing-dhcp-by-using-capsule |
Chapter 12. Socket Tapset | Chapter 12. Socket Tapset This family of probe points is used to probe socket activities. It contains the following probe points: | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/socket.stp |
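As a sketch of how these probe points are typically used, the following one-liner traces socket traffic; it assumes the name and size convenience variables that the socket tapset exposes in its send and receive probes:
    # report every socket send/receive with the process name, probe name, and byte count
    stap -e 'probe socket.send, socket.receive { printf("%s: %s %d bytes\n", execname(), name, size) }'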
33.6. Managing DNS Forwarding | 33.6. Managing DNS Forwarding DNS forwarding affects how DNS queries are answered. By default, the BIND service integrated with IdM is configured to act as both an authoritative and recursive DNS server. When a DNS client queries a name belonging to a DNS zone for which the IdM server is authoritative, BIND replies with data contained in the configured zone. Authoritative data always takes precedence over any other data. When a DNS client queries a name for which the IdM server is not authoritative, BIND attempts to resolve the query using other DNS servers. If no forwarders are defined, BIND asks the root servers on the Internet and uses recursive resolution algorithm to answer the DNS query. In some cases, it is not desirable to let BIND contact other DNS servers directly and perform the recursion based on data available on the Internet. These cases include: Split DNS configuration, also known as DNS views configuration, where DNS servers return different answers to different clients. Split DNS configuration is typical for environments where some DNS names are available inside the company network, but not from the outside. Configurations where a firewall restricts access to DNS on the Internet. Configurations with centralized filtering or logging on the DNS level. Configurations with forwarding to a local DNS cache, which helps optimize network traffic. In such configurations, BIND does not use full recursion on the public Internet. Instead, it uses another DNS server, a so-called forwarder , to resolve the query. When BIND is configured to use a forwarder, queries and answers are forwarded back and forth between the IdM server and the forwarder, and the IdM server acts as the DNS cache for non-authoritative data. Forward Policies IdM supports the first and only standard BIND forward policies, as well as the none IdM-specific forward policy. Forward first (default) DNS queries are forwarded to the configured forwarder. If a query fails because of a server error or timeout, BIND falls back to the recursive resolution using servers on the Internet. The forward first policy is the default policy. It is suitable for traffic optimization. Forward only DNS queries are forwarded to the configured forwarder. If a query fails because of a server error or timeout, BIND returns an error to the client. The forward only policy is recommended for environments with split DNS configuration. None: Forwarding disabled DNS queries are not forwarded. Disabling forwarding is only useful as a zone-specific override for global forwarding configuration. This options is the IdM equivalent of specifying an empty list of forwarders in BIND configuration. Forwarding Does Not Combine Data from IdM and Other DNS Servers Forwarding cannot be used to combine data in IdM with data from other DNS servers. You can only forward queries for specific subzones of the master zone in IdM DNS: see the section called "Zone Delegation in IdM DNS Master Zone" . By default, the BIND service does not forward queries to another server if the queried DNS name belongs to a zone for which the IdM server is authoritative. In such a situation, if the queried DNS name cannot be found in the IdM database, the NXDOMAIN answer is returned. Forwarding is not used. Example 33.9. Example Scenario The IdM server is authoritative for the test.example. DNS zone. BIND is configured to forward queries to the DNS server with the 192.0.2.254 IP address. When a client sends a query for the nonexistent.test.example. 
DNS name, BIND detects that the IdM server is authoritative for the test.example. zone and does not forward the query to the 192.0.2.254. server. As a result, the DNS client receives the NXDomain answer, informing the user that the queried domain does not exist. Zone Delegation in IdM DNS Master Zone It is possible to forward queries for specific subzones of a master zone in IdM DNS. For example, if the IdM DNS handles the zone idm.example.com , you can delegate the authority for the sub_zone1 .idm.example.com subzone to a different DNS server. To achieve this behavior, you need to use forwarding, as described above, along with a nameserver record which delegates the subzone to a different DNS server. In the following example, sub_zone1 is the subzone, and 192.0.2.1 is the IP address of the DNS server the subzone is delegated to: Adding the forward zone then looks like this: 33.6.1. Configuring Global Forwarders Global forwarders are DNS servers used for resolving all DNS queries for which an IdM server is not authoritative, as described in Section 33.6, "Managing DNS Forwarding" . The administrator can configure IP addresses and forward policies for global forwarding in the following two ways: Using the ipa dnsconfig-mod command or the IdM web UI Configuration set using these native IdM tools is immediately applied to all IdM DNS servers. As explained in Section 33.3, "DNS Configuration Priorities" , global DNS configuration has higher priority than local configuration defined in the /etc/named.conf files. By editing the /etc/named.conf file Manually editing the /etc/named.conf on every IdM DNS server allows using a different global forwarder and policy on each of the servers. Note that the BIND service must be restarted after changing /etc/named.conf . Configuring Forwarders in the Web UI To define the DNS global configuration in the IdM web UI: Click the Network Services tab, and select the DNS subtab, followed by the DNS Global Configuration section. To add a new global forwarder, click Add and enter the IP address. To define a new forward policy, select it from the list of available policies. Figure 33.28. Editing Global DNS Configuration in the Web UI Click Save to confirm the new configuration. Configuring Forwarders from the Command Line To set a global list of forwarders from the command line, use the ipa dnsconfig-mod command. It edits the DNS global configuration by editing the LDAP data. The ipa dnsconfig-mod command and its options affect all IdM DNS servers at once and override any local configuration. For example, to edit the list of global forwarders using ipa dnsconfig-mod : 33.6.2. Configuring Forward Zones Forward zones do not contain any authoritative data and instruct the name server to only forward queries for names belonging into a particular zone to a configured forwarder. Important Do not use forward zones unless absolutely required. Limit their use to overriding global forwarding configuration. In most cases, it is sufficient to only configure global forwarding , described in Section 33.6.1, "Configuring Global Forwarders" , and forward zones are not necessary. Forward zones are a non-standard solution, and using them can lead to unexpected and problematic behavior. When creating a new DNS zone, Red Hat recommends to always use standard DNS delegation using NS records and to avoid forward zones. For information on the supported forward policies, see the section called "Forward Policies" . 
For further information about the BIND service, see the Red Hat Enterprise Linux Networking Guide , the BIND 9 Administrator Reference Manual included in the /usr/share/doc/bind- version_number / directory, or external sources [5] . Configuring Forward Zones in the Web UI To manage forward zones in the web UI, click the Network Services tab, and select the DNS subtab, followed by the DNS Forward Zones section. Figure 33.29. Managing DNS Forward Zones In the DNS Forward Zones section, the administrator can handle all required operations regarding forward zones: show current list of forward zones, add a new forward zone, delete a forward zone, display a forward zone, allow to modify forwarders and forward policy per a forward zone, and disable or enable a forward zone. Configuring Forward Zones from the Command Line To manage forward zones from the command line, use the ipa dnsforwardzone-* commands described below. Note The ipa dnsforwardzone-* commands behave consistently with the ipa dnszone-* commands used to manage master zones. The ipa dnsforwardzone-* commands accept several options; notably, the --forwarder , --forward-policy , and --name-from-ip options. For detailed information about the available options, see Table 33.1, "Zone Attributes" or run the commands with the --help option added, for example: Adding Forward Zones Use the dnsforwardzone-add command to add a new forward zone. It is required to specify at least one forwarder if the forward policy is not set to none . Modifying Forward Zones Use the dnsforwardzone-mod command to modify a forward zone. It is required to specify at least one forwarder if the forward policy is not none . Modifications can be performed in several ways. Showing Forward Zones Use the dnsforwardzone-show command to display information about a specified forward zone. Finding Forward Zones Use the dnsforwardzone-find command to locate a specified forward zone. Deleting Forward Zones Use the dnsforwardzone-del command to delete specified forward zones. Enabling and Disabling Forward Zones Use dnsforwardzone-enable and dnsforwardzone-disable commands to enable and disable forward zones. Note that forward zones are enabled by default. Adding and Removing Permissions Use dnsforwardzone-add-permission and dnsforwardzone-remove-permission commands to add or remove system permissions. [5] For more information, see the BIND 9 Configuration Reference . | [
"ipa dnsrecord-add idm.example.com. sub_zone1 --ns-rec= 192.0.2.1",
"ipa dnsforwardzone-add sub_zone1 .idm.example.com. --forwarder 192.0.2.1",
"[user@server ~]USD ipa dnsconfig-mod --forwarder=192.0.2.254 Global forwarders: 192.0.2.254",
"ipa dnsforwardzone-add --help",
"[user@server ~]USD ipa dnsforwardzone-add zone.test. --forwarder=172.16.0.1 --forwarder=172.16.0.2 --forward-policy=first Zone name: zone.test. Zone forwarders: 172.16.0.1, 172.16.0.2 Forward policy: first",
"[user@server ~]USD ipa dnsforwardzone-mod zone.test. --forwarder=172.16.0.3 Zone name: zone.test. Zone forwarders: 172.16.0.3 Forward policy: first",
"[user@server ~]USD ipa dnsforwardzone-mod zone.test. --forward-policy=only Zone name: zone.test. Zone forwarders: 172.16.0.3 Forward policy: only",
"[user@server ~]USD ipa dnsforwardzone-show zone.test. Zone name: zone.test. Zone forwarders: 172.16.0.5 Forward policy: first",
"[user@server ~]USD ipa dnsforwardzone-find zone.test. Zone name: zone.test. Zone forwarders: 172.16.0.3 Forward policy: first ---------------------------- Number of entries returned 1 ----------------------------",
"[user@server ~]USD ipa dnsforwardzone-del zone.test. ---------------------------- Deleted forward DNS zone \"zone.test.\" ----------------------------",
"[user@server ~]USD ipa dnsforwardzone-enable zone.test. ---------------------------- Enabled forward DNS zone \"zone.test.\" ----------------------------",
"[user@server ~]USD ipa dnsforwardzone-disable zone.test. ---------------------------- Disabled forward DNS zone \"zone.test.\" ----------------------------",
"[user@server ~]USD ipa dnsforwardzone-add-permission zone.test. --------------------------------------------------------- Added system permission \"Manage DNS zone zone.test.\" --------------------------------------------------------- Manage DNS zone zone.test.",
"[user@server ~]USD ipa dnsforwardzone-remove-permission zone.test. --------------------------------------------------------- Removed system permission \"Manage DNS zone zone.test.\" --------------------------------------------------------- Manage DNS zone zone.test."
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-dns-forwarding |
3.4. Saving a Configuration Change to a File | 3.4. Saving a Configuration Change to a File When using the pcs command, you can use the -f option to save a configuration change to a file without affecting the active CIB. If you have previously configured a cluster and there is already an active CIB, you use the following command to save the raw xml file. For example, the following command saves the raw xml from the CIB into a file named testfile . The following command creates a resource in the file testfile but does not add that resource to the currently running cluster configuration. You can push the current content of testfile to the CIB with the following command. | [
"pcs cluster cib filename",
"pcs cluster cib testfile",
"pcs -f testfile resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s",
"pcs cluster cib-push testfile"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-pcsfilesave-HAAR |
A.3. Non-Reserved Keywords | A.3. Non-Reserved Keywords Keyword Usage ACCESSPATTERN other constraints , non-reserved identifier ARRAYTABLE array table , non-reserved identifier AUTO_INCREMENT table element , non-reserved identifier AVG standard aggregate function , non-reserved identifier CHAIN sql exception , non-reserved identifier COLUMNS array table , non-reserved identifier , object table , text table , xml table CONTENT non-reserved identifier , xml parse , xml serialize COUNT standard aggregate function , non-reserved identifier DELIMITER non-reserved identifier , text aggregate function , text table DENSE_RANK analytic aggregate function , non-reserved identifier DISABLED alter , non-reserved identifier DOCUMENT non-reserved identifier , xml parse , xml serialize EMPTY non-reserved identifier , xml query ENABLED alter , non-reserved identifier ENCODING non-reserved identifier , text aggregate function , xml serialize EVERY standard aggregate function , non-reserved identifier EXCEPTION compound statement , declare statement , non-reserved identifier EXCLUDING non-reserved identifier , xml serialize EXTRACT function , non-reserved identifier FIRST fetch clause , non-reserved identifier , sort specification HEADER non-reserved identifier , text aggregate function , text table INCLUDING non-reserved identifier , xml serialize INDEX other constraints , table element , non-reserved identifier INSTEAD alter , create trigger , non-reserved identifier JSONARRAY_AGG non-reserved identifier , ordered aggregate function JSONOBJECT json object , non-reserved identifier KEY table element , create temporary table , foreign key , non-reserved identifier , primary key LAST non-reserved identifier , sort specification MAX standard aggregate function , non-reserved identifier MIN standard aggregate function , non-reserved identifier NAME function , non-reserved identifier , xml element NAMESPACE option namespace , non-reserved identifier fetch clause , non-reserved identifier NULLS non-reserved identifier , sort specification OBJECTTABLE non-reserved identifier , object table ORDINALITY non-reserved identifier , xml table column PASSING non-reserved identifier , object table , xml query , xml table PATH non-reserved identifier , xml table column QUERYSTRING non-reserved identifier , querystring function QUOTE non-reserved identifier , text aggregate function , text table RAISE non-reserved identifier , raise statement RANK analytic aggregate function , non-reserved identifier RESULT non-reserved identifier , procedure parameter ROW_NUMBER analytic aggregate function , non-reserved identifier SELECTOR non-reserved identifier , text table column , text table SERIAL non-reserved identifier , temporary table element SKIP non-reserved identifier , text table SQL_TSI_DAY time interval , non-reserved identifier SQL_TSI_FRAC_SECOND time interval , non-reserved identifier SQL_TSI_HOUR time interval , non-reserved identifier SQL_TSI_MINUTE time interval , non-reserved identifier SQL_TSI_MONTH time interval , non-reserved identifier SQL_TSI_QUARTER time interval , non-reserved identifier SQL_TSI_SECOND time interval , non-reserved identifier SQL_TSI_WEEK time interval , non-reserved identifier SQL_TSI_YEAR time interval , non-reserved identifier STDDEV_POP standard aggregate function , non-reserved identifier STDDEV_SAMP standard aggregate function , non-reserved identifier SUBSTRING function , non-reserved identifier SUM standard aggregate function , non-reserved identifier TEXTAGG non-reserved identifier , 
text aggregate function TEXTTABLE non-reserved identifier , text table TIMESTAMPADD function , non-reserved identifier TIMESTAMPDIFF function , non-reserved identifier TO_BYTES function , non-reserved identifier TO_CHARS function , non-reserved identifier TRIM function , non-reserved identifier , text table column VARIADIC non-reserved identifier , procedure parameter VAR_POP standard aggregate function , non-reserved identifier VAR_SAMP standard aggregate function , non-reserved identifier VERSION non-reserved identifier , xml serialize VIEW alter , alter options , create table , non-reserved identifier WELLFORMED non-reserved identifier , xml parse WIDTH non-reserved identifier , text table column XMLDECLARATION non-reserved identifier , xml serialize | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/non-reserved_keywords |
function::env_var | function::env_var Name function::env_var - Fetch environment variable from current process Synopsis Arguments name Name of the environment variable to fetch Description Returns the contents of the specified environment variable for the current process. If the variable is not set, an empty string is returned. | [
"env_var:string(name:string)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-env-var |
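A minimal sketch of calling env_var from a probe, using /bin/true only as an illustrative target process:
    # print the PATH seen by the target process, then exit
    stap -e 'probe process.begin { printf("%s PATH=%s\n", execname(), env_var("PATH")); exit() }' -c /bin/true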
3.12. UDP Unicast Traffic | 3.12. UDP Unicast Traffic As of the Red Hat Enterprise Linux 6.2 release, the nodes in a cluster can communicate with each other using the UDP Unicast transport mechanism. It is recommended, however, that you use IP multicasting for the cluster network. UDP unicast is an alternative that can be used when IP multicasting is not available. You can configure the Red Hat High-Availability Add-On to use UDP unicast by setting the cman transport="udpu" parameter in the cluster.conf configuration file. You can also specify Unicast from the Network Configuration page of the Conga user interface, as described in Section 4.5.3, "Network Configuration" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-unicast-traffic-CA |
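A minimal fragment of /etc/cluster/cluster.conf showing where the parameter described above goes; the cluster name and config_version values are illustrative:
    <cluster name="mycluster" config_version="3">
      <!-- switch cman from IP multicast to UDP unicast -->
      <cman transport="udpu"/>
    </cluster>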
Chapter 141. Hazelcast Set Component | Chapter 141. Hazelcast Set Component Available as of Camel version 2.7 The Hazelcast Set component is one of Camel Hazelcast Components which allows you to access Hazelcast distributed set. 141.1. Options The Hazelcast Set component supports 3 options, which are listed below. Name Description Default Type hazelcastInstance (advanced) The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. HazelcastInstance hazelcastMode (advanced) The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default. node String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Hazelcast Set endpoint is configured using URI syntax: with the following path and query parameters: 141.1.1. Path Parameters (1 parameters): Name Description Default Type cacheName Required The name of the cache String 141.1.2. Query Parameters (16 parameters): Name Description Default Type defaultOperation (common) To specify a default operation to use, if no operation header has been provided. HazelcastOperation hazelcastInstance (common) The hazelcast instance reference which can be used for hazelcast endpoint. HazelcastInstance hazelcastInstanceName (common) The hazelcast instance reference name which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. String reliable (common) Define if the endpoint will use a reliable Topic struct or not. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean pollingTimeout (consumer) Define the polling timeout of the Queue consumer in Poll mode 10000 long poolSize (consumer) Define the Pool size for Queue Consumer Executor 1 int queueConsumerMode (consumer) Define the Queue Consumer mode: Listen or Poll Listen HazelcastQueueConsumer Mode exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean concurrentConsumers (seda) To use concurrent consumers polling from the SEDA queue. 1 int onErrorDelay (seda) Milliseconds before consumer continues polling after an error has occurred. 1000 int pollTimeout (seda) The timeout used when consuming from the SEDA queue. When a timeout occurs, the consumer can check whether it is allowed to continue running. 
Setting a lower value allows the consumer to react more quickly upon shutdown. 1000 int transacted (seda) If set to true then the consumer runs in transaction mode, where the messages in the seda queue will only be removed if the transaction commits, which happens when the processing is complete. false boolean transferExchange (seda) If set to true the whole Exchange will be transferred. If the header or body contains non-serializable objects, they will be skipped. false boolean 141.2. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.hazelcast-set.customizer.hazelcast-instance.enabled Enable or disable the cache-manager customizer. true Boolean camel.component.hazelcast-set.customizer.hazelcast-instance.override Configure if the cache manager eventually set on the component should be overridden by the customizer. false Boolean camel.component.hazelcast-set.enabled Enable hazelcast-set component true Boolean camel.component.hazelcast-set.hazelcast-instance The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, Camel uses the default hazelcast instance from the camel-hazelcast instance. The option is a com.hazelcast.core.HazelcastInstance type. String camel.component.hazelcast-set.hazelcast-mode The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default. node String camel.component.hazelcast-set.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean | [
"hazelcast-set:cacheName"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/hazelcast-set-component |
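A minimal Spring XML route sketch for the producer side of this endpoint; the myset cache name and the ADD default operation are illustrative assumptions, so check the HazelcastOperation values available in your Camel version:
    <route>
      <from uri="direct:addToSet"/>
      <!-- each incoming message body is added to the distributed set named "myset" -->
      <to uri="hazelcast-set:myset?defaultOperation=ADD"/>
    </route>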
Chapter 80. OpenTelemetryTracing schema reference | Chapter 80. OpenTelemetryTracing schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec The type property is a discriminator that distinguishes use of the OpenTelemetryTracing type from JaegerTracing . It must have the value opentelemetry for the type OpenTelemetryTracing . Property Property type Description type string Must be opentelemetry . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-OpenTelemetryTracing-reference |
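A minimal sketch of enabling this tracing type on one of the resources listed above, for example a KafkaConnect custom resource; only the tracing stanza is shown and the rest of the spec is omitted:
    spec:
      # enables OpenTelemetry tracing for the deployment
      tracing:
        type: opentelemetry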
function::atomic_read | function::atomic_read Name function::atomic_read - Retrieves an atomic variable from kernel memory Synopsis Arguments addr pointer to atomic variable Description Safely perform the read of an atomic variable. | [
"atomic_read:long(addr:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-atomic-read |
25.3. Booleans | 25.3. Booleans SELinux is based on the least level of access required for a service to run. Services can be run in a variety of ways; therefore, you need to specify how you run your services. Use the following Booleans to set up SELinux: openshift_use_nfs Having this Boolean enabled allows installing OpenShift on an NFS share. Note Due to the continuous development of the SELinux policy, the list above might not contain all Booleans related to the service at all times. To list them, enter the following command: Enter the following command to view description of a particular Boolean: Note that the additional policycoreutils-devel package providing the sepolicy utility is required for this command to work. | [
"~]USD getsebool -a | grep service_name",
"~]USD sepolicy booleans -b boolean_name"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-openshift-booleans |
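A minimal sketch of enabling the Boolean described above; the -P flag makes the change persistent across reboots:
    # allow installing OpenShift on an NFS share, persistently
    setsebool -P openshift_use_nfs on
    # confirm the new value
    getsebool openshift_use_nfs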
Chapter 19. Authenticating the user in the desktop environment | Chapter 19. Authenticating the user in the desktop environment You can perform the following operations: Configure enterprise login options in GNOME, Enable smart card authentication, and Enable fingerprint authentication. 19.1. Using enterprise credentials to authenticate in GNOME You can use your enterprise domain credentials to access your system. This section explains how to log in using enterprise credentials in GNOME, configure enterprise credentials at the GNOME welcome screen, and add an authenticated user with enterprise credentials in GNOME. 19.1.1. Logging in with Enterprise Credentials in GNOME You can use your domain credentials to login to GNOME if your network has an Active Directory or Identity Management domain available, and you have a domain account. Prerequisites System is configured to use enterprise domain accounts. For more information, see Joining a RHEL 8 system to an IdM domain using the web console . Procedure While logging in, enter the domain user name followed by an @ sign, and then your domain name. For example, if your domain name is example.com and the user name is User , enter: Note If the machine is already configured for domain accounts, you should see a helpful hint describing the login format. 19.1.2. Configuring enterprise credentials at the GNOME welcome screen Perform the following steps to configure workstation for enterprise credentials using the welcome screen that belongs to the GNOME Initial Setup program. The initial setup runs only when you create a new user and log into that account for the first time. Procedure At the login welcome screen, choose Use Enterprise Login . Enter your domain name into the Domain field. Enter your domain account user name and password. Click . Depending on the domain configuration, a pop up prompts for the domain administrator's credentials. 19.1.3. Adding an authenticated user with enterprise credentials in GNOME This procedure helps to create a new user through the GNOME Settings application. The user is authenticated using enterprise credentials. Prerequisites Configured enterprise credentials at the GNOME welcome screen. For more information, see Configuring enterprise credentials at the GNOME welcome screen . Procedure Open the Settings window clicking icons in the top right corner of the screen. From the list of items, select Details > Users . Click Unlock and enter the administrator's password. Click Add user... Click Enterprise Login . Fill out the Domain , Username , and Password fields for your enterprise account. Click Add . Depending on the domain configuration, a pop up prompts for the domain administrator's credentials. 19.1.4. Troubleshooting enterprise login in GNOME You can use the realm utility and its various sub-commands to troubleshoot the enterprise login configuration. Procedure To see whether the machine is configured for enterprise logins, run the following command: Note Network administrators can configure and pre-join workstations to the relevant domains using the kickstart realm join command, or running realm join in an automated fashion from a script. Additional resources realm man page on your system 19.2. Enabling smart card authentication You can enable workstations to authenticate using smart cards. In order to do so, you must configure GDM to allow prompting for smart cards and configure operating system to log in using a smart card. 
You can configure GDM to allow prompting for smart card authentication in two ways: with the GUI or on the command line. 19.2.1. Configuring smart card authentication in GDM using the GUI You can enable smart card authentication using the dconf Editor GUI. The dconf Editor application helps you update configuration values in a dconf database. Prerequisites Install the dconf-editor package: Procedure Open the dconf Editor application and navigate to /org/gnome/login-screen . Turn on the enable-password-authentication option. Turn on the enable-smartcard-authentication option. Additional resources dconf-editor and dconf man pages on your system 19.2.2. Configuring smart card authentication in GDM using the command line You can use the dconf command-line utility to enable the GDM login screen to recognize smart card authentication. Procedure Create a keyfile for the GDM database in /etc/dconf/db/gdm.d/login-screen , which contains the following content: Update the system dconf databases: Additional resources dconf man page on your system 19.2.3. Enabling the smart card authentication method in the system For smart card authentication, you can use the system-config-authentication tool to configure the system to allow you to use smart cards. Thus, GDM becomes available as a valid authentication method for the graphical environment. The tool is provided by the authconfig-gtk package. Prerequisites Install the authconfig-gtk package Configure GDM for smart card authentication Additional resources For details about configuring the system to allow smart card authentication and the system-config-authentication tool, see Configuring smart cards using authselect . 19.3. Fingerprint authentication You can use the system-config-authentication tool to enable fingerprint authentication to allow users to log in using their enrolled fingerprints. The tool is provided by the authconfig-gtk package. Additional resources For more information about fingerprint authentication and the system-config-authentication tool, see Configuring user authentication using authselect . | [
"[email protected]",
"realm list",
"yum install dconf-editor",
"[org/gnome/login-screen] enable-password-authentication='false' enable-smartcard-authentication='true'",
"dconf update"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/authenticating-the-user-in-the-desktop-environment_using-the-desktop-environment-in-rhel-8 |
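The realm list troubleshooting step above reports the current state; as a sketch, joining a domain from the command line could look like the following, where ad.example.com and the admin user name are illustrative:
    # show whether the workstation is already joined to a domain
    realm list
    # join an Active Directory or IdM domain (prompts for the administrator's password)
    realm join --user=admin ad.example.com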
Appendix A. Using your subscription | Appendix A. Using your subscription Service Registry is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing your account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading ZIP and TAR files To access ZIP or TAR files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat Integration entries in the Integration and Automation category. Select the desired Service Registry product. The Software Downloads page opens. Click the Download link for your component. Revised on 2024-02-22 17:15:23 UTC | null | https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/service_registry_user_guide/using_your_subscription |
8.25. createrepo | 8.25. createrepo 8.25.1. RHBA-2013:0879 - createrepo bug fix update An updated createrepo package that fixes two bugs is now available for Red Hat Enterprise Linux 6. The createrepo package contains a utility that generates a common metadata repository from a directory of RPM packages. Bug Fixes BZ# 877301 Previously, a time-stamp check did not pass if a file did not exist. As a consequence, an empty repository was incorrectly flagged as being up to date and the "createrepo --checkts" command performed no action on an empty repository. With this update, missing file is now considered as a failure, and not a pass. The "createrepo --checkts" command now properly creates a new repository when called on an empty repository. BZ# 892657 The --basedir, --retain-old-md, and --update-md-path options were reported only in the createrepo utility help message but not in the man page. This update amends the man page and the options are now properly documented in both the help message and the man page. Users of createrepo are advised to upgrade to this updated package, which fixes these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/createrepo |
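A brief sketch of the --checkts behavior addressed by the first fix; the repository path is illustrative:
    # regenerate repository metadata only when package timestamps are newer than the existing metadata
    createrepo --update --checkts /srv/repos/custom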
17.2. Requesting Certificates through the Console | 17.2. Requesting Certificates through the Console The Certificate Setup Wizard for the CA, OCSP, KRA, and TKS automates the certificate enrollment process for subsystem certificates. The Console can create, submit, and install certificate requests and certificates for any of the certificates used by that subsystem. These certificates can be a server certificate or subsystem-specific certificate, such as a CA signing certificate or KRA transport certificate. 17.2.1. Requesting Signing Certificates Note It is important that the user generate and submit the client request from the computer that will be used later to access the subsystem because part of the request process generates a private key on the local machine. If location independence is required, use a hardware token, such as a smart card, to store the key pair and the certificate. Open the subsystem console. For example: In the Configuration tab, select System Keys and Certificates in the navigation tree. In the right panel, select the Local Certificates tab. Click Add/Renew . Select the Request a certificate radio button. Choose the signing certificate type to request. Select which type of CA will sign the request, either a root CA or a subordinate CA. Set the key-pair information and set the location to generate the keys (the token), which can be either the internal security database directory or one of the listed external tokens. To create a new certificate, you must create a new key pair. Using an existing key pair will simply renew an existing certificate. Select the message digest algorithm. Give the subject name. Either enter values for individual DN attributes to build the subject DN or enter the full string. The certificate request forms support all UTF-8 characters for the common name, organizational unit, and requester name fields. This support does not include supporting internationalized domain names. Specify the start and end dates of the validity period for the certificate and the time at which the validity period will start and end on those dates. The default validity period is five years. Set the standard extensions for the certificate. The required extensions are chosen by default. To change the default choices, read the guidelines explained in Appendix B, Defaults, Constraints, and Extensions for Certificates and CRLs . Note Certificate extensions are required to set up a CA hierarchy. Subordinate CAs must have certificates that include the extension identifying them as either a subordinate SSL CA (which allows them to issue certificates for SSL) or a subordinate email CA (which allows them to issue certificates for secure email). Disabling certificate extensions means that CA hierarchies cannot be set up. Basic Constraints. The associated fields are CA setting and a numeric setting for the certification path length. Extended Key Usage. Authority Key Identifier. Subject Key Identifier. Key Usage. The digital signature (bit 0), non-repudiation (bit 1), key certificate sign (bit 5), and CRL sign (bit 6) bits are set by default. The extension is marked critical as recommended by the PKIX standard and RFC 2459. See RFC 2459 for a description of the Key Usage extension. Base-64 SEQUENCE of extensions. This is for custom extensions. Paste the extension in MIME 64 DER-encoded format into the text field. To add multiple extensions, use the ExtJoiner program. For information on using the tools, see the Certificate System Command-Line Tools Guide . 
The wizard generates the key pairs and displays the certificate signing request. The request is in base-64 encoded PKCS #10 format and is bounded by the marker lines -----BEGIN NEW CERTIFICATE REQUEST----- and -----END NEW CERTIFICATE REQUEST----- . For example: The wizard also copies the certificate request to a text file it creates in the configuration directory, which is located in /var/lib/pki/ instance_name / subsystem_type /conf/ . The name of the text file depends on the type of certificate requested. The possible text files are listed in Table 17.1, "Files Created for Certificate Signing Requests" . Table 17.1. Files Created for Certificate Signing Requests Filename Certificate Signing Request cacsr.txt CA signing certificate ocspcsr.txt Certificate Manager OCSP signing certificate ocspcsr.txt OCSP signing certificate Do not modify the certificate request before sending it to the CA. The request can either be submitted automatically through the wizard or copied to the clipboard and manually submitted to the CA through its end-entities page. Note The wizard's auto-submission feature can submit requests to a remote Certificate Manager only. It cannot be used for submitting the request to a third-party CA. To submit it to a third-party CA, use the certificate request file. Retrieve the certificate. Open the Certificate Manager end-entities page. Click the Retrieval tab. Fill in the request ID number that was created when the certificate request was submitted, and click Submit . The page shows the status of the certificate request. If the status is complete , then there is a link to the certificate. Click the Issued certificate link. The new certificate information is shown in pretty-print format, in base-64 encoded format, and in PKCS #7 format. Copy the base-64 encoded certificate, including the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- marker lines, to a text file. Save the text file, and use it to store a copy of the certificate in a subsystem's internal database. See Section 15.3.2.1, "Creating Users" . Note pkiconsole is being deprecated. 17.2.2. Requesting Other Certificates Note It is important that the user generate and submit the client request from the computer that will be used later to access the subsystem because part of the request process generates a private key on the local machine. If location independence is required, use a hardware token, such as a smart card, to store the key pair and the certificate. Open the subsystem console. For example: In the Configuration tab, select System Keys and Certificates in the navigation tree. In the right panel, select the Local Certificates tab. Click Add/Renew . Select the Request a certificate radio button. Choose the certificate type to request. The types of certificates that can be requested varies depending on the subsystem. Note If selecting to create an "other" certificate, the Certificate Type field becomes active. Fill in the type of certificate to create, either caCrlSigning for the CRL signing certificate, caSignedLogCert for an audit log signing certificate, or client for an SSL client certificate. Select which type of CA will sign the request. The options are to use the local CA signing certificate or to create a request to submit to another CA. Set the key-pair information and set the location to generate the keys (the token), which can be either the internal security database directory or one of the listed external tokens. To create a new certificate, you must create a new key pair. 
Using an existing key pair will simply renew an existing certificate. Give the subject name. Either enter values for individual DN attributes to build the subject DN or enter the full string. Note For an SSL server certificate, the common name must be the fully-qualified host name of the Certificate System in the format machine_name.domain.domain . The CA certificate request forms support all UTF-8 characters for the common name, organizational unit, and requester name fields. This support does not include supporting internationalized domain names. Specify the start and end dates of the validity period for the certificate and the time at which the validity period will start and end on those dates. The default validity period is five years. Set the standard extensions for the certificate. The required extensions are chosen by default. To change the default choices, read the guidelines explained in Appendix B, Defaults, Constraints, and Extensions for Certificates and CRLs . Extended Key Usage. Authority Key Identifier. Subject Key Identifier. Key Usage. The digital signature (bit 0), non-repudiation (bit 1), key certificate sign (bit 5), and CRL sign (bit 6) bits are set by default. The extension is marked critical as recommended by the PKIX standard and RFC 2459. See RFC 2459 for a description of the Key Usage extension. Base-64 SEQUENCE of extensions. This is for custom extensions. Paste the extension in MIME 64 DER-encoded format into the text field. To add multiple extensions, use the ExtJoiner program. For information on using the tools, see the Certificate System Command-Line Tools Guide . The wizard generates the key pairs and displays the certificate signing request. The request is in base-64 encoded PKCS #10 format and is bounded by the marker lines -----BEGIN NEW CERTIFICATE REQUEST----- and -----END NEW CERTIFICATE REQUEST----- . For example: The wizard also copies the certificate request to a text file it creates in the configuration directory, which is located in /var/lib/pki/ instance_name / subsystem_type /conf/ . The name of the text file depends on the type of certificate requested. The possible text files are listed in Table 17.2, "Files Created for Certificate Signing Requests" . Table 17.2. Files Created for Certificate Signing Requests Filename Certificate Signing Request kracsr.txt KRA transport certificate sslcsr.txt SSL server certificate othercsr.txt Other certificates, such as Certificate Manager CRL signing certificate or SSL client certificate Do not modify the certificate request before sending it to the CA. The request can either be submitted automatically through the wizard or copied to the clipboard and manually submitted to the CA through its end-entities page. Note The wizard's auto-submission feature can submit requests to a remote Certificate Manager only. It cannot be used for submitting the request to a third-party CA. To submit the request to a third-party CA, use one of the certificate request files. Retrieve the certificate. Open the Certificate Manager end-entities page. Click the Retrieval tab. Fill in the request ID number that was created when the certificate request was submitted, and click Submit . The page shows the status of the certificate request. If the status is complete , then there is a link to the certificate. Click the Issued certificate link. The new certificate information is shown in pretty-print format, in base-64 encoded format, and in PKCS #7 format. 
Copy the base-64 encoded certificate, including the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- marker lines, to a text file. Save the text file, and use it to store a copy of the certificate in a subsystem's internal database. See Section 15.3.2.1, "Creating Users" . | [
"pkiconsole https://server.example.com:8443/ca",
"-----BEGIN NEW CERTIFICATE REQUEST----- MIICJzCCAZCgAwIBAgIBAzANBgkqhkiG9w0BAQQFADBC6SAwHgYDVQQKExdOZXRzY2FwZSBDb21tdW5pY2 F0aW9uczngjhnMVQ2VydGlmaWNhdGUgQXV0aG9yaXR5MB4XDTk4MDgyNzE5MDAwMFoXDTk5MDIyMzE5MDA wMnbjdgngYoxIDAeBgNVBAoTF05ldHNjYXBlIENvbW11bmljYXRpb25zMQ8wDQYDVQQLEwZQZW9wbGUxFz AVBgoJkiaJkIsZAEBEwdzdXByaXlhMRcwFQYDVQQDEw5TdXByaXlhIFNoZXR0eTEjMCEGCSqGSIb3Dbndg JARYUc3Vwcml5Yhvfggsvwryw4y7214vAOBgNVHQ8BAf8EBAMCBLAwFAYJYIZIAYb4QgEBAQHBAQDAgCAM A0GCSqGSIb3DQEBBAUAA4GBAFi9FzyJlLmS+kzsue0kTXawbwamGdYql2w4hIBgdR+jWeLmD4CP4x -----END NEW CERTIFICATE REQUEST-----",
"https://server.example.com:8443/ca/ee/ca",
"pkiconsole https://server.example.com:8443/ca",
"-----BEGIN NEW CERTIFICATE REQUEST----- MIICJzCCAZCgAwIBAgIBAzANBgkqhkiG9w0BAQQFADBC6SAwHgYDVQQKExdOZXRzY2FwZSBDb21tdW5pY2 F0aW9uczngjhnMVQ2VydGlmaWNhdGUgQXV0aG9yaXR5MB4XDTk4MDgyNzE5MDAwMFoXDTk5MDIyMzE5MDA wMnbjdgngYoxIDAeBgNVBAoTF05ldHNjYXBlIENvbW11bmljYXRpb25zMQ8wDQYDVQQLEwZQZW9wbGUxFz AVBgoJkiaJkIsZAEBEwdzdXByaXlhMRcwFQYDVQQDEw5TdXByaXlhIFNoZXR0eTEjMCEGCSqGSIb3Dbndg JARYUc3Vwcml5Yhvfggsvwryw4y7214vAOBgNVHQ8BAf8EBAMCBLAwFAYJYIZIAYb4QgEBAQHBAQDAgCAM A0GCSqGSIb3DQEBBAUAA4GBAFi9FzyJlLmS+kzsue0kTXawbwamGdYql2w4hIBgdR+jWeLmD4CP4x -----END NEW CERTIFICATE REQUEST-----",
"https://server.example.com:8443/ca/ee/ca"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Requesting_a_Subsystem_Server_or_Signing_Certificate_through_the_Console |
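Before submitting the request and after retrieving the issued certificate, it can help to work with the files from a shell. The following is a minimal sketch and is not part of the documented wizard flow: the CSR file name, the alias directory path, the certificate nickname, and the trust flags are assumptions that you must adapt to your own instance.

# Inspect the CSR that the wizard wrote to the configuration directory
# (the file name depends on the certificate type, for example cacsr.txt).
openssl req -in /var/lib/pki/instance_name/ca/conf/cacsr.txt -noout -text

# After saving the retrieved base-64 certificate (including the BEGIN/END
# CERTIFICATE marker lines) to a text file, import it into the instance's
# NSS certificate database; the nickname and trust flags are illustrative only.
certutil -A -d /var/lib/pki/instance_name/alias -n "Server-Cert cert-instance_name" -t "u,u,u" -i /tmp/issued-cert.txt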
16.6. Additional Resources | 16.6. Additional Resources For additional information, see The DHCP Handbook; Ralph Droms and Ted Lemon; 2003 or the following resources. 16.6.1. Installed Documentation dhcpd man page - Describes how the DHCP daemon works. dhcpd.conf man page - Explains how to configure the DHCP configuration file; includes some examples. dhcpd.leases man page - Describes a persistent database of leases. dhcp-options man page - Explains the syntax for declaring DHCP options in dhcpd.conf ; includes some examples. dhcrelay man page - Explains the DHCP Relay Agent and its configuration options. /usr/share/doc/dhcp-< version >/ - Contains sample files, README files, and release notes for current versions of the DHCP service. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-dhcp-additional-resources |
Chapter 16. Deploying routed provider networks | Chapter 16. Deploying routed provider networks 16.1. Advantages of routed provider networks In Red Hat OpenStack Platform (RHOSP), administrators can create routed provider networks. Routed provider networks are typically used in edge deployments, and rely on multiple layer 2 network segments instead of traditional networks that have only one segment. Routed provider networks simplify the cloud for end users because they see only one network. For administrators, routed provider networks deliver scalability and fault tolerance. For example, if a major error occurs, only one segment is impacted instead of the entire network failing. Before routed provider networks, administrators typically had to choose from one of the following architectures: A single, large layer 2 network Multiple, smaller layer 2 networks Single, large layer 2 networks become complex when scaling and reduce fault tolerance (increase failure domains). Multiple, smaller layer 2 networks scale better and shrink failure domains, but can introduce complexity for end users. In RHOSP 16.2 and later, you can deploy routed provider networks using the Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN). (Routed provider network support for the ML2/Open vSwitch (OVS) and SR-IOV mechanism drivers was introduced in RHOSP 16.1.1.) Additional resources Section 16.2, "Fundamentals of routed provider networks" 16.2. Fundamentals of routed provider networks A routed provider network is different from other types of networks because of the one-to-one association between a network subnet and a segment. In the past, the Red Hat OpenStack Platform (RHOSP) Networking service did not support routed provider networks, because the Networking service required that all subnets must either belong to the same segment or to no segment. With routed provider networks, the IP addresses available to virtual machine (VM) instances depend on the segment of the network available on the particular compute node. The Networking service port can be associated with only one network segment. Similar to conventional networking, layer 2 (switching) handles transit of traffic between ports on the same network segment, and layer 3 (routing) handles transit of traffic between segments. The Networking service does not provide layer 3 services between segments. Instead, it relies on physical network infrastructure to route subnets. Thus, both the Networking service and physical network infrastructure must contain configuration for routed provider networks, similar to conventional provider networks. You can configure the Compute scheduler to filter Compute nodes that have affinity with routed network segments, so that the scheduler places instances only on Compute nodes that are in the required routed provider network segment. If you require a DHCP-metadata service, you must define an availability zone for each edge site or network segment, to ensure that the local DHCP agent is deployed. Additional resources Section 16.1, "Advantages of routed provider networks" 16.3. Limitations of routed provider networks The known constraints of routed provider networks in Red Hat OpenStack Platform include: North-south routing with central SNAT or a floating IP is not supported. When using SR-IOV or PCI pass-through, physical network (physnet) names must be the same in central and remote sites or segments. You cannot reuse segment IDs. 16.4.
Preparing for a routed provider network To create a routed provider network in Red Hat OpenStack Platform (RHOSP), you must first gather the network information that is required to create it. You must configure the overcloud to create a custom role that deploys a RHOSP Networking service (neutron) metadata agent for the Compute nodes that contain the network segments. For environments that use the ML2/OVS mechanism driver, in addition to the metadata agent, you must also include the NeutronDhcpAgent service on the Compute nodes. On the Controllers that are running the Compute scheduler services, you must enable scheduling support for routed provider networks. Prerequisites You must be a RHOSP user with the admin role. Procedure Gather the VLAN IDs from the tripleo-heat-templates/network_data.yaml file for the network you want to create the routed provider network on, and assign unique physical network names for each segment that you will create on the routed provider network. This enables reuse of the same segmentation details between subnets. Create a reference table to visualize the relationships between the VLAN IDs, segments, and physical network names: Table 16.1. Example - routed provider network segment definitions Routed provider network VLAN ID Segment Physical network multisegment1 128 segment1 provider1 multisegment1 129 segment2 provider2 Plan the routing between segments. Each subnet on a segment must contain the gateway address of the router interface on that particular subnet. You need the subnet address in both IPv4 and IPv6 formats. Table 16.2. Example - routing plan for routed provider network segments Routed provider network Segment Subnet address Gateway address multisegment1 segment1 (IPv4) 203.0.113.0/24 203.0.113.1 multisegment1 segment1 (IPv6) fd00:203:0:113::/64 fd00:203:0:113::1 multisegment1 segment2 (IPv4) 198.51.100.0/24 198.51.100.1 multisegment1 segment2 (IPv6) fd00:198:51:100::/64 fd00:198:51:100::1 Routed provider networks require that Compute nodes reside on different segments. Check the templates/overcloud-baremetal-deployed.yaml file to ensure that every Compute host in a routed provider network has direct connectivity to one of its segments. For more information, see Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. Ensure that the NeutronMetadataAgent service is included in templates/roles_data-custom.yaml for the Compute nodes containing the segments: For more information, see Composable services and custom roles in the Customizing your Red Hat OpenStack Platform deployment guide. When using the ML2/OVS mechanism driver, in addition to the NeutronMetadataAgent service, also ensure that the NeutronDhcpAgent service is included in templates/roles_data-custom.yaml for the Compute nodes containing the segments: Tip Unlike conventional provider networks, a DHCP agent cannot support more than one segment within a network. Deploy DHCP agents on the Compute nodes containing the segments rather than on the network nodes to reduce the node count. Create a routed provider network environment file, for example, rpn_env.yaml.
Configure DHCP to enable metadata support on isolated networks: Ensure that the segments service plug-in is loaded into the Networking service: If the segments plug-in is missing, add it to the NeutronServicePlugins parameter: Example Important When you add new values to the NeutronServicePlugins parameter, RHOSP director overwrites any previously declared values with the ones that you are adding. Therefore, when you are adding segments , you must also include any previously declared Networking service plug-ins. To verify the network with the Placement service before scheduling an instance on a host, enable scheduling support for routed provider networks on the Controllers that are running the Compute scheduler services. Example Add your routed provider network environment file to the stack with your other environment files and deploy the overcloud: Next steps Creating a routed provider network Additional resources Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide Composable services and custom roles in the Customizing your Red Hat OpenStack Platform deployment guide 16.5. Creating a routed provider network Routed provider networks simplify the Red Hat OpenStack Platform (RHOSP) cloud for end users because they see only one network. For administrators, routed provider networks deliver scalability and fault tolerance. When you perform this procedure, you create a routed provider network with two network segments. Each segment contains one IPv4 subnet and one IPv6 subnet. Prerequisites Complete the steps in Section 16.4, "Preparing for a routed provider network" . You must be a RHOSP user with the admin role. Procedure Create a VLAN provider network that includes a default segment. In this example, the VLAN provider network is named multisegment1 and uses a physical network called provider1 and a VLAN whose ID is 128 : Example Sample output Rename the default network segment to segment1 . Obtain the segment ID: Sample output Using the segment ID, rename the network segment to segment1 : Create a second segment on the provider network. In this example, the network segment uses a physical network called provider2 and a VLAN whose ID is 129 : Example Sample output Verify that the network contains the segment1 and segment2 segments: Sample output Create one IPv4 subnet and one IPv6 subnet on the segment1 segment. In this example, the IPv4 subnet uses 203.0.113.0/24 : Example Sample output In this example, the IPv6 subnet uses fd00:203:0:113::/64 : Example Sample output Note By default, IPv6 subnets on provider networks rely on physical network infrastructure for stateless address autoconfiguration (SLAAC) and router advertisement. Create one IPv4 subnet and one IPv6 subnet on the segment2 segment. In this example, the IPv4 subnet uses 198.51.100.0/24 : Example Sample output In this example, the IPv6 subnet uses fd00:198:51:100::/64 : Example Sample output Verification Verify that each IPv4 subnet associates with at least one DHCP agent: Sample output Verify that inventories were created for each segment IPv4 subnet in the Compute service placement API. Run this command for all segment IDs: Sample output In this sample output, only one of the segments is shown: Verify that host aggregates were created for each segment in the Compute service: Sample output In this example, only one of the segments is shown: Launch one or more instances.
Each instance obtains IP addresses according to the segment it uses on the particular compute node. Note If a fixed IP is specified by the user in the port create request, that particular IP is allocated immediately to the port. However, creating a port and passing it to an instance yields a different behavior than on conventional networks. If the fixed IP is not specified on the port create request, the Networking service defers assignment of IP addresses to the port until the particular compute node becomes apparent. For example, when you run this command: Sample output Additional resources Section 16.4, "Preparing for a routed provider network" network create in the Command line interface reference network segment create in the Command line interface reference subnet create in the Command line interface reference port create in the Command line interface reference 16.6. Migrating a non-routed network to a routed provider network You can migrate a non-routed network to a routed provider network by associating the subnet of the network with the ID of the network segment. Prerequisites The non-routed network you are migrating must contain only one segment and only one subnet. Important In non-routed provider networks that contain multiple subnets or network segments, it is not possible to safely migrate to a routed provider network. In non-routed networks, addresses from the subnet allocation pools are assigned to ports without consideration of the network segment to which the port is bound. Procedure For the network that is being migrated, obtain the ID of the current network segment. Example Sample output For the network that is being migrated, obtain the ID of the current subnet. Example Sample output Verify that the current segment_id of the subnet has a value of None . Example Sample output Change the value of the subnet segment_id to the network segment ID. Here is an example: Verification Verify that the subnet is now associated with the desired network segment. Example Sample output Additional resources subnet show in the Command line interface reference subnet set in the Command line interface reference | [
"- name: Compute ServicesDefault: - OS::TripleO::Services::NeutronMetadataAgent",
"- name: Compute ServicesDefault: - OS::TripleO::Services::NeutronDhcpAgent - OS::TripleO::Services::NeutronMetadataAgent",
"parameter_defaults: NeutronEnableIsolatedMetadata: true",
"openstack extension list --network --max-width 80 | grep -E \"Segment\"",
"parameter_defaults: NeutronEnableIsolatedMetadata: true NeutronServicePlugins: 'router,qos,segments,trunk,placement'",
"parameter_defaults: NeutronEnableIsolatedMetadata: true NeutronServicePlugins: 'router,qos,segments,trunk,placement' NovaSchedulerQueryPlacementForRoutedNetworkAggregates: true",
"openstack overcloud deploy --templates -e <your_environment_files> -e /home/stack/templates/rpn_env.yaml",
"openstack network create --share --provider-physical-network provider1 --provider-network-type vlan --provider-segment 128 multisegment1",
"+---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | UP | | id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | | ipv4_address_scope | None | | ipv6_address_scope | None | | l2_adjacency | True | | mtu | 1500 | | name | multisegment1 | | port_security_enabled | True | | provider:network_type | vlan | | provider:physical_network | provider1 | | provider:segmentation_id | 128 | | revision_number | 1 | | router:external | Internal | | shared | True | | status | ACTIVE | | subnets | | | tags | [] | +---------------------------+--------------------------------------+",
"openstack network segment list --network multisegment1",
"+--------------------------------------+----------+--------------------------------------+--------------+---------+ | ID | Name | Network | Network Type | Segment | +--------------------------------------+----------+--------------------------------------+--------------+---------+ | 43e16869-ad31-48e4-87ce-acf756709e18 | None | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | vlan | 128 | +--------------------------------------+----------+--------------------------------------+--------------+---------+",
"openstack network segment set --name segment1 43e16869-ad31-48e4-87ce-acf756709e18",
"openstack network segment create --physical-network provider2 --network-type vlan --segment 129 --network multisegment1 segment2",
"+------------------+--------------------------------------+ | Field | Value | +------------------+--------------------------------------+ | description | None | | headers | | | id | 053b7925-9a89-4489-9992-e164c8cc8763 | | name | segment2 | | network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | | network_type | vlan | | physical_network | provider2 | | revision_number | 1 | | segmentation_id | 129 | | tags | [] | +------------------+--------------------------------------+",
"openstack network segment list --network multisegment1",
"+--------------------------------------+----------+--------------------------------------+--------------+---------+ | ID | Name | Network | Network Type | Segment | +--------------------------------------+----------+--------------------------------------+--------------+---------+ | 053b7925-9a89-4489-9992-e164c8cc8763 | segment2 | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | vlan | 129 | | 43e16869-ad31-48e4-87ce-acf756709e18 | segment1 | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | vlan | 128 | +--------------------------------------+----------+--------------------------------------+--------------+---------+",
"openstack subnet create --network multisegment1 --network-segment segment1 --ip-version 4 --subnet-range 203.0.113.0/24 multisegment1-segment1-v4",
"+-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | allocation_pools | 203.0.113.2-203.0.113.254 | | cidr | 203.0.113.0/24 | | enable_dhcp | True | | gateway_ip | 203.0.113.1 | | id | c428797a-6f8e-4cb1-b394-c404318a2762 | | ip_version | 4 | | name | multisegment1-segment1-v4 | | network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | | revision_number | 1 | | segment_id | 43e16869-ad31-48e4-87ce-acf756709e18 | | tags | [] | +-------------------+--------------------------------------+",
"openstack subnet create --network multisegment1 --network-segment segment1 --ip-version 6 --subnet-range fd00:203:0:113::/64 --ipv6-address-mode slaac multisegment1-segment1-v6",
"+-------------------+------------------------------------------------------+ | Field | Value | +-------------------+------------------------------------------------------+ | allocation_pools | fd00:203:0:113::2-fd00:203:0:113:ffff:ffff:ffff:ffff | | cidr | fd00:203:0:113::/64 | | enable_dhcp | True | | gateway_ip | fd00:203:0:113::1 | | id | e41cb069-9902-4c01-9e1c-268c8252256a | | ip_version | 6 | | ipv6_address_mode | slaac | | ipv6_ra_mode | None | | name | multisegment1-segment1-v6 | | network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | | revision_number | 1 | | segment_id | 43e16869-ad31-48e4-87ce-acf756709e18 | | tags | [] | +-------------------+------------------------------------------------------+",
"openstack subnet create --network multisegment1 --network-segment segment2 --ip-version 4 --subnet-range 198.51.100.0/24 multisegment1-segment2-v4",
"+-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | allocation_pools | 198.51.100.2-198.51.100.254 | | cidr | 198.51.100.0/24 | | enable_dhcp | True | | gateway_ip | 198.51.100.1 | | id | 242755c2-f5fd-4e7d-bd7a-342ca95e50b2 | | ip_version | 4 | | name | multisegment1-segment2-v4 | | network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | | revision_number | 1 | | segment_id | 053b7925-9a89-4489-9992-e164c8cc8763 | | tags | [] | +-------------------+--------------------------------------+",
"openstack subnet create --network multisegment1 --network-segment segment2 --ip-version 6 --subnet-range fd00:198:51:100::/64 --ipv6-address-mode slaac multisegment1-segment2-v6",
"+-------------------+--------------------------------------------------------+ | Field | Value | +-------------------+--------------------------------------------------------+ | allocation_pools | fd00:198:51:100::2-fd00:198:51:100:ffff:ffff:ffff:ffff | | cidr | fd00:198:51:100::/64 | | enable_dhcp | True | | gateway_ip | fd00:198:51:100::1 | | id | b884c40e-9cfe-4d1b-a085-0a15488e9441 | | ip_version | 6 | | ipv6_address_mode | slaac | | ipv6_ra_mode | None | | name | multisegment1-segment2-v6 | | network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | | revision_number | 1 | | segment_id | 053b7925-9a89-4489-9992-e164c8cc8763 | | tags | [] | +-------------------+--------------------------------------------------------+",
"openstack network agent list --agent-type dhcp --network multisegment1",
"+--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+ | c904ed10-922c-4c1a-84fd-d928abaf8f55 | DHCP agent | compute0001 | nova | :-) | UP | neutron-dhcp-agent | | e0b22cc0-d2a6-4f1c-b17c-27558e20b454 | DHCP agent | compute0101 | nova | :-) | UP | neutron-dhcp-agent | +--------------------------------------+------------+-------------+-------------------+-------+-------+--------------------+",
"SEGMENT_ID=053b7925-9a89-4489-9992-e164c8cc8763 openstack resource provider inventory list USDSEGMENT_ID",
"+----------------+------------------+----------+----------+-----------+----------+-------+ | resource_class | allocation_ratio | max_unit | reserved | step_size | min_unit | total | +----------------+------------------+----------+----------+-----------+----------+-------+ | IPV4_ADDRESS | 1.0 | 1 | 2 | 1 | 1 | 30 | +----------------+------------------+----------+----------+-----------+----------+-------+",
"openstack aggregate list",
"+----+---------------------------------------------------------+-------------------+ | Id | Name | Availability Zone | +----+---------------------------------------------------------+-------------------+ | 10 | Neutron segment id 053b7925-9a89-4489-9992-e164c8cc8763 | None | +----+---------------------------------------------------------+-------------------+",
"openstack port create --network multisegment1 port1",
"+-----------------------+--------------------------------------+ | Field | Value | +-----------------------+--------------------------------------+ | admin_state_up | UP | | binding_vnic_type | normal | | id | 6181fb47-7a74-4add-9b6b-f9837c1c90c4 | | ip_allocation | deferred | | mac_address | fa:16:3e:34:de:9b | | name | port1 | | network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | | port_security_enabled | True | | revision_number | 1 | | security_groups | e4fcef0d-e2c5-40c3-a385-9c33ac9289c5 | | status | DOWN | | tags | [] | +-----------------------+--------------------------------------+",
"openstack network segment list --network my_network",
"+--------------------------------------+------+--------------------------------------+--------------+---------+ | ID | Name | Network | Network Type | Segment | +--------------------------------------+------+--------------------------------------+--------------+---------+ | 81e5453d-4c9f-43a5-8ddf-feaf3937e8c7 | None | 45e84575-2918-471c-95c0-018b961a2984 | flat | None | +--------------------------------------+------+--------------------------------------+--------------+---------+",
"openstack network segment list --network my_network",
"+--------------------------------------+-----------+--------------------------------------+---------------+ | ID | Name | Network | Subnet | +--------------------------------------+-----------+--------------------------------------+---------------+ | 71d931d2-0328-46ae-93bc-126caf794307 | my_subnet | 45e84575-2918-471c-95c0-018b961a2984 | 172.24.4.0/24 | +--------------------------------------+-----------+--------------------------------------+---------------+",
"openstack subnet show my_subnet --c segment_id",
"+------------+-------+ | Field | Value | +------------+-------+ | segment_id | None | +------------+-------+",
"openstack subnet set --network-segment 81e5453d-4c9f-43a5-8ddf-feaf3937e8c7 my_subnet",
"openstack subnet show my_subnet --c segment_id",
"+------------+--------------------------------------+ | Field | Value | +------------+--------------------------------------+ | segment_id | 81e5453d-4c9f-43a5-8ddf-feaf3937e8c7 | +------------+--------------------------------------+"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_red_hat_openstack_platform_networking/deploy-routed-prov-networks_rhosp-network |
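The preparation steps in Section 16.4 are shown as separate snippets; assembled, they amount to one small environment file plus the deploy command. The following shell sketch shows one way to put them together. The file path is the one used in this chapter, but the plug-in list is an assumption: it must contain every Networking service plug-in that your deployment already declares.

# Write the routed provider network environment file.
cat > /home/stack/templates/rpn_env.yaml <<'EOF'
parameter_defaults:
  NeutronEnableIsolatedMetadata: true
  NeutronServicePlugins: 'router,qos,segments,trunk,placement'
  NovaSchedulerQueryPlacementForRoutedNetworkAggregates: true
EOF

# Add the file to the stack together with your other environment files.
openstack overcloud deploy --templates -e <your_environment_files> -e /home/stack/templates/rpn_env.yaml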
Chapter 3. Cluster capabilities | Chapter 3. Cluster capabilities Cluster administrators can use cluster capabilities to enable or disable optional components prior to installation. Cluster administrators can enable cluster capabilities at anytime after installation. Note Cluster administrators cannot disable a cluster capability after it is enabled. 3.1. Enabling cluster capabilities If you are using an installation method that includes customizing your cluster by creating an install-config.yaml file, you can select which cluster capabilities you want to make available on the cluster. Note If you customize your cluster by enabling or disabling specific cluster capabilities, you must manually maintain your install-config.yaml file. New OpenShift Container Platform updates might declare new capability handles for existing components, or introduce new components altogether. Users who customize their install-config.yaml file should consider periodically updating their install-config.yaml file as OpenShift Container Platform is updated. You can use the following configuration parameters to select cluster capabilities: capabilities: baselineCapabilitySet: v4.11 1 additionalEnabledCapabilities: 2 - CSISnapshot - Console - Storage 1 Defines a baseline set of capabilities to install. Valid values are None , vCurrent and v4.x . If you select None , all optional capabilities are disabled. The default value is vCurrent , which enables all optional capabilities. Note v4.x refers to any value up to and including the current cluster version. For example, valid values for a OpenShift Container Platform 4.12 cluster are v4.11 and v4.12 . 2 Defines a list of capabilities to explicitly enable. These capabilities are enabled in addition to the capabilities specified in baselineCapabilitySet . Note In this example, the default capability is set to v4.11 . The additionalEnabledCapabilities field enables additional capabilities over the default v4.11 capability set. The following table describes the baselineCapabilitySet values. Table 3.1. Cluster capabilities baselineCapabilitySet values description Value Description vCurrent Specify this option when you want to automatically add new, default capabilities that are introduced in new releases. v4.11 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.11. By specifying v4.11 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.11 are baremetal , MachineAPI , marketplace , and openshift-samples . v4.12 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.12. By specifying v4.12 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.12 are baremetal , MachineAPI , marketplace , openshift-samples , Console , Insights , Storage , and CSISnapshot . v4.13 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.13. By specifying v4.13 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.13 are baremetal , MachineAPI , marketplace , openshift-samples , Console , Insights , Storage , CSISnapshot , and NodeTuning . 
v4.14 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.14. By specifying v4.14 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.14 are baremetal , MachineAPI , marketplace , openshift-samples , Console , Insights , Storage , CSISnapshot , NodeTuning , ImageRegistry , Build , and DeploymentConfig . v4.15 Specify this option when you want to enable the default capabilities for OpenShift Container Platform 4.15. By specifying v4.15 , capabilities that are introduced in newer versions of OpenShift Container Platform are not enabled. The default capabilities in OpenShift Container Platform 4.15 are baremetal , MachineAPI , marketplace , OperatorLifecycleManager , openshift-samples , Console , Insights , Storage , CSISnapshot , NodeTuning , ImageRegistry , Build , CloudCredential , and DeploymentConfig . None Specify when the other sets are too large, and you do not need any capabilities or want to fine-tune via additionalEnabledCapabilities . Additional resources Installing a cluster on AWS with customizations Installing a cluster on GCP with customizations 3.2. Optional cluster capabilities in OpenShift Container Platform 4.15 Currently, cluster Operators provide the features for these optional capabilities. The following summarizes the features provided by each capability and what functionality you lose if it is disabled. Additional resources Cluster Operators reference 3.2.1. Bare-metal capability Purpose The Cluster Baremetal Operator provides the features for the baremetal capability. The Cluster Baremetal Operator (CBO) deploys all the components necessary to take a bare-metal server to a fully functioning worker node ready to run OpenShift Container Platform compute nodes. The CBO ensures that the metal3 deployment, which consists of the Bare Metal Operator (BMO) and Ironic containers, runs on one of the control plane nodes within the OpenShift Container Platform cluster. The CBO also listens for OpenShift Container Platform updates to resources that it watches and takes appropriate action. The bare-metal capability is required for deployments using installer-provisioned infrastructure. Disabling the bare-metal capability can result in unexpected problems with these deployments. It is recommended that cluster administrators only disable the bare-metal capability during installations with user-provisioned infrastructure that do not have any BareMetalHost resources in the cluster. Important If the bare-metal capability is disabled, the cluster cannot provision or manage bare-metal nodes. Only disable the capability if there are no BareMetalHost resources in your deployment. The baremetal capability depends on the MachineAPI capability. If you enable the baremetal capability, you must also enable MachineAPI . Additional resources Deploying installer-provisioned clusters on bare metal Preparing for bare metal cluster installation Bare metal configuration 3.2.2. Build capability Purpose The Build capability enables the Build API. The Build API manages the lifecycle of Build and BuildConfig objects. Important If the Build capability is disabled, the cluster cannot use Build or BuildConfig resources. Disable the capability only if Build and BuildConfig resources are not required in the cluster. 3.2.3. Cloud credential capability Purpose The Cloud Credential Operator provides features for the CloudCredential capability. 
Note Currently, disabling the CloudCredential capability is only supported for bare-metal clusters. The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run. By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. If no mode is specified, or the credentialsMode parameter is set to an empty string ( "" ), the CCO operates in its default mode. Additional resources About the Cloud Credential Operator 3.2.4. Cluster Image Registry capability Purpose The Cluster Image Registry Operator provides features for the ImageRegistry capability. The Cluster Image Registry Operator manages a singleton instance of the OpenShift image registry. It manages all configuration of the registry, including creating storage. On initial start up, the Operator creates a default image-registry resource instance based on the configuration detected in the cluster. This indicates what cloud storage type to use based on the cloud provider. If insufficient information is available to define a complete image-registry resource, then an incomplete resource is defined and the Operator updates the resource status with information about what is missing. The Cluster Image Registry Operator runs in the openshift-image-registry namespace and it also manages the registry instance in that location. All configuration and workload resources for the registry reside in that namespace. In order to integrate the image registry into the cluster's user authentication and authorization system, a service account token secret and an image pull secret are generated for each service account in the cluster. Important If you disable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, the service account token secret and image pull secret are not generated for each service account. If you disable the ImageRegistry capability, you can reduce the overall resource footprint of OpenShift Container Platform in resource-constrained environments. Depending on your deployment, you can disable this component if you do not need it. Project cluster-image-registry-operator Additional resources Image Registry Operator in OpenShift Container Platform Automatically generated secrets 3.2.5. Cluster storage capability Purpose The Cluster Storage Operator provides the features for the Storage capability. The Cluster Storage Operator sets OpenShift Container Platform cluster-wide storage defaults. It ensures a default storageclass exists for OpenShift Container Platform clusters. It also installs Container Storage Interface (CSI) drivers which enable your cluster to use various storage backends. Important If the cluster storage capability is disabled, the cluster will not have a default storageclass or any CSI drivers. Users with administrator privileges can create a default storageclass and manually install CSI drivers if the cluster storage capability is disabled. Notes The storage class that the Operator creates can be made non-default by editing its annotation, but this storage class cannot be deleted as long as the Operator runs. 3.2.6. 
Console capability Purpose The Console Operator provides the features for the Console capability. The Console Operator installs and maintains the OpenShift Container Platform web console on a cluster. The Console Operator is installed by default and automatically maintains a console. Additional resources Web console overview 3.2.7. CSI snapshot controller capability Purpose The Cluster CSI Snapshot Controller Operator provides the features for the CSISnapshot capability. The Cluster CSI Snapshot Controller Operator installs and maintains the CSI Snapshot Controller. The CSI Snapshot Controller is responsible for watching the VolumeSnapshot CRD objects and manages the creation and deletion lifecycle of volume snapshots. Additional resources CSI volume snapshots 3.2.8. DeploymentConfig capability Purpose The DeploymentConfig capability enables and manages the DeploymentConfig API. Important If you disable the DeploymentConfig capability, the following resources will not be available in the cluster: DeploymentConfig resources The deployer service account Disable the DeploymentConfig capability only if you do not require DeploymentConfig resources and the deployer service account in the cluster. 3.2.9. Insights capability Purpose The Insights Operator provides the features for the Insights capability. The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce proactive insights recommendations about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators through Insights Advisor on console.redhat.com . Notes Insights Operator complements OpenShift Container Platform Telemetry. Additional resources Using Insights Operator 3.2.10. Machine API capability Purpose The machine-api-operator , cluster-autoscaler-operator , and cluster-control-plane-machine-set-operator Operators provide the features for the MachineAPI capability. You can disable this capability only if you install a cluster with user-provisioned infrastructure. The Machine API capability is responsible for all machine configuration and management in the cluster. If you disable the Machine API capability during installation, you need to manage all machine-related tasks manually. Additional resources Overview of machine management Machine API Operator Cluster Autoscaler Operator Control Plane Machine Set Operator 3.2.11. Marketplace capability Purpose The Marketplace Operator provides the features for the marketplace capability. The Marketplace Operator simplifies the process for bringing off-cluster Operators to your cluster by using a set of default Operator Lifecycle Manager (OLM) catalogs on the cluster. When the Marketplace Operator is installed, it creates the openshift-marketplace namespace. OLM ensures catalog sources installed in the openshift-marketplace namespace are available for all namespaces on the cluster. If you disable the marketplace capability, the Marketplace Operator does not create the openshift-marketplace namespace. Catalog sources can still be configured and managed on the cluster manually, but OLM depends on the openshift-marketplace namespace in order to make catalogs available to all namespaces on the cluster. Users with elevated permissions to create namespaces prefixed with openshift- , such as system or cluster administrators, can manually create the openshift-marketplace namespace. 
If you enable the marketplace capability, you can enable and disable individual catalogs by configuring the Marketplace Operator. Additional resources Red Hat-provided Operator catalogs 3.2.12. Node Tuning capability Purpose The Node Tuning Operator provides features for the NodeTuning capability. The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. If you disable the NodeTuning capability, some default tuning settings will not be applied to the control-plane nodes. This might limit the scalability and performance of large clusters with over 900 nodes or 900 routes. Additional resources Using the Node Tuning Operator 3.2.13. OpenShift samples capability Purpose The Cluster Samples Operator provides the features for the openshift-samples capability. The Cluster Samples Operator manages the sample image streams and templates stored in the openshift namespace. On initial start up, the Operator creates the default samples configuration resource to initiate the creation of the image streams and templates. The configuration object is a cluster scoped object with the key cluster and type configs.samples . The image streams are the Red Hat Enterprise Linux CoreOS (RHCOS)-based OpenShift Container Platform image streams pointing to images on registry.redhat.io . Similarly, the templates are those categorized as OpenShift Container Platform templates. If you disable the samples capability, users cannot access the image streams, samples, and templates it provides. Depending on your deployment, you might want to disable this component if you do not need it. Additional resources Configuring the Cluster Samples Operator 3.2.14. Operator Lifecycle Manager capability Purpose Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework , an open source toolkit designed to manage Operators in an effective, automated, and scalable way. If an Operator requires any of the following APIs, then you must enable the OperatorLifecycleManager capability: ClusterServiceVersion CatalogSource Subscription InstallPlan OperatorGroup Important The marketplace capability depends on the OperatorLifecycleManager capability. You cannot disable the OperatorLifecycleManager capability and enable the marketplace capability. Additional resources Operator Lifecycle Manager concepts and resources 3.3. Viewing the cluster capabilities As a cluster administrator, you can view the capabilities by using the clusterversion resource status. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To view the status of the cluster capabilities, run the following command: USD oc get clusterversion version -o jsonpath='{.spec.capabilities}{"\n"}{.status.capabilities}{"\n"}' Example output {"additionalEnabledCapabilities":["openshift-samples"],"baselineCapabilitySet":"None"} {"enabledCapabilities":["openshift-samples"],"knownCapabilities":["CSISnapshot","Console","Insights","Storage","baremetal","marketplace","openshift-samples"]} 3.4. 
Enabling the cluster capabilities by setting baseline capability set As a cluster administrator, you can enable cluster capabilities any time after an OpenShift Container Platform installation by setting the baselineCapabilitySet configuration parameter. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To set the baselineCapabilitySet configuration parameter, run the following command: USD oc patch clusterversion version --type merge -p '{"spec":{"capabilities":{"baselineCapabilitySet":"vCurrent"}}}' 1 1 For baselineCapabilitySet you can specify vCurrent , v4.15 , or None . 3.5. Enabling the cluster capabilities by setting additional enabled capabilities As a cluster administrator, you can enable cluster capabilities any time after an OpenShift Container Platform installation by setting the additionalEnabledCapabilities configuration parameter. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure View the additional enabled capabilities by running the following command: USD oc get clusterversion version -o jsonpath='{.spec.capabilities.additionalEnabledCapabilities}{"\n"}' Example output ["openshift-samples"] To set the additionalEnabledCapabilities configuration parameter, run the following command: USD oc patch clusterversion/version --type merge -p '{"spec":{"capabilities":{"additionalEnabledCapabilities":["openshift-samples", "marketplace"]}}}' Important It is not possible to disable a capability which is already enabled in a cluster. The cluster version Operator (CVO) continues to reconcile the capability which is already enabled in the cluster. If you try to disable a capability, then the CVO shows the divergent spec: USD oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type=="ImplicitlyEnabledCapabilities")]}{"\n"}' Example output {"lastTransitionTime":"2022-07-22T03:14:35Z","message":"The following capabilities could not be disabled: openshift-samples","reason":"CapabilitiesImplicitlyEnabled","status":"True","type":"ImplicitlyEnabledCapabilities"} Note During cluster upgrades, it is possible that a given capability could be implicitly enabled. If a resource was already running on the cluster before the upgrade, then any capability that is part of the resource will be enabled. For example, during a cluster upgrade, a resource that is already running on the cluster might be changed by the system to be part of the marketplace capability. Even if the cluster administrator did not explicitly enable the marketplace capability, it is implicitly enabled by the system. | [
"capabilities: baselineCapabilitySet: v4.11 1 additionalEnabledCapabilities: 2 - CSISnapshot - Console - Storage",
"oc get clusterversion version -o jsonpath='{.spec.capabilities}{\"\\n\"}{.status.capabilities}{\"\\n\"}'",
"{\"additionalEnabledCapabilities\":[\"openshift-samples\"],\"baselineCapabilitySet\":\"None\"} {\"enabledCapabilities\":[\"openshift-samples\"],\"knownCapabilities\":[\"CSISnapshot\",\"Console\",\"Insights\",\"Storage\",\"baremetal\",\"marketplace\",\"openshift-samples\"]}",
"oc patch clusterversion version --type merge -p '{\"spec\":{\"capabilities\":{\"baselineCapabilitySet\":\"vCurrent\"}}}' 1",
"oc get clusterversion version -o jsonpath='{.spec.capabilities.additionalEnabledCapabilities}{\"\\n\"}'",
"[\"openshift-samples\"]",
"oc patch clusterversion/version --type merge -p '{\"spec\":{\"capabilities\":{\"additionalEnabledCapabilities\":[\"openshift-samples\", \"marketplace\"]}}}'",
"oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type==\"ImplicitlyEnabledCapabilities\")]}{\"\\n\"}'",
"{\"lastTransitionTime\":\"2022-07-22T03:14:35Z\",\"message\":\"The following capabilities could not be disabled: openshift-samples\",\"reason\":\"CapabilitiesImplicitlyEnabled\",\"status\":\"True\",\"type\":\"ImplicitlyEnabledCapabilities\"}"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installation_overview/cluster-capabilities |
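The commands in sections 3.3 to 3.5 can be strung together into a short check-and-patch sequence. The following sketch assumes you are logged in as a cluster administrator and want to add the marketplace capability; it reuses the jsonpath queries shown above.

# Show what is currently enabled and what the cluster knows about.
oc get clusterversion version -o jsonpath='{.status.capabilities.enabledCapabilities}{"\n"}{.status.capabilities.knownCapabilities}{"\n"}'

# Add a capability. A merge patch replaces the whole list, so include any
# values that are already set; capabilities can be added but never removed.
oc patch clusterversion/version --type merge -p '{"spec":{"capabilities":{"additionalEnabledCapabilities":["openshift-samples", "marketplace"]}}}'

# Confirm that the cluster version Operator has picked up the change.
oc get clusterversion version -o jsonpath='{.status.capabilities.enabledCapabilities}{"\n"}'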
Chapter 10. LocalSubjectAccessReview [authorization.k8s.io/v1] | Chapter 10. LocalSubjectAccessReview [authorization.k8s.io/v1] Description LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace. Having a namespace scoped resource makes it much easier to grant namespace scoped policy that includes permissions checking. Type object Required spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set status object SubjectAccessReviewStatus 10.1.1. .spec Description SubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set Type object Property Type Description extra object Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here. extra{} array (string) groups array (string) Groups is the groups you're testing for. nonResourceAttributes object NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface resourceAttributes object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface uid string UID information about the requesting user. user string User is the user you're testing for. If you specify "User" but not "Groups", then is it interpreted as "What if User were not a member of any groups 10.1.2. .spec.extra Description Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here. Type object 10.1.3. .spec.nonResourceAttributes Description NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface Type object Property Type Description path string Path is the URL path of the request verb string Verb is the standard HTTP verb 10.1.4. .spec.resourceAttributes Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. 
Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 10.1.5. .status Description SubjectAccessReviewStatus Type object Required allowed Property Type Description allowed boolean Allowed is required. True if the action would be allowed, false otherwise. denied boolean Denied is optional. True if the action would be denied, otherwise false. If both allowed is false and denied is false, then the authorizer has no opinion on whether to authorize the action. Denied may not be true if Allowed is true. evaluationError string EvaluationError is an indication that some error occurred during the authorization check. It is entirely possible to get an error and be able to continue determine authorization status in spite of it. For instance, RBAC can be missing a role, but enough roles are still present and bound to reason about the request. reason string Reason is optional. It indicates why a request was allowed or denied. 10.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/namespaces/{namespace}/localsubjectaccessreviews POST : create a LocalSubjectAccessReview 10.2.1. /apis/authorization.k8s.io/v1/namespaces/{namespace}/localsubjectaccessreviews Table 10.1. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 10.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. 
HTTP method POST Description create a LocalSubjectAccessReview Table 10.3. Body parameters Parameter Type Description body LocalSubjectAccessReview schema Table 10.4. HTTP responses HTTP code Response body 200 - OK LocalSubjectAccessReview schema 201 - Created LocalSubjectAccessReview schema 202 - Accepted LocalSubjectAccessReview schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authorization_apis/localsubjectaccessreview-authorization-k8s-io-v1 |
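The reference above lists the schema but no request example. One way to exercise the endpoint is to POST a review object with oc and read the result from the returned status. This is a hedged sketch: the namespace, user name, and resource attributes are placeholders, and exactly one of resourceAttributes or nonResourceAttributes may be set.

# Ask whether user "jane" can list pods in the "my-project" namespace.
oc create -f - -o yaml <<'EOF'
apiVersion: authorization.k8s.io/v1
kind: LocalSubjectAccessReview
metadata:
  namespace: my-project
spec:
  user: jane
  resourceAttributes:
    namespace: my-project
    verb: list
    group: ""
    resource: pods
EOF
# The review is not persisted; the answer is returned in .status.allowed,
# with .status.reason or .status.denied populated when the server sets them.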
Chapter 5. Tools for administration of Red Hat Satellite | Chapter 5. Tools for administration of Red Hat Satellite You can use multiple tools to manage Red Hat Satellite. 5.1. Satellite web UI overview You can manage and monitor your Satellite infrastructure from a browser with the Satellite web UI. For example, you can use the following navigation features in the Satellite web UI: Navigation feature Description Organization dropdown Choose the organization you want to manage. Location dropdown Choose the location you want to manage. Monitor Provides summary dashboards and reports. Content Provides content management tools. This includes content views, activation keys, and lifecycle environments. Hosts Provides host inventory and provisioning configuration tools. Configure Provides general configuration tools and data, including host groups and Ansible content. Infrastructure Provides tools on configuring how Satellite interacts with the environment. Provides event notifications to keep administrators informed of important environment changes. Administer Provides advanced configuration for settings such as users, role-based access control (RBAC), and general settings. Additional resources See Administering Red Hat Satellite for details on using the Satellite web UI. 5.2. Hammer CLI overview You can configure and manage your Satellite Server with CLI commands by using Hammer. Using Hammer has the following benefits: Create shell scripts based on Hammer commands for basic task automation. Redirect output from Hammer to other tools. Use the --debug option with Hammer to test responses to API calls before applying the API calls in a script. For example: hammer --debug organization list . To issue Hammer commands, a user must have access to your Satellite Server. Note To ensure a user-friendly and intuitive experience, the Satellite web UI takes priority when developing new functionality. Therefore, some features that are available in the Satellite web UI might not yet be available for Hammer. In the background, each Hammer command first establishes a binding to the API, then sends a request. This can have performance implications when executing a large number of Hammer commands in sequence. In contrast, scripts that use API commands communicate directly with the Satellite API and they establish the binding only once. Additional resources See Hammer CLI guide for details on using Hammer CLI. 5.3. Satellite API overview You can write custom scripts and external applications that access the Satellite API over HTTP with the Representational State Transfer (REST) API provided by Satellite Server. Use the REST API to integrate with enterprise IT systems and third-party applications, perform automated maintenance or error checking tasks, and automate repetitive tasks with scripts. Using the REST API has the following benefits: Configure any programming language, framework, or system with support for HTTP protocol to use the API. Create client applications that require minimal knowledge of the Satellite infrastructure because users discover many details at runtime. Adopt the resource-based REST model for intuitively managing a virtualization platform. Scripts based on API commands communicate directly with the Satellite API, which makes them faster than scripts based on Hammer commands or Ansible playbooks relying on modules within redhat.satellite. Important API commands differ between versions of Satellite. 
When you prepare to upgrade Satellite Server, update all the scripts that contain Satellite API commands. Additional resources See API guide for details on using the Satellite API. 5.4. Remote execution in Red Hat Satellite With remote execution, you can run jobs on hosts remotely from Capsules using shell scripts or Ansible tasks and playbooks. Use remote execution for the following benefits in Satellite: Run jobs on multiple hosts at once. Use variables in your commands for more granular control over the jobs you run. Use host facts and parameters to populate the variable values. Specify custom values for templates when you run the command. Communication for remote execution occurs through Capsule Server, which means that Satellite Server does not require direct access to the target host, and can scale to manage many hosts. To use remote execution, you must define a job template. A job template is a command that you want to apply to remote hosts. You can execute a job template multiple times. Satellite uses ERB syntax job templates. For more information, see Template Writing Reference in Managing hosts . By default, Satellite includes several job templates for shell scripts and Ansible. For more information, see Setting up Job Templates in Managing hosts . Additional resources See Executing a Remote Job in Managing hosts . See Configuring and Setting Up Remote Jobs in Managing configurations using Ansible integration . 5.5. Managing Satellite with Ansible collections Satellite Ansible Collections is a set of Ansible modules that interact with the Satellite API. You can manage and automate many aspects of Satellite with Satellite Ansible collections. Additional resources See Managing configurations using Ansible integration . See Administering Red Hat Satellite . 5.6. Kickstart workflow You can automate the installation process of a Satellite Server or Capsule Server by creating a Kickstart file that contains all the information that is required for the installation. When you run a Red Hat Satellite Kickstart script, the script performs the following actions: It specifies the installation location of a Satellite Server or a Capsule Server. It installs the predefined packages. It installs Subscription Manager. It uses Activation Keys to subscribe the hosts to Red Hat Satellite. It installs Puppet, and configures a puppet.conf file to indicate the Red Hat Satellite or Capsule instance. It enables Puppet to run and request a certificate. It runs user defined snippets. Additional resources For more information about Kickstart, see Performing an automated installation using Kickstart in Performing an advanced RHEL 8 installation . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/overview_concepts_and_deployment_considerations/tools-for-administration-of-satellite_planning |
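As a hedged illustration of the scripting approaches described above, the REST API can be called directly with curl; the hostname and credentials below are placeholders, and /api/v2/hosts is the standard hosts listing endpoint.

    # List hosts through the Satellite REST API (replace the URL and credentials with your own)
    curl --request GET --user admin:changeme \
         --header "Accept: application/json" \
         https://satellite.example.com/api/v2/hosts

    # By contrast, each Hammer call (for example, hammer host list) rebinds to the API,
    # which is why direct API scripts tend to scale better for large batches.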
Chapter 1. Account configuration After creating your account, update basic information about your company. Set your location and add your contact information. Note 1.1. Add your company information Once you have created your new account, add your company information with these steps: Click the gear icon located on the right of the top navigation bar. You will see the Overview window. Next to the Account Details heading, click the Edit link. Fill in the information for your account. The address you specify here serves two purposes: If you are on a paid plan, use this address for billing purposes. If you use the billing and payment modules, this address is also what your users will see on their invoices. 1.2. Select your preferred time zone On the same page you can also select the time zone you will use on all system displays. This setting affects analytics graphs. However, billing cycle calculations are made according to UTC time. | [
"The account view is only visible to administrators, not to members."
] | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/account-configuration |
Chapter 5. Configuring the web console in OpenShift Container Platform You can modify the OpenShift Container Platform web console to set a logout redirect URL or disable the quick start tutorials. 5.1. Prerequisites Deploy an OpenShift Container Platform cluster. 5.2. Configuring the web console You can configure the web console settings by editing the console.config.openshift.io resource. Edit the console.config.openshift.io resource: USD oc edit console.config.openshift.io cluster The following example displays the sample resource definition for the console: apiVersion: config.openshift.io/v1 kind: Console metadata: name: cluster spec: authentication: logoutRedirect: "" 1 status: consoleURL: "" 2 1 Specify the URL of the page to load when a user logs out of the web console. If you do not specify a value, the user returns to the login page for the web console. Specifying a logoutRedirect URL allows your users to perform single logout (SLO) through the identity provider to destroy their single sign-on session. 2 The web console URL. To update this to a custom value, see Customizing the web console URL . 5.3. Disabling quick starts in the web console You can use the Administrator perspective of the web console to disable one or more quick starts. Prerequisites You have cluster administrator permissions and are logged in to the web console. Procedure In the Administrator perspective, navigate to Administration Cluster Settings . On the Cluster Settings page, click the Configuration tab. On the Configuration page, click the Console configuration resource with the description operator.openshift.io . From the Action drop-down list, select Customize , which opens the Cluster configuration page. On the General tab, in the Quick starts section, you can select items in either the Enabled or Disabled list, and move them from one list to the other by using the arrow buttons. To enable or disable a single quick start, click the quick start, then use the single arrow buttons to move the quick start to the appropriate list. To enable or disable multiple quick starts at once, press Ctrl and click the quick starts you want to move. Then, use the single arrow buttons to move the quick starts to the appropriate list. To enable or disable all quick starts at once, click the double arrow buttons to move all of the quick starts to the appropriate list. | [
"oc edit console.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Console metadata: name: cluster spec: authentication: logoutRedirect: \"\" 1 status: consoleURL: \"\" 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/web_console/configuring-web-console |
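The quick start procedure above is UI-driven; for completeness, here is a hedged CLI sketch. It assumes the console operator configuration (consoles.operator.openshift.io, named cluster) exposes a spec.customization.quickStarts.disabled list, and the quick start name is a placeholder.

    # Disable a quick start by adding its name to the console operator configuration
    oc patch consoles.operator.openshift.io cluster --type merge \
      -p '{"spec":{"customization":{"quickStarts":{"disabled":["example-quick-start"]}}}}'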
probe::vm.kmem_cache_free | probe::vm.kmem_cache_free Name probe::vm.kmem_cache_free - Fires when kmem_cache_free is requested Synopsis vm.kmem_cache_free Values caller_function Name of the caller function. call_site Address of the function calling this kmemory function ptr Pointer to the kmemory allocated which is returned by kmem_cache name Name of the probe point | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-vm-kmem-cache-free |
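A short SystemTap sketch that uses this probe point follows; it is illustrative only and assumes kernel debuginfo and the vm tapset are installed on the system.

    # Trace kmem_cache frees for a few seconds; 'name' is the probe point name,
    # the remaining values are documented in the table above.
    probe vm.kmem_cache_free {
      printf("%s: %s freed object %p (call site %p)\n",
             name, caller_function, ptr, call_site)
    }
    probe timer.s(5) { exit() }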
Chapter 20. Red Hat Enterprise Linux Atomic Host 7.5.3 | Chapter 20. Red Hat Enterprise Linux Atomic Host 7.5.3 20.1. Atomic Host OStree update : New Tree Version: 7.5.3 (hash: 03d524a16c8d76897f097565ca7452c1a5e2541f8c2beab145adf622499c7c64) Changes since Tree Version 7.5.2 (hash: 7eae04224d894f6f0b57bf3c77f78c749d64813bd1543290f4b0276c81082617) Updated packages : microdnf-2-5.el7 cockpit-ostree-172-2.el7 20.2. Extras Updated packages : buildah-1.2-2.gitbe87762.el7 cockpit-172-2.el7 container-selinux-2.68-1.el7 container-storage-setup-0.11.0-2.git5eaf76c.el7 containernetworking-plugins-0.7.1-1.el7 docker-1.13.1-74.git6e3bb8e.el7 oci-systemd-hook-0.1.17-2.git83283a0.el7 podman-0.7.3-1.git0791210.el7 rhel-system-roles-1.0-2.el7 * runc-1.0.0-37.rc5.dev.gitad0f525.el7 The asterisk (*) marks packages that are available for Red Hat Enterprise Linux only. 20.2.1. Container Images Updated : Red Hat Enterprise Linux 7 Init Container Image (rhel7/rhel7-init) Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) Red Hat Enterprise Linux Atomic Image (rhel-atomic, rhel7-atomic, rhel7/rhel-atomic) Red Hat Enterprise Linux Atomic Net-SNMP Container Image (rhel7/net-snmp) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) Red Hat Enterprise Linux Atomic Support Tools Container Image (rhel7/support-tools) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic flannel Container Image (rhel7/flannel) Red Hat Enterprise Linux Atomic open-vm-tools Container Image (rhel7/open-vm-tools) Red Hat Enterprise Linux Atomic openscap Container Image (rhel7/openscap) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) Red Hat Enterprise Linux Container Image (rhel7.5, rhel7, rhel7/rhel, rhel) 20.3. New Features L1 Terminal Fault Attack vulnerability fixed in a new 7.5.3 image The RHEL Atomic Host 7.5.3 image has been updated to include security fixes for the L1 Terminal Fault Attack vulnerability. For more information, see this article . RHEL Atomic Host will not be supported on OpenShift 4.0 and later Beginning with Red Hat OpenShift 4.0, RHEL Atomic Host will not be supported on Red Hat OpenShift. Container images are now available for PowerPC 8 & 9 and s390x Beginning with RHEL Atomic Host 7.5.3, many of the container images are available not only for AMD64 and Intel 64 ( X86_64 ), but also for the little-endian variant of IBM Power Systems ( PowerPC 8 & 9 , also known as ppc64le ) and IBM z Systems ( s390x ). See Supported Architectures for Containers on RHEL if you need: details about this change architecture support information for individual images comprehensive information on architectures support for containers Distribution of architecture-specific base images will change in 7.6 Currently, the multi-architecture base OS images are available in the rhel7 repository and in the architecture-specific repository, for example rhel7/ppc64le . This will continue until RHEL Atomic Host 7.6. With RHEL Atomic Host 7.6, base images for all architectures will be available in the rhel7 repository. When you pull the base image, the image for the correct architecture will be pulled automatically based on the architecture you are using. 
Users of the architecture-specific repositories will need to update the FROM line in their Dockerfiles. Some users might not be able to access certain SRPMs using yum install For architectures other than AMD64 and Intel 64 ( X86_64 ), installing source RPMs from the Atomic Host and Extras channels is not possible using yum install . On the other hand, the source code is the same for all these architectures, and so is available using AMD64 and Intel 64 SRPMs. However, depending on your customer subscription, you might not be able to yum install AMD64 and Intel 64 SRPMs. In that case, follow the instructions in How to obtain source for Red Hat products shipped as container images . Also, if you only have IBM Power Systems ( PowerPC 8 & 9 , also known as ppc64le ) or IBM z Systems ( s390x ) subscriptions, you might need to request source code for the microdnf package directly from Red Hat. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_5_3
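To make the repository change above concrete, a hedged Dockerfile fragment follows; the image names illustrate the pattern described (architecture-specific repository versus the common rhel7 repository) and are not guaranteed to match the exact published names.

    # Before RHEL Atomic Host 7.6: base image referenced from an architecture-specific repository
    FROM rhel7/ppc64le

    # From RHEL Atomic Host 7.6 on: reference the common repository; the image for the
    # build host's architecture is pulled automatically
    FROM rhel7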
Chapter 3. Authentication [config.openshift.io/v1] | Chapter 3. Authentication [config.openshift.io/v1] Description Authentication specifies cluster-wide settings for authentication (like OAuth and webhook token authenticators). The canonical name of an instance is cluster . Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 3.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description oauthMetadata object oauthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for an external OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 If oauthMetadata.name is non-empty, this value has precedence over any metadata reference stored in status. The key "oauthMetadata" is used to locate the data. If specified and the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config. serviceAccountIssuer string serviceAccountIssuer is the identifier of the bound service account token issuer. The default is https://kubernetes.default.svc WARNING: Updating this field will not result in immediate invalidation of all bound tokens with the issuer value. Instead, the tokens issued by service account issuer will continue to be trusted for a time period chosen by the platform (currently set to 24h). This time period is subject to change over time. This allows internal components to transition to use new service account issuer without service distruption. type string type identifies the cluster managed, user facing authentication mode in use. Specifically, it manages the component that responds to login attempts. The default is IntegratedOAuth. webhookTokenAuthenticator object webhookTokenAuthenticator configures a remote token reviewer. These remote authentication webhooks can be used to verify bearer tokens via the tokenreviews.authentication.k8s.io REST API. This is required to honor bearer tokens that are provisioned by an external authentication service. Can only be set if "Type" is set to "None". webhookTokenAuthenticators array webhookTokenAuthenticators is DEPRECATED, setting it has no effect. 
webhookTokenAuthenticators[] object deprecatedWebhookTokenAuthenticator holds the necessary configuration options for a remote token authenticator. It's the same as WebhookTokenAuthenticator but it's missing the 'required' validation on KubeConfig field. 3.1.2. .spec.oauthMetadata Description oauthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for an external OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 If oauthMetadata.name is non-empty, this value has precedence over any metadata reference stored in status. The key "oauthMetadata" is used to locate the data. If specified and the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 3.1.3. .spec.webhookTokenAuthenticator Description webhookTokenAuthenticator configures a remote token reviewer. These remote authentication webhooks can be used to verify bearer tokens via the tokenreviews.authentication.k8s.io REST API. This is required to honor bearer tokens that are provisioned by an external authentication service. Can only be set if "Type" is set to "None". Type object Required kubeConfig Property Type Description kubeConfig object kubeConfig references a secret that contains kube config file data which describes how to access the remote webhook service. The namespace for the referenced secret is openshift-config. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. 3.1.4. .spec.webhookTokenAuthenticator.kubeConfig Description kubeConfig references a secret that contains kube config file data which describes how to access the remote webhook service. The namespace for the referenced secret is openshift-config. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 3.1.5. .spec.webhookTokenAuthenticators Description webhookTokenAuthenticators is DEPRECATED, setting it has no effect. Type array 3.1.6. .spec.webhookTokenAuthenticators[] Description deprecatedWebhookTokenAuthenticator holds the necessary configuration options for a remote token authenticator. It's the same as WebhookTokenAuthenticator but it's missing the 'required' validation on KubeConfig field. Type object Property Type Description kubeConfig object kubeConfig contains kube config file data which describes how to access the remote webhook service. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. 
If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. The namespace for this secret is determined by the point of use. 3.1.7. .spec.webhookTokenAuthenticators[].kubeConfig Description kubeConfig contains kube config file data which describes how to access the remote webhook service. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. The namespace for this secret is determined by the point of use. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 3.1.8. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description integratedOAuthMetadata object integratedOAuthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for the in-cluster integrated OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 This contains the observed value based on cluster state. An explicitly set value in spec.oauthMetadata has precedence over this field. This field has no meaning if authentication spec.type is not set to IntegratedOAuth. The key "oauthMetadata" is used to locate the data. If the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config-managed. 3.1.9. .status.integratedOAuthMetadata Description integratedOAuthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for the in-cluster integrated OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 This contains the observed value based on cluster state. An explicitly set value in spec.oauthMetadata has precedence over this field. This field has no meaning if authentication spec.type is not set to IntegratedOAuth. The key "oauthMetadata" is used to locate the data. If the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config-managed. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 3.2. 
API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/authentications DELETE : delete collection of Authentication GET : list objects of kind Authentication POST : create an Authentication /apis/config.openshift.io/v1/authentications/{name} DELETE : delete an Authentication GET : read the specified Authentication PATCH : partially update the specified Authentication PUT : replace the specified Authentication /apis/config.openshift.io/v1/authentications/{name}/status GET : read status of the specified Authentication PATCH : partially update status of the specified Authentication PUT : replace status of the specified Authentication 3.2.1. /apis/config.openshift.io/v1/authentications HTTP method DELETE Description delete collection of Authentication Table 3.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Authentication Table 3.2. HTTP responses HTTP code Reponse body 200 - OK AuthenticationList schema 401 - Unauthorized Empty HTTP method POST Description create an Authentication Table 3.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.4. Body parameters Parameter Type Description body Authentication schema Table 3.5. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 202 - Accepted Authentication schema 401 - Unauthorized Empty 3.2.2. /apis/config.openshift.io/v1/authentications/{name} Table 3.6. Global path parameters Parameter Type Description name string name of the Authentication HTTP method DELETE Description delete an Authentication Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Authentication Table 3.9. 
HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Authentication Table 3.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Authentication Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. Body parameters Parameter Type Description body Authentication schema Table 3.14. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty 3.2.3. /apis/config.openshift.io/v1/authentications/{name}/status Table 3.15. Global path parameters Parameter Type Description name string name of the Authentication HTTP method GET Description read status of the specified Authentication Table 3.16. 
HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Authentication Table 3.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.18. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Authentication Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body Authentication schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/config_apis/authentication-config-openshift-io-v1 |
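A hedged sketch of working with this resource follows; the config map name is a placeholder, and the spec only exercises fields documented above.

    # Inspect the cluster-wide Authentication resource (canonical name: cluster)
    oc get authentication.config.openshift.io cluster -o yaml

    # Minimal spec sketch: keep the integrated OAuth server and serve a custom OAuth
    # discovery document from a config map in the openshift-config namespace
    apiVersion: config.openshift.io/v1
    kind: Authentication
    metadata:
      name: cluster
    spec:
      type: IntegratedOAuth
      oauthMetadata:
        name: custom-oauth-metadata   # placeholder config map name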
Chapter 1. Features The features added in this release, and that were not in previous releases of AMQ Streams, are outlined below. Note To view all the enhancements and bugs that are resolved in this release, see the AMQ Streams Jira project . 1.1. Kafka 2.8.0 support AMQ Streams now supports Apache Kafka version 2.8.0. AMQ Streams uses Kafka 2.8.0. Only Kafka distributions built by Red Hat are supported. For upgrade instructions, see AMQ Streams and Kafka upgrades . Refer to the Kafka 2.7.0 and Kafka 2.8.0 Release Notes for additional information. Note Kafka 2.7.x is supported only for the purpose of upgrading to AMQ Streams 1.8. For more information on supported versions, see the Red Hat Knowledgebase article Red Hat AMQ 7 Component Details Page . Kafka 2.8.0 requires ZooKeeper version 3.5.9. Therefore, you need to upgrade ZooKeeper when upgrading from AMQ Streams 1.7 to AMQ Streams 1.8, as described in the upgrade documentation. Warning Kafka 2.8.0 provides early access to self-managed mode , where Kafka runs without ZooKeeper by utilizing the Raft protocol. Note that self-managed mode is not supported in AMQ Streams . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_amq_streams_1.8_on_rhel/features-str
Chapter 3. Running Red Hat build of Keycloak in a container | Chapter 3. Running Red Hat build of Keycloak in a container This chapter describes how to optimize and run the Red Hat build of Keycloak container image to provide the best experience running a container. Warning This chapter applies only for building an image that you run in a OpenShift environment. Only an OpenShift environment is supported for this image. It is not supported if you run it in other Kubernetes distributions. 3.1. Creating a customized and optimized container image The default Red Hat build of Keycloak container image ships ready to be configured and optimized. For the best start up of your Red Hat build of Keycloak container, build an image by running the build step during the container build. This step will save time in every subsequent start phase of the container image. 3.1.1. Writing your optimized Red Hat build of Keycloak Dockerfile The following Dockerfile creates a pre-configured Red Hat build of Keycloak image that enables the health and metrics endpoints, enables the token exchange feature, and uses a PostgreSQL database. Dockerfile: FROM registry.redhat.io/rhbk/keycloak-rhel9:24 as builder # Enable health and metrics support ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true # Configure a database vendor ENV KC_DB=postgres WORKDIR /opt/keycloak # for demonstration purposes only, please make sure to use proper certificates in production instead RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname "CN=server" -alias server -ext "SAN:c=DNS:localhost,IP:127.0.0.1" -keystore conf/server.keystore RUN /opt/keycloak/bin/kc.sh build FROM registry.redhat.io/rhbk/keycloak-rhel9:24 COPY --from=builder /opt/keycloak/ /opt/keycloak/ # change these values to point to a running postgres instance ENV KC_DB=postgres ENV KC_DB_URL=<DBURL> ENV KC_DB_USERNAME=<DBUSERNAME> ENV KC_DB_PASSWORD=<DBPASSWORD> ENV KC_HOSTNAME=localhost ENTRYPOINT ["/opt/keycloak/bin/kc.sh"] The build process includes multiple stages: Run the build command to set server build options to create an optimized image. The files generated by the build stage are copied into a new image. In the final image, additional configuration options for the hostname and database are set so that you don't need to set them again when running the container. In the entrypoint, the kc.sh enables access to all the distribution sub-commands. To install custom providers, you just need to define a step to include the JAR file(s) into the /opt/keycloak/providers directory. This step must be placed before the line that RUNs the build command, as below: # A example build step that downloads a JAR file from a URL and adds it to the providers directory FROM registry.redhat.io/rhbk/keycloak-rhel9:24 as builder ... # Add the provider JAR file to the providers directory ADD --chown=keycloak:keycloak --chmod=644 <MY_PROVIDER_JAR_URL> /opt/keycloak/providers/myprovider.jar ... # Context: RUN the build command RUN /opt/keycloak/bin/kc.sh build 3.1.2. Installing additional RPM packages If you try to install new software in a stage FROM registry.redhat.io/rhbk/keycloak-rhel9 , you will notice that microdnf , dnf , and even rpm are not installed. Also, very few packages are available, only enough for a bash shell, and to run Red Hat build of Keycloak itself. This is due to security hardening measures, which reduce the attack surface of the Red Hat build of Keycloak container. 
First, consider if your use case can be implemented in a different way, and so avoid installing new RPMs into the final container: A RUN curl instruction in your Dockerfile can be replaced with ADD , since that instruction natively supports remote URLs. Some common CLI tools can be replaced by creative use of the Linux filesystem. For example, ip addr show tap0 becomes cat /sys/class/net/tap0/address Tasks that need RPMs can be moved to a former stage of an image build, and the results copied across instead. Here is an example. Running update-ca-trust in a former build stage, then copying the result forward: FROM registry.access.redhat.com/ubi9 AS ubi-micro-build COPY mycertificate.crt /etc/pki/ca-trust/source/anchors/mycertificate.crt RUN update-ca-trust FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /etc/pki /etc/pki It is possible to install new RPMs if absolutely required, following this two-stage pattern established by ubi-micro: FROM registry.access.redhat.com/ubi9 AS ubi-micro-build RUN mkdir -p /mnt/rootfs RUN dnf install --installroot /mnt/rootfs <package names go here> --releasever 9 --setopt install_weak_deps=false --nodocs -y && \ dnf --installroot /mnt/rootfs clean all && \ rpm --root /mnt/rootfs -e --nodeps setup FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /mnt/rootfs / This approach uses a chroot, /mnt/rootfs , so that only the packages you specify and their dependencies are installed, and so can be easily copied into the second stage without guesswork. Warning Some packages have a large tree of dependencies. By installing new RPMs you may unintentionally increase the container's attack surface. Check the list of installed packages carefully. 3.1.3. Building the container image To build the actual container image, run the following command from the directory containing your Dockerfile: podman build . -t mykeycloak 3.1.4. Starting the optimized Red Hat build of Keycloak container image To start the image, run: podman run --name mykeycloak -p 8443:8443 \ -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me \ mykeycloak \ start --optimized Red Hat build of Keycloak starts in production mode, using only secured HTTPS communication, and is available on https://localhost:8443 . Health check endpoints are available at https://localhost:8443/health , https://localhost:8443/health/ready and https://localhost:8443/health/live . Opening up https://localhost:8443/metrics leads to a page containing operational metrics that could be used by your monitoring solution. 3.2. Exposing the container to a different port By default, the server is listening for http and https requests using the ports 8080 and 8443 , respectively. If you want to expose the container using a different port, you need to set the hostname-port accordingly: Exposing the container using a port other than the default ports By setting the hostname-port option you can now access the server at https://localhost:3000 . 3.3. Trying Red Hat build of Keycloak in development mode The easiest way to try Red Hat build of Keycloak from a container for development or testing purposes is to use the Development mode. You use the start-dev command: podman run --name mykeycloak -p 8080:8080 \ -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me \ registry.redhat.io/rhbk/keycloak-rhel9:24 \ start-dev Invoking this command starts the Red Hat build of Keycloak server in development mode. 
This mode should be strictly avoided in production environments because it has insecure defaults. For more information about running Red Hat build of Keycloak in production, see Configuring Red Hat build of Keycloak for production . 3.4. Running a standard Red Hat build of Keycloak container In keeping with concepts such as immutable infrastructure, containers need to be re-provisioned routinely. In these environments, you need containers that start fast, therefore you need to create an optimized image as described in the preceding section. However, if your environment has different requirements, you can run a standard Red Hat build of Keycloak image by just running the start command. For example: podman run --name mykeycloak -p 8080:8080 \ -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me \ registry.redhat.io/rhbk/keycloak-rhel9:24 \ start \ --db=postgres --features=token-exchange \ --db-url=<JDBC-URL> --db-username=<DB-USER> --db-password=<DB-PASSWORD> \ --https-key-store-file=<file> --https-key-store-password=<password> Running this command starts a Red Hat build of Keycloak server that detects and applies the build options first. In the example, the line --db=postgres --features=token-exchange sets the database vendor to PostgreSQL and enables the token exchange feature. Red Hat build of Keycloak then starts up and applies the configuration for the specific environment. This approach significantly increases startup time and creates an image that is mutable, which is not the best practice. 3.5. Provide initial admin credentials when running in a container Red Hat build of Keycloak only allows to create the initial admin user from a local network connection. This is not the case when running in a container, so you have to provide the following environment variables when you run the image: # setting the admin username -e KEYCLOAK_ADMIN=<admin-user-name> # setting the initial password -e KEYCLOAK_ADMIN_PASSWORD=change_me 3.6. Importing A Realm On Startup The Red Hat build of Keycloak containers have a directory /opt/keycloak/data/import . If you put one or more import files in that directory via a volume mount or other means and add the startup argument --import-realm , the Red Hat build of Keycloak container will import that data on startup! This may only make sense to do in Dev mode. podman run --name keycloak_unoptimized -p 8080:8080 \ -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me \ -v /path/to/realm/data:/opt/keycloak/data/import \ registry.redhat.io/rhbk/keycloak-rhel9:24 \ start-dev --import-realm Feel free to join the open GitHub Discussion around enhancements of the admin bootstrapping process. 3.7. Specifying different memory settings The Red Hat build of Keycloak container, instead of specifying hardcoded values for the initial and maximum heap size, uses relative values to the total memory of a container. This behavior is achieved by JVM options -XX:MaxRAMPercentage=70 , and -XX:InitialRAMPercentage=50 . The -XX:MaxRAMPercentage option represents the maximum heap size as 70% of the total container memory. The -XX:InitialRAMPercentage option represents the initial heap size as 50% of the total container memory. These values were chosen based on a deeper analysis of Red Hat build of Keycloak memory management. As the heap size is dynamically calculated based on the total container memory, you should always set the memory limit for the container. 
Previously, the maximum heap size was set to 512 MB, and in order to approach similar values, you should set the memory limit to at least 750 MB. For smaller production-ready deployments, the recommended memory limit is 2 GB. The JVM options related to the heap might be overridden by setting the environment variable JAVA_OPTS_KC_HEAP . You can find the default values of the JAVA_OPTS_KC_HEAP in the source code of the kc.sh , or kc.bat script. For example, you can specify the environment variable and memory limit as follows: podman run --name mykeycloak -p 8080:8080 -m 1g \ -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me \ -e JAVA_OPTS_KC_HEAP="-XX:MaxHeapFreeRatio=30 -XX:MaxRAMPercentage=65" \ registry.redhat.io/rhbk/keycloak-rhel9:24 \ start-dev Warning If the memory limit is not set, the memory consumption rapidly increases as the heap size can grow up to 70% of the total container memory. Once the JVM allocates the memory, it is returned to the OS reluctantly with the current Red Hat build of Keycloak GC settings. 3.8. Relevant options Value db 🛠 The database vendor. CLI: --db Env: KC_DB dev-file (default), dev-mem , mariadb , mssql , mysql , oracle , postgres db-password The password of the database user. CLI: --db-password Env: KC_DB_PASSWORD db-url The full database JDBC URL. If not provided, a default URL is set based on the selected database vendor. For instance, if using postgres , the default JDBC URL would be jdbc:postgresql://localhost/keycloak . CLI: --db-url Env: KC_DB_URL db-username The username of the database user. CLI: --db-username Env: KC_DB_USERNAME features 🛠 Enables a set of one or more features. CLI: --features Env: KC_FEATURES account-api[:v1] , account2[:v1] , account3[:v1] , admin-api[:v1] , admin-fine-grained-authz[:v1] , admin2[:v1] , authorization[:v1] , ciba[:v1] , client-policies[:v1] , client-secret-rotation[:v1] , client-types[:v1] , declarative-ui[:v1] , device-flow[:v1] , docker[:v1] , dpop[:v1] , dynamic-scopes[:v1] , fips[:v1] , hostname[:v1] , impersonation[:v1] , js-adapter[:v1] , kerberos[:v1] , linkedin-oauth[:v1] , login2[:v1] , multi-site[:v1] , offline-session-preloading[:v1] , oid4vc-vci[:v1] , par[:v1] , preview , recovery-codes[:v1] , scripts[:v1] , step-up-authentication[:v1] , token-exchange[:v1] , transient-users[:v1] , update-email[:v1] , web-authn[:v1] health-enabled 🛠 If the server should expose health check endpoints. If enabled, health checks are available at the /health , /health/ready and /health/live endpoints. CLI: --health-enabled Env: KC_HEALTH_ENABLED true , false (default) hostname Hostname for the Keycloak server. CLI: --hostname Env: KC_HOSTNAME https-key-store-file The key store which holds the certificate information instead of specifying separate files. CLI: --https-key-store-file Env: KC_HTTPS_KEY_STORE_FILE https-key-store-password The password of the key store file. CLI: --https-key-store-password Env: KC_HTTPS_KEY_STORE_PASSWORD password (default) metrics-enabled 🛠 If the server should expose metrics. If enabled, metrics are available at the /metrics endpoint. CLI: --metrics-enabled Env: KC_METRICS_ENABLED true , false (default) | [
"FROM registry.redhat.io/rhbk/keycloak-rhel9:24 as builder Enable health and metrics support ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true Configure a database vendor ENV KC_DB=postgres WORKDIR /opt/keycloak for demonstration purposes only, please make sure to use proper certificates in production instead RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname \"CN=server\" -alias server -ext \"SAN:c=DNS:localhost,IP:127.0.0.1\" -keystore conf/server.keystore RUN /opt/keycloak/bin/kc.sh build FROM registry.redhat.io/rhbk/keycloak-rhel9:24 COPY --from=builder /opt/keycloak/ /opt/keycloak/ change these values to point to a running postgres instance ENV KC_DB=postgres ENV KC_DB_URL=<DBURL> ENV KC_DB_USERNAME=<DBUSERNAME> ENV KC_DB_PASSWORD=<DBPASSWORD> ENV KC_HOSTNAME=localhost ENTRYPOINT [\"/opt/keycloak/bin/kc.sh\"]",
"A example build step that downloads a JAR file from a URL and adds it to the providers directory FROM registry.redhat.io/rhbk/keycloak-rhel9:24 as builder Add the provider JAR file to the providers directory ADD --chown=keycloak:keycloak --chmod=644 <MY_PROVIDER_JAR_URL> /opt/keycloak/providers/myprovider.jar Context: RUN the build command RUN /opt/keycloak/bin/kc.sh build",
"FROM registry.access.redhat.com/ubi9 AS ubi-micro-build COPY mycertificate.crt /etc/pki/ca-trust/source/anchors/mycertificate.crt RUN update-ca-trust FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /etc/pki /etc/pki",
"FROM registry.access.redhat.com/ubi9 AS ubi-micro-build RUN mkdir -p /mnt/rootfs RUN dnf install --installroot /mnt/rootfs <package names go here> --releasever 9 --setopt install_weak_deps=false --nodocs -y && dnf --installroot /mnt/rootfs clean all && rpm --root /mnt/rootfs -e --nodeps setup FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /mnt/rootfs /",
"build . -t mykeycloak",
"run --name mykeycloak -p 8443:8443 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me mykeycloak start --optimized",
"run --name mykeycloak -p 3000:8443 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me mykeycloak start --optimized --hostname-port=3000",
"run --name mykeycloak -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me registry.redhat.io/rhbk/keycloak-rhel9:24 start-dev",
"run --name mykeycloak -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me registry.redhat.io/rhbk/keycloak-rhel9:24 start --db=postgres --features=token-exchange --db-url=<JDBC-URL> --db-username=<DB-USER> --db-password=<DB-PASSWORD> --https-key-store-file=<file> --https-key-store-password=<password>",
"setting the admin username -e KEYCLOAK_ADMIN=<admin-user-name> setting the initial password -e KEYCLOAK_ADMIN_PASSWORD=change_me",
"run --name keycloak_unoptimized -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me -v /path/to/realm/data:/opt/keycloak/data/import registry.redhat.io/rhbk/keycloak-rhel9:24 start-dev --import-realm",
"run --name mykeycloak -p 8080:8080 -m 1g -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me -e JAVA_OPTS_KC_HEAP=\"-XX:MaxHeapFreeRatio=30 -XX:MaxRAMPercentage=65\" registry.redhat.io/rhbk/keycloak-rhel9:24 start-dev"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_guide/containers- |
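A quick smoke test of a container started as shown in this chapter; the ports and endpoints are the ones documented above, and --insecure is only acceptable here because the sample keystore is self-signed.

    curl --insecure https://localhost:8443/health/ready
    curl --insecure https://localhost:8443/health/live
    curl --insecure https://localhost:8443/metrics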
Chapter 6. Managing image streams Image streams provide a means of creating and updating container images in an ongoing way. As improvements are made to an image, tags can be used to assign new version numbers and keep track of changes. This document describes how image streams are managed. 6.1. Why use imagestreams An image stream and its associated tags provide an abstraction for referencing container images from within OpenShift Container Platform. The image stream and its tags allow you to see what images are available and ensure that you are using the specific image you need even if the image in the repository changes. Image streams do not contain actual image data, but present a single virtual view of related images, similar to an image repository. You can configure builds and deployments to watch an image stream for notifications when new images are added and react by performing a build or deployment, respectively. For example, if a deployment is using a certain image and a new version of that image is created, a deployment could be automatically performed to pick up the new version of the image. However, if the image stream tag used by the deployment or build is not updated, then even if the container image in the container image registry is updated, the build or deployment continues using the previous, presumably known-good image. The source images can be stored in any of the following: OpenShift Container Platform's integrated registry. An external registry, for example registry.redhat.io or quay.io. Other image streams in the OpenShift Container Platform cluster. When you define an object that references an image stream tag, such as a build or deployment configuration, you point to an image stream tag and not the repository. When you build or deploy your application, OpenShift Container Platform queries the repository using the image stream tag to locate the associated ID of the image and uses that exact image. The image stream metadata is stored in the etcd instance along with other cluster information. Using image streams has several significant benefits: You can tag, rollback a tag, and quickly deal with images, without having to re-push using the command line. You can trigger builds and deployments when a new image is pushed to the registry. Also, OpenShift Container Platform has generic triggers for other resources, such as Kubernetes objects. You can mark a tag for periodic re-import. If the source image has changed, that change is picked up and reflected in the image stream, which triggers the build or deployment flow, depending upon the build or deployment configuration. You can share images using fine-grained access control and quickly distribute images across your teams. If the source image changes, the image stream tag still points to a known-good version of the image, ensuring that your application does not break unexpectedly. You can configure security around who can view and use the images through permissions on the image stream objects. Users that lack permission to read or list images on the cluster level can still retrieve the images tagged in a project using image streams. 6.2. Configuring image streams An ImageStream object file contains the following elements. 
Imagestream object definition apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/generated-by: OpenShiftNewApp labels: app: ruby-sample-build template: application-template-stibuild name: origin-ruby-sample 1 namespace: test spec: {} status: dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample 2 tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 3 generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 4 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest 5 1 The name of the image stream. 2 Docker repository path where new images can be pushed to add or update them in this image stream. 3 The SHA identifier that this image stream tag currently references. Resources that reference this image stream tag use this identifier. 4 The SHA identifier that this image stream tag previously referenced. Can be used to rollback to an older image. 5 The image stream tag name. 6.3. Image stream images An image stream image points from within an image stream to a particular image ID. Image stream images allow you to retrieve metadata about an image from a particular image stream where it is tagged. Image stream image objects are automatically created in OpenShift Container Platform whenever you import or tag an image into the image stream. You should never have to explicitly define an image stream image object in any image stream definition that you use to create image streams. The image stream image consists of the image stream name and image ID from the repository, delimited by an @ sign: To refer to the image in the ImageStream object example, the image stream image looks like: 6.4. Image stream tags An image stream tag is a named pointer to an image in an image stream. It is abbreviated as istag . An image stream tag is used to reference or retrieve an image for a given image stream and tag. Image stream tags can reference any local or externally managed image. It contains a history of images represented as a stack of all images the tag ever pointed to. Whenever a new or existing image is tagged under particular image stream tag, it is placed at the first position in the history stack. The image previously occupying the top position is available at the second position. This allows for easy rollbacks to make tags point to historical images again. The following image stream tag is from an ImageStream object: Image stream tag with two images in its history tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest Image stream tags can be permanent tags or tracking tags. Permanent tags are version-specific tags that point to a particular version of an image, such as Python 3.5. 
Tracking tags are reference tags that follow another image stream tag and can be updated to change which image they follow, like a symlink. These new levels are not guaranteed to be backwards-compatible. For example, the latest image stream tags that ship with OpenShift Container Platform are tracking tags. This means consumers of the latest image stream tag are updated to the newest level of the framework provided by the image when a new level becomes available. A latest image stream tag to v3.10 can be changed to v3.11 at any time. It is important to be aware that these latest image stream tags behave differently than the Docker latest tag. The latest image stream tag, in this case, does not point to the latest image in the Docker repository. It points to another image stream tag, which might not be the latest version of an image. For example, if the latest image stream tag points to v3.10 of an image, when the 3.11 version is released, the latest tag is not automatically updated to v3.11 , and remains at v3.10 until it is manually updated to point to a v3.11 image stream tag. Note Tracking tags are limited to a single image stream and cannot reference other image streams. You can create your own image stream tags for your own needs. The image stream tag is composed of the name of the image stream and a tag, separated by a colon: For example, to refer to the sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d image in the ImageStream object example earlier, the image stream tag would be: 6.5. Image stream change triggers Image stream triggers allow your builds and deployments to be automatically invoked when a new version of an upstream image is available. For example, builds and deployments can be automatically started when an image stream tag is modified. This is achieved by monitoring that particular image stream tag and notifying the build or deployment when a change is detected. 6.6. Image stream mapping When the integrated registry receives a new image, it creates and sends an image stream mapping to OpenShift Container Platform, providing the image's project, name, tag, and image metadata. Note Configuring image stream mappings is an advanced feature. This information is used to create a new image, if it does not already exist, and to tag the image into the image stream. OpenShift Container Platform stores complete metadata about each image, such as commands, entry point, and environment variables. Images in OpenShift Container Platform are immutable and the maximum name length is 63 characters. 
The following image stream mapping example results in an image being tagged as test/origin-ruby-sample:latest : Image stream mapping object definition apiVersion: image.openshift.io/v1 kind: ImageStreamMapping metadata: creationTimestamp: null name: origin-ruby-sample namespace: test tag: latest image: dockerImageLayers: - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ee1dd2cb6df21971f4af6de0f1d7782b81fb63156801cfde2bb47b4247c23c29 size: 196634330 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ca062656bff07f18bff46be00f40cfbb069687ec124ac0aa038fd676cfaea092 size: 177723024 - name: sha256:63d529c59c92843c395befd065de516ee9ed4995549f8218eac6ff088bfa6b6e size: 55679776 - name: sha256:92114219a04977b5563d7dff71ec4caa3a37a15b266ce42ee8f43dba9798c966 size: 11939149 dockerImageMetadata: Architecture: amd64 Config: Cmd: - /usr/libexec/s2i/run Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. /opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Labels: build-date: 2015-12-23 io.k8s.description: Platform for building and running Ruby 2.2 applications io.k8s.display-name: 172.30.56.218:5000/test/origin-ruby-sample:latest io.openshift.build.commit.author: Ben Parees <[email protected]> io.openshift.build.commit.date: Wed Jan 20 10:14:27 2016 -0500 io.openshift.build.commit.id: 00cadc392d39d5ef9117cbc8a31db0889eedd442 io.openshift.build.commit.message: 'Merge pull request #51 from php-coder/fix_url_and_sti' io.openshift.build.commit.ref: master io.openshift.build.image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e io.openshift.build.source-location: https://github.com/openshift/ruby-hello-world.git io.openshift.builder-base-version: 8d95148 io.openshift.builder-version: 8847438ba06307f86ac877465eadc835201241df io.openshift.s2i.scripts-url: image:///usr/libexec/s2i io.openshift.tags: builder,ruby,ruby22 io.s2i.scripts-url: image:///usr/libexec/s2i license: GPLv2 name: CentOS Base Image vendor: CentOS User: "1001" WorkingDir: /opt/app-root/src Container: 86e9a4a3c760271671ab913616c51c9f3cea846ca524bf07c04a6f6c9e103a76 ContainerConfig: AttachStdout: true Cmd: - /bin/sh - -c - tar -C /tmp -xf - && /usr/libexec/s2i/assemble Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Hostname: ruby-sample-build-1-build Image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e OpenStdin: true StdinOnce: true User: "1001" WorkingDir: /opt/app-root/src Created: 2016-01-29T13:40:00Z DockerVersion: 1.8.2.fc21 Id: 9d7fd5e2d15495802028c569d544329f4286dcd1c9c085ff5699218dbaa69b43 Parent: 57b08d979c86f4500dc8cad639c9518744c8dd39447c055a3517dc9c18d6fccd Size: 441976279 apiVersion: "1.0" kind: DockerImage dockerImageMetadataVersion: "1.0" dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 6.7. Working with image streams The following sections describe how to use image streams and image stream tags. 6.7.1. Getting information about image streams You can get general information about the image stream and detailed information about all the tags it is pointing to. Procedure Get general information about the image stream and detailed information about all the tags it is pointing to: USD oc describe is/<image-name> For example: USD oc describe is/python Example output Name: python Namespace: default Created: About a minute ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 1 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago Get all the information available about particular image stream tag: USD oc describe istag/<image-stream>:<tag-name> For example: USD oc describe istag/python:latest Example output Image Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Docker Image: centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Created: 2 minutes ago Image Size: 251.2 MB (first layer 2.898 MB, last binary layer 72.26 MB) Image Created: 2 weeks ago Author: <none> Arch: amd64 Entrypoint: container-entrypoint Command: /bin/sh -c USDSTI_SCRIPTS_PATH/usage Working Dir: /opt/app-root/src User: 1001 Exposes Ports: 8080/tcp Docker Labels: build-date=20170801 Note More information is output than shown. 6.7.2. Adding tags to an image stream You can add additional tags to image streams. Procedure Add a tag that points to one of the existing tags by using the `oc tag`command: USD oc tag <image-name:tag1> <image-name:tag2> For example: USD oc tag python:3.5 python:latest Example output Tag python:latest set to python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25. Confirm the image stream has two tags, one, 3.5 , pointing at the external container image and another tag, latest , pointing to the same image because it was created based on the first tag. 
USD oc describe is/python Example output Name: python Namespace: default Created: 5 minutes ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 2 latest tagged from python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 5 minutes ago 6.7.3. Adding tags for an external image You can add tags for external images. Procedure Add tags pointing to internal or external images by using the oc tag command for all tag-related operations: USD oc tag <repository/image> <image-name:tag> For example, this command maps the docker.io/python:3.6.0 image to the 3.6 tag in the python image stream. USD oc tag docker.io/python:3.6.0 python:3.6 Example output Tag python:3.6 set to docker.io/python:3.6.0. If the external image is secured, you must create a secret with credentials for accessing that registry. 6.7.4. Updating image stream tags You can update a tag to reflect another tag in an image stream. Procedure Update a tag: USD oc tag <image-name:tag> <image-name:latest> For example, the following updates the latest tag to reflect the 3.6 tag in an image stream: USD oc tag python:3.6 python:latest Example output Tag python:latest set to python@sha256:438208801c4806548460b27bd1fbcb7bb188273d13871ab43f. 6.7.5. Removing image stream tags You can remove old tags from an image stream. Procedure Remove old tags from an image stream: USD oc tag -d <image-name:tag> For example: USD oc tag -d python:3.5 Example output Deleted tag default/python:3.5. See Removing deprecated image stream tags from the Cluster Samples Operator for more information on how the Cluster Samples Operator handles deprecated image stream tags. 6.7.6. Configuring periodic importing of image stream tags When working with an external container image registry, to periodically re-import an image, for example to get the latest security updates, you can use the --scheduled flag. Procedure Schedule importing images: USD oc tag <repository/image> <image-name:tag> --scheduled For example: USD oc tag docker.io/python:3.6.0 python:3.6 --scheduled Example output Tag python:3.6 set to import docker.io/python:3.6.0 periodically. This command causes OpenShift Container Platform to periodically update this particular image stream tag. This period is a cluster-wide setting set to 15 minutes by default. To remove the periodic check, re-run the above command but omit the --scheduled flag. This resets the behavior to the default. USD oc tag <repository/image> <image-name:tag> 6.8. Importing images and image streams from private registries An image stream can be configured to import tag and image metadata from private image registries requiring authentication. This procedure applies if you change the registry that the Cluster Samples Operator uses to pull content from to something other than registry.redhat.io . Note When importing from insecure or secure registries, the registry URL defined in the secret must include the :80 port suffix or the secret is not used when attempting to import from the registry.
Procedure You must create a secret object that is used to store your credentials by entering the following command: USD oc create secret generic <secret_name> --from-file=.dockerconfigjson=<file_absolute_path> --type=kubernetes.io/dockerconfigjson After the secret is configured, create the new image stream or enter the oc import-image command: USD oc import-image <imagestreamtag> --from=<image> --confirm During the import process, OpenShift Container Platform picks up the secrets and provides them to the remote party. 6.8.1. Allowing pods to reference images from other secured registries The .dockercfg USDHOME/.docker/config.json file for Docker clients is a Docker credentials file that stores your authentication information if you have previously logged into a secured or insecure registry. To pull a secured container image that is not from OpenShift image registry, you must create a pull secret from your Docker credentials and add it to your service account. The Docker credentials file and the associated pull secret can contain multiple references to the same registry, each with its own set of credentials. Example config.json file { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io/repository-main":{ "auth":"b3Blb=", "email":"[email protected]" } } } Example pull secret apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: "2021-09-09T19:10:11Z" name: pull-secret namespace: default resourceVersion: "37676" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque Procedure If you already have a .dockercfg file for the secured registry, you can create a secret from that file by running: USD oc create secret generic <pull_secret_name> \ --from-file=.dockercfg=<path/to/.dockercfg> \ --type=kubernetes.io/dockercfg Or if you have a USDHOME/.docker/config.json file: USD oc create secret generic <pull_secret_name> \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson If you do not already have a Docker credentials file for the secured registry, you can create a secret by running: USD oc create secret docker-registry <pull_secret_name> \ --docker-server=<registry_server> \ --docker-username=<user_name> \ --docker-password=<password> \ --docker-email=<email> To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example should match the name of the service account the pod uses. The default service account is default : USD oc secrets link default <pull_secret_name> --for=pull | [
"apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/generated-by: OpenShiftNewApp labels: app: ruby-sample-build template: application-template-stibuild name: origin-ruby-sample 1 namespace: test spec: {} status: dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample 2 tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 3 generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 4 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest 5",
"<image-stream-name>@<image-id>",
"origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d",
"tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest",
"<imagestream name>:<tag>",
"origin-ruby-sample:latest",
"apiVersion: image.openshift.io/v1 kind: ImageStreamMapping metadata: creationTimestamp: null name: origin-ruby-sample namespace: test tag: latest image: dockerImageLayers: - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ee1dd2cb6df21971f4af6de0f1d7782b81fb63156801cfde2bb47b4247c23c29 size: 196634330 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ca062656bff07f18bff46be00f40cfbb069687ec124ac0aa038fd676cfaea092 size: 177723024 - name: sha256:63d529c59c92843c395befd065de516ee9ed4995549f8218eac6ff088bfa6b6e size: 55679776 - name: sha256:92114219a04977b5563d7dff71ec4caa3a37a15b266ce42ee8f43dba9798c966 size: 11939149 dockerImageMetadata: Architecture: amd64 Config: Cmd: - /usr/libexec/s2i/run Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. /opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Labels: build-date: 2015-12-23 io.k8s.description: Platform for building and running Ruby 2.2 applications io.k8s.display-name: 172.30.56.218:5000/test/origin-ruby-sample:latest io.openshift.build.commit.author: Ben Parees <[email protected]> io.openshift.build.commit.date: Wed Jan 20 10:14:27 2016 -0500 io.openshift.build.commit.id: 00cadc392d39d5ef9117cbc8a31db0889eedd442 io.openshift.build.commit.message: 'Merge pull request #51 from php-coder/fix_url_and_sti' io.openshift.build.commit.ref: master io.openshift.build.image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e io.openshift.build.source-location: https://github.com/openshift/ruby-hello-world.git io.openshift.builder-base-version: 8d95148 io.openshift.builder-version: 8847438ba06307f86ac877465eadc835201241df io.openshift.s2i.scripts-url: image:///usr/libexec/s2i io.openshift.tags: builder,ruby,ruby22 io.s2i.scripts-url: image:///usr/libexec/s2i license: GPLv2 name: CentOS Base Image vendor: CentOS User: \"1001\" WorkingDir: /opt/app-root/src Container: 86e9a4a3c760271671ab913616c51c9f3cea846ca524bf07c04a6f6c9e103a76 ContainerConfig: AttachStdout: true Cmd: - /bin/sh - -c - tar -C /tmp -xf - && /usr/libexec/s2i/assemble Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Hostname: ruby-sample-build-1-build Image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e OpenStdin: true StdinOnce: true User: \"1001\" WorkingDir: /opt/app-root/src Created: 2016-01-29T13:40:00Z DockerVersion: 1.8.2.fc21 Id: 9d7fd5e2d15495802028c569d544329f4286dcd1c9c085ff5699218dbaa69b43 Parent: 57b08d979c86f4500dc8cad639c9518744c8dd39447c055a3517dc9c18d6fccd Size: 441976279 apiVersion: \"1.0\" kind: DockerImage dockerImageMetadataVersion: \"1.0\" dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d",
"oc describe is/<image-name>",
"oc describe is/python",
"Name: python Namespace: default Created: About a minute ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 1 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago",
"oc describe istag/<image-stream>:<tag-name>",
"oc describe istag/python:latest",
"Image Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Docker Image: centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Created: 2 minutes ago Image Size: 251.2 MB (first layer 2.898 MB, last binary layer 72.26 MB) Image Created: 2 weeks ago Author: <none> Arch: amd64 Entrypoint: container-entrypoint Command: /bin/sh -c USDSTI_SCRIPTS_PATH/usage Working Dir: /opt/app-root/src User: 1001 Exposes Ports: 8080/tcp Docker Labels: build-date=20170801",
"oc tag <image-name:tag1> <image-name:tag2>",
"oc tag python:3.5 python:latest",
"Tag python:latest set to python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25.",
"oc describe is/python",
"Name: python Namespace: default Created: 5 minutes ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 2 latest tagged from python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 5 minutes ago",
"oc tag <repository/image> <image-name:tag>",
"oc tag docker.io/python:3.6.0 python:3.6",
"Tag python:3.6 set to docker.io/python:3.6.0.",
"oc tag <image-name:tag> <image-name:latest>",
"oc tag python:3.6 python:latest",
"Tag python:latest set to python@sha256:438208801c4806548460b27bd1fbcb7bb188273d13871ab43f.",
"oc tag -d <image-name:tag>",
"oc tag -d python:3.5",
"Deleted tag default/python:3.5.",
"oc tag <repository/image> <image-name:tag> --scheduled",
"oc tag docker.io/python:3.6.0 python:3.6 --scheduled",
"Tag python:3.6 set to import docker.io/python:3.6.0 periodically.",
"oc tag <repositiory/image> <image-name:tag>",
"oc create secret generic <secret_name> --from-file=.dockerconfigjson=<file_absolute_path> --type=kubernetes.io/dockerconfigjson",
"oc import-image <imagestreamtag> --from=<image> --confirm",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io/repository-main\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: \"2021-09-09T19:10:11Z\" name: pull-secret namespace: default resourceVersion: \"37676\" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque",
"oc create secret generic <pull_secret_name> --from-file=.dockercfg=<path/to/.dockercfg> --type=kubernetes.io/dockercfg",
"oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>",
"oc secrets link default <pull_secret_name> --for=pull"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/images/managing-image-streams |
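Section 6.5 above describes image stream change triggers but does not include an example. The following is a minimal sketch, not taken from the source document, of attaching an image change trigger so that a workload redeploys whenever the test/origin-ruby-sample:latest image stream tag from the earlier example is updated. The deployment name ruby-sample and the container name ruby-helloworld are assumptions made for illustration only.

# Assumed workload names; substitute your own deployment and container.
USD oc set triggers deployment/ruby-sample --from-image=test/origin-ruby-sample:latest --containers=ruby-helloworld
# Verify which triggers are now configured on the workload.
USD oc set triggers deployment/ruby-sample

Builds can be wired up the same way; a BuildConfig created by oc new-app normally already carries an ImageChange trigger on its builder image.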
8.130. ltrace | 8.130. ltrace 8.130.1. RHBA-2014:1604 - ltrace bug fix update Updated ltrace packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The ltrace utility is a debugging program that runs a specified command until the command exits. While the command is executing, ltrace intercepts and records both the dynamic library calls called by the executed process and the signals received by the executed process. The ltrace utility can also intercept and print system calls executed by the process. Bug Fixes BZ# 868280 Previously, the ltrace utility did not support the Position Independent Executables (PIE) binaries, which are linked similarly to shared libraries, and processes. Consequently, addresses found in images of those binaries needed additional adjustment for the actual address where the binary was loaded during the process startup. With this update, the support for the PIE binaries and processes has been added and ltrace now handles the additional processing for the PIE binaries correctly. BZ# 891607 When copying internal structures after cloning a process, the ltrace utility did not copy a string containing a path to an executable properly. This behavior led to errors in heap management and could cause ltrace to terminate unexpectedly. The underlying source code has been modified and ltrace now copies memory when cloning traced processes correctly. Users of ltrace are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/ltrace |
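A brief usage sketch for the utility described in the erratum above; the program name ./myprog and the output file name are placeholders, not values taken from the erratum.

# Trace library calls, follow child processes (-f), include system calls (-S), and write the trace to a file.
USD ltrace -f -S -o ltrace.out ./myprog
# Print a summary table of call counts and times instead of the full trace.
USD ltrace -c ./myprog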
16.15. Disk Partitioning Setup | 16.15. Disk Partitioning Setup Warning It is always a good idea to back up any data that you have on your systems. For example, if you are upgrading or creating a dual-boot system, you should back up any data you wish to keep on your storage devices. Mistakes do happen and can result in the loss of all your data. Important If you install Red Hat Enterprise Linux in text mode, you can only use the default partitioning schemes described in this section. You cannot add or remove partitions or file systems beyond those that the installer automatically adds or removes. If you require a customized layout at installation time, you should perform a graphical installation over a VNC connection or a kickstart installation. Furthermore, advanced options such as LVM, encrypted filesystems, and resizable filesystems are available only in graphical mode and kickstart. Important If you have a RAID card, be aware that some BIOS types do not support booting from the RAID card. In cases such as these, the /boot/ partition must be created on a partition outside of the RAID array, such as on a separate hard drive. An internal hard drive is necessary to use for partition creation with problematic RAID cards. A /boot/ partition is also necessary for software RAID setups. If you have chosen to automatically partition your system, you should select Review and manually edit your /boot/ partition. Partitioning allows you to divide your hard drive into isolated sections, where each section behaves as its own hard drive. Partitioning is particularly useful if you run multiple operating systems. If you are not sure how you want your system to be partitioned, read Appendix A, An Introduction to Disk Partitions for more information. Figure 16.36. Disk Partitioning Setup On this screen you can choose to create the default partition layout in one of four different ways, or choose to partition storage devices manually to create a custom layout. The first four options allow you to perform an automated installation without having to partition your storage devices yourself. If you do not feel comfortable with partitioning your system, choose one of these options and let the installation program partition the storage devices for you. Depending on the option that you choose, you can still control what data (if any) is removed from the system. Your options are: Use All Space Select this option to remove all partitions on your hard drives (this includes partitions created by other operating systems such as Windows VFAT or NTFS partitions). Warning If you select this option, all data on the selected hard drives is removed by the installation program. Do not select this option if you have information that you want to keep on the hard drives where you are installing Red Hat Enterprise Linux. In particular, do not select this option when you configure a system to chain load the Red Hat Enterprise Linux boot loader from another boot loader. Replace Existing Linux System(s) Select this option to remove only partitions created by a Linux installation. This does not remove other partitions you may have on your hard drives (such as VFAT or FAT32 partitions). Shrink Current System Select this option to resize your current data and partitions manually and install a default Red Hat Enterprise Linux layout in the space that is freed. Warning If you shrink partitions on which other operating systems are installed, you might not be able to use those operating systems. 
Although this partitioning option does not destroy data, operating systems typically require some free space in their partitions. Before you resize a partition that holds an operating system that you might want to use again, find out how much space you need to leave free. Use Free Space Select this option to retain your current data and partitions and install Red Hat Enterprise Linux in the unused space available on the storage drives. Ensure that there is sufficient space available on the storage drives before you select this option - refer to Section 11.6, "Do You Have Enough Disk Space?" . Create Custom Layout Select this option to partition storage devices manually and create customized layouts. Refer to Section 16.17, " Creating a Custom Layout or Modifying the Default Layout " for more information. Choose your preferred partitioning method by clicking the radio button to the left of its description in the dialog box. Select Encrypt system to encrypt all partitions except the /boot partition. Refer to Appendix C, Disk Encryption for information on encryption. To review and make any necessary changes to the partitions created by automatic partitioning, select the Review option. After selecting Review and clicking Next to move forward, the partitions created for you by anaconda appear. You can make modifications to these partitions if they do not meet your needs. Important To configure the Red Hat Enterprise Linux boot loader to chain load from a different boot loader, you must specify the boot drive manually. If you chose any of the automatic partitioning options, you must now select the Review and modify partitioning layout option before you click Next or you cannot specify the correct boot drive. Important When you install Red Hat Enterprise Linux 6 on a system with multipath and non-multipath storage devices, the automatic partitioning layout in the installer might create volume groups that contain a mix of multipath and non-multipath devices. This defeats the purpose of multipath storage. We advise that you select only multipath or only non-multipath devices on the disk selection screen that appears after selecting automatic partitioning. Alternatively, select custom partitioning. Click Next once you have made your selections to proceed. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-diskpartsetup-ppc |
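The section above notes that customized layouts in text mode require a kickstart installation. The following is a minimal kickstart partitioning sketch, not taken from the source document; the sizes, volume group name, and logical volume names are assumptions for illustration only and should be adapted to the target system.

# Wipe existing partition tables on the selected drives and create a basic LVM layout.
clearpart --all --initlabel
part /boot --fstype=ext4 --size=500
part pv.01 --size=1 --grow
volgroup vg_system pv.01
logvol / --vgname=vg_system --name=lv_root --size=8192
logvol swap --vgname=vg_system --name=lv_swap --size=2048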