Dataset fields:
title: string, length 4 to 168
content: string, length 7 to 1.74M
commands: list, length 1 to 5.62k
url: string, length 79 to 342
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/getting_started_with_fuse_on_apache_karaf/making-open-source-more-inclusive
Chapter 90. Ref
Chapter 90. Ref The Ref Expression Language is simply a way to look up a custom Expression or Predicate from the Registry. This is particularly useful in XML DSLs. 90.1. Ref Language options The Ref language supports 1 option, which is listed below. Name Default Java Type Description trim true Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 90.2. Example usage The Splitter EIP in XML DSL can utilize a custom expression using <ref> like: <bean id="myExpression" class="com.mycompany.MyCustomExpression"/> <route> <from uri="seda:a"/> <split> <ref>myExpression</ref> <to uri="mock:b"/> </split> </route> In this case, the message coming from the seda:a endpoint will be split using a custom Expression that has the id myExpression in the Registry. The same example using the Java DSL: from("seda:a").split().ref("myExpression").to("seda:b"); 90.3. Dependencies The Ref language is part of camel-core. 90.4. Spring Boot Auto-Configuration When using ref with Spring Boot, make sure to use the following Maven dependency to have support for auto-configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency> The component supports 147 options, which are listed below (an illustrative application.properties sketch follows the table). Name Description Default Type camel.cloud.consul.service-discovery.acl-token Sets the ACL token to be used with Consul. String camel.cloud.consul.service-discovery.block-seconds The seconds to wait for a watch event, default 10 seconds. 10 Integer camel.cloud.consul.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.consul.service-discovery.connect-timeout-millis Connect timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.datacenter The data center. String camel.cloud.consul.service-discovery.enabled Enable the component. true Boolean camel.cloud.consul.service-discovery.password Sets the password to be used for basic authentication. String camel.cloud.consul.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.consul.service-discovery.read-timeout-millis Read timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.url The Consul agent URL. String camel.cloud.consul.service-discovery.user-name Sets the username to be used for basic authentication. String camel.cloud.consul.service-discovery.write-timeout-millis Write timeout for OkHttpClient. Long camel.cloud.dns.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.dns.service-discovery.domain The domain name;. String camel.cloud.dns.service-discovery.enabled Enable the component. true Boolean camel.cloud.dns.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.dns.service-discovery.proto The transport protocol of the desired service. _tcp String camel.cloud.etcd.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.etcd.service-discovery.enabled Enable the component. true Boolean camel.cloud.etcd.service-discovery.password The password to use for basic authentication.
String camel.cloud.etcd.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.etcd.service-discovery.service-path The path to look for for service discovery. /services/ String camel.cloud.etcd.service-discovery.timeout To set the maximum time an action could take to complete. Long camel.cloud.etcd.service-discovery.type To set the discovery type, valid values are on-demand and watch. on-demand String camel.cloud.etcd.service-discovery.uris The URIs the client can connect to. String camel.cloud.etcd.service-discovery.user-name The user name to use for basic authentication. String camel.cloud.kubernetes.service-discovery.api-version Sets the API version when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-data Sets the Certificate Authority data when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-file Sets the Certificate Authority data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-data Sets the Client Certificate data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-file Sets the Client Certificate data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-algo Sets the Client Keystore algorithm, such as RSA when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-data Sets the Client Keystore data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-file Sets the Client Keystore data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-passphrase Sets the Client Keystore passphrase when using client lookup. String camel.cloud.kubernetes.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.kubernetes.service-discovery.dns-domain Sets the DNS domain to use for DNS lookup. String camel.cloud.kubernetes.service-discovery.enabled Enable the component. true Boolean camel.cloud.kubernetes.service-discovery.lookup How to perform service lookup. Possible values: client, dns, environment. When using client, then the client queries the kubernetes master to obtain a list of active pods that provides the service, and then random (or round robin) select a pod. When using dns the service name is resolved as name.namespace.svc.dnsDomain. When using dnssrv the service name is resolved with SRV query for . ... svc... When using environment then environment variables are used to lookup the service. By default environment is used. environment String camel.cloud.kubernetes.service-discovery.master-url Sets the URL to the master when using client lookup. String camel.cloud.kubernetes.service-discovery.namespace Sets the namespace to use. Will by default use namespace from the ENV variable KUBERNETES_MASTER. String camel.cloud.kubernetes.service-discovery.oauth-token Sets the OAUTH token for authentication (instead of username/password) when using client lookup. String camel.cloud.kubernetes.service-discovery.password Sets the password for authentication when using client lookup. String camel.cloud.kubernetes.service-discovery.port-name Sets the Port Name to use for DNS/DNSSRV lookup. 
String camel.cloud.kubernetes.service-discovery.port-protocol Sets the Port Protocol to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.kubernetes.service-discovery.trust-certs Sets whether to turn on trust certificate check when using client lookup. false Boolean camel.cloud.kubernetes.service-discovery.username Sets the username for authentication when using client lookup. String camel.cloud.ribbon.load-balancer.client-name Sets the Ribbon client name. String camel.cloud.ribbon.load-balancer.configurations Define additional configuration definitions. Map camel.cloud.ribbon.load-balancer.enabled Enable the component. true Boolean camel.cloud.ribbon.load-balancer.namespace The namespace. String camel.cloud.ribbon.load-balancer.password The password. String camel.cloud.ribbon.load-balancer.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.ribbon.load-balancer.username The username. String camel.hystrix.allow-maximum-size-to-diverge-from-core-size Allows the configuration for maximumSize to take effect. That value can then be equal to, or higher, than coreSize. false Boolean camel.hystrix.circuit-breaker-enabled Whether to use a HystrixCircuitBreaker or not. If false no circuit-breaker logic will be used and all requests permitted. This is similar in effect to circuitBreakerForceClosed() except that continues tracking metrics and knowing whether it should be open/closed, this property results in not even instantiating a circuit-breaker. true Boolean camel.hystrix.circuit-breaker-error-threshold-percentage Error percentage threshold (as whole number such as 50) at which point the circuit breaker will trip open and reject requests. It will stay tripped for the duration defined in circuitBreakerSleepWindowInMilliseconds; The error percentage this is compared against comes from HystrixCommandMetrics.getHealthCounts(). 50 Integer camel.hystrix.circuit-breaker-force-closed If true the HystrixCircuitBreaker#allowRequest() will always return true to allow requests regardless of the error percentage from HystrixCommandMetrics.getHealthCounts(). The circuitBreakerForceOpen() property takes precedence so if it set to true this property does nothing. false Boolean camel.hystrix.circuit-breaker-force-open If true the HystrixCircuitBreaker.allowRequest() will always return false, causing the circuit to be open (tripped) and reject all requests. This property takes precedence over circuitBreakerForceClosed();. false Boolean camel.hystrix.circuit-breaker-request-volume-threshold Minimum number of requests in the metricsRollingStatisticalWindowInMilliseconds() that must exist before the HystrixCircuitBreaker will trip. If below this number the circuit will not trip regardless of error percentage. 20 Integer camel.hystrix.circuit-breaker-sleep-window-in-milliseconds The time in milliseconds after a HystrixCircuitBreaker trips open that it should wait before trying requests again. 5000 Integer camel.hystrix.configurations Define additional configuration definitions. 
Map camel.hystrix.core-pool-size Core thread-pool size that gets passed to java.util.concurrent.ThreadPoolExecutor#setCorePoolSize(int). 10 Integer camel.hystrix.enabled Enable the component. true Boolean camel.hystrix.execution-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.run(). Requests beyond the concurrent limit will be rejected. Applicable only when executionIsolationStrategy == SEMAPHORE. 20 Integer camel.hystrix.execution-isolation-strategy What isolation strategy HystrixCommand.run() will be executed with. If THREAD then it will be executed on a separate thread and concurrent requests limited by the number of threads in the thread-pool. If SEMAPHORE then it will be executed on the calling thread and concurrent requests limited by the semaphore count. THREAD String camel.hystrix.execution-isolation-thread-interrupt-on-timeout Whether the execution thread should attempt an interrupt (using Future#cancel ) when a thread times out. Applicable only when executionIsolationStrategy() == THREAD. true Boolean camel.hystrix.execution-timeout-enabled Whether the timeout mechanism is enabled for this command. true Boolean camel.hystrix.execution-timeout-in-milliseconds Time in milliseconds at which point the command will timeout and halt execution. If executionIsolationThreadInterruptOnTimeout == true and the command is thread-isolated, the executing thread will be interrupted. If the command is semaphore-isolated and a HystrixObservableCommand, that command will get unsubscribed. 1000 Integer camel.hystrix.fallback-enabled Whether HystrixCommand.getFallback() should be attempted when failure occurs. true Boolean camel.hystrix.fallback-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.getFallback(). Requests beyond the concurrent limit will fail-fast and not attempt retrieving a fallback. 10 Integer camel.hystrix.group-key Sets the group key to use. The default value is CamelHystrix. CamelHystrix String camel.hystrix.keep-alive-time Keep-alive time in minutes that gets passed to ThreadPoolExecutor#setKeepAliveTime(long,TimeUnit). 1 Integer camel.hystrix.max-queue-size Max queue size that gets passed to BlockingQueue in HystrixConcurrencyStrategy.getBlockingQueue(int) This should only affect the instantiation of a threadpool - it is not eliglible to change a queue size on the fly. For that, use queueSizeRejectionThreshold(). -1 Integer camel.hystrix.maximum-size Maximum thread-pool size that gets passed to ThreadPoolExecutor#setMaximumPoolSize(int) . This is the maximum amount of concurrency that can be supported without starting to reject HystrixCommands. Please note that this setting only takes effect if you also set allowMaximumSizeToDivergeFromCoreSize. 10 Integer camel.hystrix.metrics-health-snapshot-interval-in-milliseconds Time in milliseconds to wait between allowing health snapshots to be taken that calculate success and error percentages and affect HystrixCircuitBreaker.isOpen() status. On high-volume circuits the continual calculation of error percentage can become CPU intensive thus this controls how often it is calculated. 500 Integer camel.hystrix.metrics-rolling-percentile-bucket-size Maximum number of values stored in each bucket of the rolling percentile. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 
10 Integer camel.hystrix.metrics-rolling-percentile-enabled Whether percentile metrics should be captured using HystrixRollingPercentile inside HystrixCommandMetrics. true Boolean camel.hystrix.metrics-rolling-percentile-window-buckets Number of buckets the rolling percentile window is broken into. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 6 Integer camel.hystrix.metrics-rolling-percentile-window-in-milliseconds Duration of percentile rolling window in milliseconds. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10000 Integer camel.hystrix.metrics-rolling-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-statistical-window-in-milliseconds This property sets the duration of the statistical rolling window, in milliseconds. This is how long metrics are kept for the thread pool. The window is divided into buckets and rolls by those increments. 10000 Integer camel.hystrix.queue-size-rejection-threshold Queue size rejection threshold is an artificial max size at which rejections will occur even if maxQueueSize has not been reached. This is done because the maxQueueSize of a BlockingQueue can not be dynamically changed and we want to support dynamically changing the queue size that affects rejections. This is used by HystrixCommand when queuing a thread for execution. 5 Integer camel.hystrix.request-log-enabled Whether HystrixCommand execution and events should be logged to HystrixRequestLog. true Boolean camel.hystrix.thread-pool-key Sets the thread pool key to use. Will by default use the same value as groupKey has been configured to use. CamelHystrix String camel.hystrix.thread-pool-rolling-number-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10 Integer camel.hystrix.thread-pool-rolling-number-statistical-window-in-milliseconds Duration of statistical rolling window in milliseconds. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10000 Integer camel.language.constant.enabled Whether to enable auto configuration of the constant language. This is enabled by default. Boolean camel.language.constant.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.csimple.enabled Whether to enable auto configuration of the csimple language. This is enabled by default. Boolean camel.language.csimple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.exchangeproperty.enabled Whether to enable auto configuration of the exchangeProperty language. This is enabled by default. Boolean camel.language.exchangeproperty.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.file.enabled Whether to enable auto configuration of the file language. This is enabled by default. Boolean camel.language.file.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.header.enabled Whether to enable auto configuration of the header language. This is enabled by default. Boolean camel.language.header.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. 
true Boolean camel.language.ref.enabled Whether to enable auto configuration of the ref language. This is enabled by default. Boolean camel.language.ref.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.simple.enabled Whether to enable auto configuration of the simple language. This is enabled by default. Boolean camel.language.simple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.tokenize.enabled Whether to enable auto configuration of the tokenize language. This is enabled by default. Boolean camel.language.tokenize.group-delimiter Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter. String camel.language.tokenize.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.resilience4j.automatic-transition-from-open-to-half-open-enabled Enables automatic transition from OPEN to HALF_OPEN state once the waitDurationInOpenState has passed. false Boolean camel.resilience4j.circuit-breaker-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreaker instance to lookup and use from the registry. When using this, then any other circuit breaker options are not in use. String camel.resilience4j.config-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreakerConfig instance to lookup and use from the registry. String camel.resilience4j.configurations Define additional configuration definitions. Map camel.resilience4j.enabled Enable the component. true Boolean camel.resilience4j.failure-rate-threshold Configures the failure rate threshold in percentage. If the failure rate is equal or greater than the threshold the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 50 percentage. Float camel.resilience4j.minimum-number-of-calls Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. For example, if minimumNumberOfCalls is 10, then at least 10 calls must be recorded, before the failure rate can be calculated. If only 9 calls have been recorded the CircuitBreaker will not transition to open even if all 9 calls have failed. Default minimumNumberOfCalls is 100. 100 Integer camel.resilience4j.permitted-number-of-calls-in-half-open-state Configures the number of permitted calls when the CircuitBreaker is half open. The size must be greater than 0. Default size is 10. 10 Integer camel.resilience4j.sliding-window-size Configures the size of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. slidingWindowSize configures the size of the sliding window. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. The slidingWindowSize must be greater than 0. The minimumNumberOfCalls must be greater than 0. If the slidingWindowType is COUNT_BASED, the minimumNumberOfCalls cannot be greater than slidingWindowSize . If the slidingWindowType is TIME_BASED, you can pick whatever you want. Default slidingWindowSize is 100. 
100 Integer camel.resilience4j.sliding-window-type Configures the type of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. Default slidingWindowType is COUNT_BASED. COUNT_BASED String camel.resilience4j.slow-call-duration-threshold Configures the duration threshold (seconds) above which calls are considered as slow and increase the slow calls percentage. Default value is 60 seconds. 60 Integer camel.resilience4j.slow-call-rate-threshold Configures a threshold in percentage. The CircuitBreaker considers a call as slow when the call duration is greater than slowCallDurationThreshold Duration. When the percentage of slow calls is equal or greater the threshold, the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 100 percentage which means that all recorded calls must be slower than slowCallDurationThreshold. Float camel.resilience4j.wait-duration-in-open-state Configures the wait duration (in seconds) which specifies how long the CircuitBreaker should stay open, before it switches to half open. Default value is 60 seconds. 60 Integer camel.resilience4j.writable-stack-trace-enabled Enables writable stack traces. When set to false, Exception.getStackTrace returns a zero length array. This may be used to reduce log spam when the circuit breaker is open as the cause of the exceptions is already known (the circuit breaker is short-circuiting calls). true Boolean camel.rest.api-component The name of the Camel component to use as the REST API (such as swagger) If no API Component has been explicit configured, then Camel will lookup if there is a Camel component responsible for servicing and generating the REST API documentation, or if a org.apache.camel.spi.RestApiProcessorFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.api-context-path Sets a leading API context-path the REST API services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. String camel.rest.api-context-route-id Sets the route id to use for the route that services the REST API. The route will by default use an auto assigned route id. String camel.rest.api-host To use an specific hostname for the API documentation (eg swagger) This can be used to override the generated host with this configured hostname. String camel.rest.api-property Allows to configure as many additional properties for the api documentation (swagger). For example set property api.title to my cool stuff. Map camel.rest.api-vendor-extension Whether vendor extension is enabled in the Rest APIs. If enabled then Camel will include additional information as vendor extension (eg keys starting with x-) such as route ids, class names etc. Not all 3rd party API gateways and tools supports vendor-extensions when importing your API docs. false Boolean camel.rest.binding-mode Sets the binding mode to use. The default value is off. 
RestBindingMode camel.rest.client-request-validation Whether to enable validation of the client request to check whether the Content-Type and Accept headers from the client is supported by the Rest-DSL configuration of its consumes/produces settings. This can be turned on, to enable this check. In case of validation error, then HTTP Status codes 415 or 406 is returned. The default value is false. false Boolean camel.rest.component The Camel Rest component to use for the REST transport (consumer), such as netty-http, jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.component-property Allows to configure as many additional properties for the rest component in use. Map camel.rest.consumer-property Allows to configure as many additional properties for the rest consumer in use. Map camel.rest.context-path Sets a leading context-path the REST services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. Or for components such as camel-jetty or camel-netty-http that includes a HTTP server. String camel.rest.cors-headers Allows to configure custom CORS headers. Map camel.rest.data-format-property Allows to configure as many additional properties for the data formats in use. For example set property prettyPrint to true to have json outputted in pretty mode. The properties can be prefixed to denote the option is only for either JSON or XML and for either the IN or the OUT. The prefixes are: json.in. json.out. xml.in. xml.out. For example a key with value xml.out.mustBeJAXBElement is only for the XML data format for the outgoing. A key without a prefix is a common key for all situations. Map camel.rest.enable-cors Whether to enable CORS headers in the HTTP response. The default value is false. false Boolean camel.rest.endpoint-property Allows to configure as many additional properties for the rest endpoint in use. Map camel.rest.host The hostname to use for exposing the REST service. String camel.rest.host-name-resolver If no hostname has been explicit configured, then this resolver is used to compute the hostname the REST service will be using. RestHostNameResolver camel.rest.json-data-format Name of specific json data format to use. By default json-jackson will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.port The port number to use for exposing the REST service. Notice if you use servlet component then the port number configured here does not apply, as the port number in use is the actual port number the servlet component is using. eg if using Apache Tomcat its the tomcat http port, if using Apache Karaf its the HTTP service in Karaf that uses port 8181 by default etc. Though in those situations setting the port number here, allows tooling and JMX to know the port number, so its recommended to set the port number to the number that the servlet engine uses. String camel.rest.producer-api-doc Sets the location of the api document (swagger api) the REST producer will use to validate the REST uri and query parameters are valid accordingly to the api document. 
This requires adding camel-swagger-java to the classpath, and any miss configuration will let Camel fail on startup and report the error(s). The location of the api document is loaded from classpath by default, but you can use file: or http: to refer to resources to load from file or http url. String camel.rest.producer-component Sets the name of the Camel component to use as the REST producer. String camel.rest.scheme The scheme to use for exposing the REST service. Usually http or https is supported. The default value is http. String camel.rest.skip-binding-on-error-code Whether to skip binding on output if there is a custom HTTP error code header. This allows to build custom error messages that do not bind to json / xml etc, as success messages otherwise will do. false Boolean camel.rest.use-x-forward-headers Whether to use X-Forward headers for Host and related setting. The default value is true. true Boolean camel.rest.xml-data-format Name of specific XML data format to use. By default jaxb will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.api-context-id-pattern Deprecated Sets an CamelContext id pattern to only allow Rest APIs from rest services within CamelContext's which name matches the pattern. The pattern name refers to the CamelContext name, to match on the current CamelContext only. For any other value, the pattern uses the rules from PatternHelper#matchPattern(String,String). String camel.rest.api-context-listing Deprecated Sets whether listing of all available CamelContext's with REST services in the JVM is enabled. If enabled it allows to discover these contexts, if false then only the current CamelContext is in use. false Boolean
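To illustrate how the ref language entries in the table above are applied, the following application.properties sketch sets the two camel.language.ref.* options. The property names come from the table; the values shown here are arbitrary examples, not recommended settings.
# Sketch of a Spring Boot application.properties snippet (illustrative values only)
# Keep auto configuration of the ref language enabled (the default)
camel.language.ref.enabled=true
# Disable trimming of leading and trailing whitespace and line breaks
camel.language.ref.trim=false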
[ "<bean id=\"myExpression\" class=\"com.mycompany.MyCustomExpression\"/> <route> <from uri=\"seda:a\"/> <split> <ref>myExpression</ref> <to uri=\"mock:b\"/> </split> </route>", "from(\"seda:a\").split().ref(\"myExpression\").to(\"seda:b\");", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency>" ]
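With camel-core-starter on the classpath, the custom expression from the example can also be registered as a Spring bean rather than through the XML <bean> element, because the Spring application context serves as the Camel registry. The following Java sketch is an illustration under that assumption; it reuses the com.mycompany.MyCustomExpression class and the myExpression id from the example above and assumes the class implements org.apache.camel.Expression.
import org.apache.camel.Expression;
import org.apache.camel.builder.RouteBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.mycompany.MyCustomExpression;

@Configuration
public class MyExpressionConfiguration {

    // Registers the custom expression under the id "myExpression",
    // so that ref("myExpression") can look it up in the registry.
    @Bean(name = "myExpression")
    public Expression myExpression() {
        return new MyCustomExpression();
    }

    // The same split route as the XML example, expressed in the Java DSL.
    @Bean
    public RouteBuilder refSplitRoute() {
        return new RouteBuilder() {
            @Override
            public void configure() {
                from("seda:a").split().ref("myExpression").to("mock:b");
            }
        };
    }
}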
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-ref-language-starter
Chapter 3. Using Ansible to manage IdM user vaults: storing and retrieving secrets
Chapter 3. Using Ansible to manage IdM user vaults: storing and retrieving secrets This chapter describes how to manage user vaults in Identity Management using the Ansible vault module. Specifically, it describes how a user can use Ansible playbooks to perform the following three consecutive actions: Create a user vault in IdM. Store a secret in the vault. Retrieve a secret from the vault. The user can do the storing and the retrieving from two different IdM clients. Prerequisites The Key Recovery Authority (KRA) Certificate System component has been installed on one or more of the servers in your IdM domain. For details, see Installing the Key Recovery Authority in IdM. 3.1. Ensuring the presence of a standard user vault in IdM using Ansible Follow this procedure to use an Ansible playbook to create a vault container with one or more private vaults to securely store sensitive information. In the example used in the procedure below, the idm_user user creates a vault of the standard type named my_vault. The standard vault type ensures that idm_user will not be required to authenticate when accessing the file. idm_user will be able to retrieve the file from any IdM client to which the user is logged in. Prerequisites You have installed the ansible-freeipa package on the Ansible controller, that is, the host on which you execute the steps in the procedure. You know the password of idm_user. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/vault directory: Create an inventory file, for example inventory.file: Open inventory.file and define the IdM server that you want to configure in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com, enter: Make a copy of the ensure-standard-vault-is-present.yml Ansible playbook file. For example: Open the ensure-standard-vault-is-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipavault task section: Set the ipaadmin_principal variable to idm_user. Set the ipaadmin_password variable to the password of idm_user. Set the user variable to idm_user. Set the name variable to my_vault. Set the vault_type variable to standard. This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: 3.2. Archiving a secret in a standard user vault in IdM using Ansible Follow this procedure to use an Ansible playbook to store sensitive information in a personal vault. In the example used, the idm_user user archives a file with sensitive information named password.txt in a vault named my_vault. Prerequisites You have installed the ansible-freeipa package on the Ansible controller, that is, the host on which you execute the steps in the procedure. You know the password of idm_user. idm_user is the owner, or at least a member user, of my_vault. You have access to password.txt, the secret that you want to archive in my_vault. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/vault directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com, enter: Make a copy of the data-archive-in-symmetric-vault.yml Ansible playbook file but replace "symmetric" with "standard". For example: Open the data-archive-in-standard-vault-copy.yml file for editing. Adapt the file by setting the following variables in the ipavault task section: Set the ipaadmin_principal variable to idm_user.
Set the ipaadmin_password variable to the password of idm_user. Set the user variable to idm_user. Set the name variable to my_vault. Set the in variable to the full path to the file with sensitive information. Set the action variable to member. This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: 3.3. Retrieving a secret from a standard user vault in IdM using Ansible Follow this procedure to use an Ansible playbook to retrieve a secret from the user's personal vault. In the example used in the procedure below, the idm_user user retrieves a file with sensitive data from a vault of the standard type named my_vault onto an IdM client named host01. idm_user does not have to authenticate when accessing the file. idm_user can use Ansible to retrieve the file from any IdM client on which Ansible is installed. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package on the Ansible controller. The example assumes that in the ~/MyPlaybooks/ directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password. The target node, that is, the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server, or replica. You know the password of idm_user. idm_user is the owner of my_vault. idm_user has stored a secret in my_vault. Ansible can write into the directory on the IdM host into which you want to retrieve the secret. idm_user can read from the directory on the IdM host into which you want to retrieve the secret. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/vault directory: Open your inventory file and specify, in a clearly defined section, the IdM client onto which you want to retrieve the secret. For example, to instruct Ansible to retrieve the secret onto host01.idm.example.com, enter: Make a copy of the retrive-data-symmetric-vault.yml Ansible playbook file. Replace "symmetric" with "standard". For example: Open the retrieve-data-standard-vault.yml-copy.yml file for editing. Adapt the file by setting the hosts variable to ipahost. Adapt the file by setting the following variables in the ipavault task section: Set the ipaadmin_principal variable to idm_user. Set the ipaadmin_password variable to the password of idm_user. Set the user variable to idm_user. Set the name variable to my_vault. Set the out variable to the full path of the file into which you want to export the secret. Set the state variable to retrieved. This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Verification SSH to host01 as user01: View the file specified by the out variable in the Ansible playbook file: You can now see the exported secret. For more information about using Ansible to manage IdM vaults and user secrets and about playbook variables, see the README-vault.md Markdown file available in the /usr/share/doc/ansible-freeipa/ directory and the sample playbooks available in the /usr/share/doc/ansible-freeipa/playbooks/vault/ directory.
[ "cd /usr/share/doc/ansible-freeipa/playbooks/vault", "touch inventory.file", "[ipaserver] server.idm.example.com", "cp ensure-standard-vault-is-present.yml ensure-standard-vault-is-present-copy.yml", "--- - name: Tests hosts: ipaserver gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - ipavault: ipaadmin_principal: idm_user ipaadmin_password: idm_user_password user: idm_user name: my_vault vault_type: standard", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-standard-vault-is-present-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/vault", "[ipaserver] server.idm.example.com", "cp data-archive-in-symmetric-vault.yml data-archive-in-standard-vault-copy.yml", "--- - name: Tests hosts: ipaserver gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - ipavault: ipaadmin_principal: idm_user ipaadmin_password: idm_user_password user: idm_user name: my_vault in: /usr/share/doc/ansible-freeipa/playbooks/vault/password.txt action: member", "ansible-playbook --vault-password-file=password_file -v -i inventory.file data-archive-in-standard-vault-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/vault", "[ipahost] host01.idm.example.com", "cp retrive-data-symmetric-vault.yml retrieve-data-standard-vault.yml-copy.yml", "--- - name: Tests hosts: ipahost gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - ipavault: ipaadmin_principal: idm_user ipaadmin_password: idm_user_password user: idm_user name: my_vault out: /tmp/password_exported.txt state: retrieved", "ansible-playbook --vault-password-file=password_file -v -i inventory.file retrieve-data-standard-vault.yml-copy.yml", "ssh [email protected]", "vim /tmp/password_exported.txt" ]
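The three playbooks above can also be strung together into a single run. The following is a minimal sketch only, assuming the same inventory and secret.yml vault file as in the chapter and reusing the ipavault parameters shown in the copied playbooks; note that the chapter runs the retrieval step against the IdM client ([ipahost]) rather than the server, so combining everything under ipaserver here is purely illustrative.
---
- name: Ensure the vault exists, archive a secret, then retrieve it (sketch)
  hosts: ipaserver
  gather_facts: false
  vars_files:
    - /home/user_name/MyPlaybooks/secret.yml
  tasks:
    # Ensure the standard vault is present for idm_user
    - ipavault:
        ipaadmin_principal: idm_user
        ipaadmin_password: idm_user_password
        user: idm_user
        name: my_vault
        vault_type: standard
    # Archive the secret file in the vault
    - ipavault:
        ipaadmin_principal: idm_user
        ipaadmin_password: idm_user_password
        user: idm_user
        name: my_vault
        in: /usr/share/doc/ansible-freeipa/playbooks/vault/password.txt
        action: member
    # Retrieve the secret into a local file
    - ipavault:
        ipaadmin_principal: idm_user
        ipaadmin_password: idm_user_password
        user: idm_user
        name: my_vault
        out: /tmp/password_exported.txt
        state: retrieved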
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/working_with_vaults_in_identity_management/using-ansible-to-manage-idm-user-vaults-storing-and-retrieving-secrets_working-with-vaults-in-identity-management
Chapter 17. Delegating Access to Hosts and Services
Chapter 17. Delegating Access to Hosts and Services To manage in the context of this chapter means being able to retrieve a keytab and certificates for another host or service. Every host and service has a managedby entry which lists what hosts or services can manage it. By default, a host can manage itself and all of its services. It is also possible to allow a host to manage other hosts, or services on other hosts, by updating the appropriate delegations or providing a suitable managedby entry. An IdM service can be managed from any IdM host, as long as that host has been granted, or delegated , permission to access the service. Likewise, hosts can be delegated permissions to other hosts within the domain. Figure 17.1. Host and Service Delegation Note If a host is delegated authority to another host through a managedBy entry, it does not mean that the host has also been delegated management for all services on that host. Each delegation has to be performed independently. 17.1. Delegating Service Management A host is delegated control over a service using the service-add-host utility: There are two parts to delegating the service: Specifying the principal using the principal argument. Identifying the hosts with the control using the --hosts option. For example: Once the host is delegated authority, the host principal can be used to manage the service: To create a ticket for this service, create a certificate request on the host with the delegated authority: Use the cert-request utility to create a service entry and load the certification information: For more information on creating certificate requests and using ipa cert-request , see Section 24.1.1, "Requesting New Certificates for a User, Host, or Service" .
[ "ipa service-add-host principal --hosts= hostname", "ipa service-add HTTP/web.example.com ipa service-add-host HTTP/web.example.com --hosts=client1.example.com", "kinit -kt /etc/krb5.keytab host/client1.example.com ipa-getkeytab -s server.example.com -k /tmp/test.keytab -p HTTP/web.example.com Keytab successfully retrieved and stored in: /tmp/test.keytab", "kinit -kt /etc/krb5.keytab host/client1.example.com openssl req -newkey rsa:2048 -subj '/CN=web.example.com/O=EXAMPLE.COM' -keyout /etc/pki/tls/web.key -out /tmp/web.csr -nodes Generating a 2048 bit RSA private key .............................................................+++ ............................................................................................+++ Writing new private key to '/etc/pki/tls/private/web.key'", "ipa cert-request --principal=HTTP/web.example.com web.csr Certificate: MIICETCCAXqgA...[snip] Subject: CN=web.example.com,O=EXAMPLE.COM Issuer: CN=EXAMPLE.COM Certificate Authority Not Before: Tue Feb 08 18:51:51 2011 UTC Not After: Mon Feb 08 18:51:51 2016 UTC Serial number: 1005" ]
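As a quick check after delegation, the managedby information can usually be inspected and extended with the ipa command-line tools. The commands below are a hedged sketch rather than part of this chapter: they assume the ipa service-show and ipa host-add-managedby commands behave as in typical IdM releases, and they reuse the web.example.com and client1.example.com names from the example.
# Sketch: display the HTTP service entry, including which hosts appear in its managedby attribute
ipa service-show HTTP/web.example.com

# Sketch: delegate management of one host to another host
# (host delegation is granted independently of service delegation)
ipa host-add-managedby web.example.com --hosts=client1.example.com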
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/host-delegation
11.2.5. Configuring a VLAN over a Bond
11.2.5. Configuring a VLAN over a Bond This section shows how to configure a VLAN over a bond consisting of two Ethernet links between a server and an Ethernet switch. The switch has a second bond to another server. Only the configuration for the first server is shown, as the other is essentially the same apart from the IP addresses. Warning The use of direct cable connections without network switches is not supported for bonding. The failover mechanisms described here will not work as expected without the presence of network switches. See the Red Hat Knowledgebase article Why is bonding in not supported with direct connection using crossover cables? for more information. Note The active-backup, balance-tlb, and balance-alb modes do not require any specific configuration of the switch. Other bonding modes require configuring the switch to aggregate the links. For example, a Cisco switch requires EtherChannel for Modes 0, 2, and 3, but for Mode 4, LACP and EtherChannel are required. See the documentation supplied with your switch and the bonding.txt file in the kernel-doc package (see Section 31.9, "Additional Resources"). Check the available interfaces on the server: Procedure 11.1. Configuring the Interfaces on the Server Configure a slave interface using eth0: The use of the NAME directive is optional. It is for display by a GUI interface, such as nm-connection-editor and nm-applet. Configure a slave interface using eth1: The use of the NAME directive is optional. It is for display by a GUI interface, such as nm-connection-editor and nm-applet. Configure a channel bonding interface ifcfg-bond0: The use of the NAME directive is optional. It is for display by a GUI interface, such as nm-connection-editor and nm-applet. In this example, MII is used for link monitoring; see Section 31.8.1.1, "Bonding Module Directives", for more information on link monitoring. Check the status of the interfaces on the server: Procedure 11.2. Resolving Conflicts with Interfaces The interfaces configured as slaves should not have IP addresses assigned to them apart from the IPv6 link-local addresses (starting fe80). If you have an unexpected IP address, then there may be another configuration file with ONBOOT set to yes. If this occurs, issue the following command to list all ifcfg files that may be causing a conflict: The above shows the expected result on a new installation. Any file having both the ONBOOT directive as well as the IPADDR or SLAVE directive will be displayed. For example, if the ifcfg-eth1 file was incorrectly configured, the display might look similar to the following: Any other configuration files found should be moved to a different directory for backup, or assigned to a different interface by means of the HWADDR directive. After resolving any conflict, set the interfaces "down" and "up" again, or restart the network service as root: If you are using NetworkManager, you might need to restart it at this point to make it forget the unwanted IP address. As root: Procedure 11.3. Checking the bond on the Server Bring up the bond on the server as root: Check the status of the interfaces on the server: Notice that eth0 and eth1 have master bond0 state UP and bond0 has a status of MASTER,UP. View the bond configuration details: Check the routes on the server: Procedure 11.4. Configuring the VLAN on the Server Important At the time of writing, it is important that the bond has slaves and that they are "up" before bringing up the VLAN interface.
At the time of writing, adding a VLAN interface to a bond without slaves does not work. In Red Hat Enterprise Linux 6, setting the ONPARENT directive to yes is important to ensure that the VLAN interface does not attempt to come up before the bond is up. This is because a VLAN virtual device takes the MAC address of its parent, and when a NIC is enslaved, the bond changes its MAC address to that NIC's MAC address. Note A VLAN slave cannot be configured on a bond with the fail_over_mac=follow option, because the VLAN virtual device cannot change its MAC address to match the parent's new MAC address. In such a case, traffic would still be sent with the now incorrect source MAC address. Some older network interface cards, loopback interfaces, WiMAX cards, and some InfiniBand devices are said to be VLAN challenged, meaning they cannot support VLANs. This is usually because the devices cannot cope with VLAN headers and the larger MTU size associated with VLANs. Create a VLAN interface file bond0.192: Bring up the VLAN interface as root: Enable VLAN tagging on the network switch. Consult the documentation for the switch to see what configuration is required. Check the status of the interfaces on the server: Notice there is now bond0.192@bond0 in the list of interfaces and the status is MASTER,UP. Check the route on the server: Notice there is now a route for the 192.168.10.0/24 network pointing to the VLAN interface bond0.192. Configuring the Second Server Repeat the configuration steps for the second server, using different IP addresses from the same subnets (an illustrative sketch of the second server's interface files follows the command listing below). Test that the bond is up and that the network switch is working as expected: Testing the VLAN To test that the network switch is configured for the VLAN, try to ping the first server's VLAN interface: No packet loss suggests everything is configured correctly and that the VLAN and underlying interfaces are "up". Optional Steps If required, perform further tests by removing and replacing network cables one at a time to verify that failover works as expected. Make use of the ethtool utility to verify which interface is connected to which cable. For example: ethtool --identify ifname integer Where integer is the number of times to flash the LED on the network interface. The bonding module does not support STP; therefore, consider disabling the sending of BPDU packets from the network switch. If the system is not linked to the network except over the connection just configured, consider enabling the switch port to transition directly to sending and receiving. For example, on a Cisco switch, by means of the portfast command.
[ "~]USD ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether 52:54:00:19:28:fe brd ff:ff:ff:ff:ff:ff 3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000 link/ether 52:54:00:f6:63:9a brd ff:ff:ff:ff:ff:ff", "~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0 NAME=bond0-slave0 DEVICE=eth0 TYPE=Ethernet BOOTPROTO=none ONBOOT=yes MASTER=bond0 SLAVE=yes NM_CONTROLLED=no", "~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1 NAME=bond0-slave1 DEVICE=eth1 TYPE=Ethernet BOOTPROTO=none ONBOOT=yes MASTER=bond0 SLAVE=yes NM_CONTROLLED=no", "~]# vi /etc/sysconfig/network-scripts/ifcfg-bond0 NAME=bond0 DEVICE=bond0 BONDING_MASTER=yes TYPE=Bond IPADDR=192.168.100.100 NETMASK=255.255.255.0 ONBOOT=yes BOOTPROTO=none BONDING_OPTS=\"mode=active-backup miimon=100\" NM_CONTROLLED=no", "~]USD ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:19:28:fe brd ff:ff:ff:ff:ff:ff inet6 fe80::5054:ff:fe19:28fe/64 scope link valid_lft forever preferred_lft forever 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:f6:63:9a brd ff:ff:ff:ff:ff:ff inet6 fe80::5054:ff:fef6:639a/64 scope link valid_lft forever preferred_lft forever", "~]USD grep -r \"ONBOOT=yes\" /etc/sysconfig/network-scripts/ | cut -f1 -d\":\" | xargs grep -E \"IPADDR|SLAVE\" /etc/sysconfig/network-scripts/ifcfg-lo:IPADDR=127.0.0.1", "~]# grep -r \"ONBOOT=yes\" /etc/sysconfig/network-scripts/ | cut -f1 -d\":\" | xargs grep -E \"IPADDR|SLAVE\" /etc/sysconfig/network-scripts/ifcfg-lo:IPADDR=127.0.0.1 /etc/sysconfig/network-scripts/ifcfg-eth1:SLAVE=yes /etc/sysconfig/network-scripts/ifcfg-eth1:IPADDR=192.168.55.55", "~]# service network restart Shutting down interface bond0: [ OK ] Shutting down loopback interface: [ OK ] Bringing up loopback interface: [ OK ] Bringing up interface bond0: Determining if ip address 192.168.100.100 is already in use for device bond0 [ OK ]", "~]# service NetworkManager restart", "~]# ifup /etc/sysconfig/network-scripts/ifcfg-bond0 Determining if ip address 192.168.100.100 is already in use for device bond0", "~]USD ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000 link/ether 52:54:00:19:28:fe brd ff:ff:ff:ff:ff:ff 3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000 link/ether 52:54:00:f6:63:9a brd ff:ff:ff:ff:ff:ff 4: bond0: <BROADCAST,MULTICAST, MASTER,UP ,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 52:54:00:19:28:fe brd ff:ff:ff:ff:ff:ff inet 192.168.100.100/24 brd 192.168.100.255 scope global bond0 inet6 fe80::5054:ff:fe19:28fe/64 scope link valid_lft forever preferred_lft forever", "~]USD cat /proc/net/bonding/bond0 Ethernet Channel Bonding Driver: v3.6.0 (September 26, 
2009) Bonding Mode: transmit load balancing Primary Slave: None Currently Active Slave: eth0 MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 0 Down Delay (ms): 0 Slave Interface: eth0 MII Status: up Speed: 100 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 52:54:00:19:28:fe Slave queue ID: 0 Slave Interface: eth1 MII Status: up Speed: 100 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 52:54:00:f6:63:9a Slave queue ID: 0", "~]USD ip route 192.168.100.0/24 dev bond0 proto kernel scope link src 192.168.100.100 169.254.0.0/16 dev bond0 scope link metric 1004", "~]# vi /etc/sysconfig/network-scripts/ifcfg-bond0.192 DEVICE=bond0.192 NAME=bond0.192 BOOTPROTO=none ONPARENT=yes IPADDR=192.168.10.1 NETMASK=255.255.255.0 VLAN=yes NM_CONTROLLED=no", "~]# ifup /etc/sysconfig/network-scripts/ifcfg-bond0.192 Determining if ip address 192.168.10.1 is already in use for device bond0.192", "~]# ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000 link/ether 52:54:00:19:28:fe brd ff:ff:ff:ff:ff:ff 3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000 link/ether 52:54:00:f6:63:9a brd ff:ff:ff:ff:ff:ff 4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 52:54:00:19:28:fe brd ff:ff:ff:ff:ff:ff inet 192.168.100.100/24 brd 192.168.100.255 scope global bond0 inet6 fe80::5054:ff:fe19:28fe/64 scope link valid_lft forever preferred_lft forever 5: bond0.192@bond0 : <BROADCAST,MULTICAST, MASTER,UP ,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 52:54:00:19:28:fe brd ff:ff:ff:ff:ff:ff inet 192.168.10.1/24 brd 192.168.10.255 scope global bond0.192 inet6 fe80::5054:ff:fe19:28fe/64 scope link valid_lft forever preferred_lft forever", "~]USD ip route 192.168.100.0/24 dev bond0 proto kernel scope link src 192.168.100.100 192.168.10.0/24 dev bond0.192 proto kernel scope link src 192.168.10.1 169.254.0.0/16 dev bond0 scope link metric 1004 169.254.0.0/16 dev bond0.192 scope link metric 1005", "~]USD ping -c4 192.168.100.100 PING 192.168.100.100 (192.168.100.100) 56(84) bytes of data. 64 bytes from 192.168.100.100: icmp_seq=1 ttl=64 time=1.35 ms 64 bytes from 192.168.100.100: icmp_seq=2 ttl=64 time=0.214 ms 64 bytes from 192.168.100.100: icmp_seq=3 ttl=64 time=0.383 ms 64 bytes from 192.168.100.100: icmp_seq=4 ttl=64 time=0.396 ms --- 192.168.100.100 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3002ms rtt min/avg/max/mdev = 0.214/0.586/1.353/0.448 ms", "~]# ping -c2 192.168.10.1 PING 192.168.10.1 (192.168.10.1) 56(84) bytes of data. 64 bytes from 192.168.10.1: icmp_seq=1 ttl=64 time=0.781 ms 64 bytes from 192.168.10.1: icmp_seq=2 ttl=64 time=0.977 ms --- 192.168.10.1 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss , time 1001ms rtt min/avg/max/mdev = 0.781/0.879/0.977/0.098 ms" ]
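The following sketch shows what the corresponding bond and VLAN interface files on the second server could look like. It mirrors the first server's files exactly; only the IP addresses differ, and the 192.168.100.101 and 192.168.10.2 addresses are assumptions chosen to fall in the same subnets as above.
# /etc/sysconfig/network-scripts/ifcfg-bond0 on the second server (sketch)
NAME=bond0
DEVICE=bond0
BONDING_MASTER=yes
TYPE=Bond
IPADDR=192.168.100.101
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-bond0.192 on the second server (sketch)
DEVICE=bond0.192
NAME=bond0.192
BOOTPROTO=none
ONPARENT=yes
IPADDR=192.168.10.2
NETMASK=255.255.255.0
VLAN=yes
NM_CONTROLLED=no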
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Configuring_a_VLAN_over_a_Bond
Chapter 7. Uninstalling OpenShift Data Foundation
Chapter 7. Uninstalling OpenShift Data Foundation 7.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_microsoft_azure/uninstalling_openshift_data_foundation
Chapter 7. Monitoring project and application metrics using the Developer perspective
Chapter 7. Monitoring project and application metrics using the Developer perspective The Monitoring view in the Developer perspective provides options to monitor your project or application metrics, such as CPU, memory, and bandwidth usage, and network related information. 7.1. Prerequisites You have created and deployed applications on OpenShift Container Platform . You have logged in to the web console and have switched to the Developer perspective . 7.2. Monitoring your project metrics After you create applications in your project and deploy them, you can use the Developer perspective in the web console to see the metrics for your project. Procedure On the left navigation panel of the Developer perspective, click Monitoring to see the Dashboard , Metrics , Alerts , and Events for your project. Use the Dashboard tab to see graphs depicting the CPU, memory, and bandwidth consumption and network related information, such as the rate of transmitted and received packets and the rate of dropped packets. Figure 7.1. Monitoring dashboard Use the following options to see further details: Select a workload from the All Workloads list to see the filtered metrics for the selected workload. Select an option from the Time Range list to determine the time frame for the data being captured. Select an option from the Refresh Interval list to determine the time period after which the data is refreshed. Hover your cursor over the graphs to see specific details for your pod. Click on any of the graphs displayed to see the details for that particular metric in the Metrics page. Use the Metrics tab to query for the required project metric. Figure 7.2. Monitoring metrics In the Select Query list, select an option to filter the required details for your project. The filtered metrics for all the application pods in your project are displayed in the graph. The pods in your project are also listed below. From the list of pods, clear the colored square boxes to remove the metrics for specific pods to further filter your query result. Click Show PromQL to see the Prometheus query. You can further modify this query with the help of prompts to customize the query and filter the metrics you want to see for that namespace. Use the drop-down list to set a time range for the data being displayed. You can click Reset Zoom to reset it to the default time range. Optionally, in the Select Query list, select Custom Query to create a custom Prometheus query and filter relevant metrics. Use the Alerts tab to see the rules that trigger alerts for the applications in your project, identify the alerts firing in the project, and silence them if required. Figure 7.3. Monitoring alerts Use the Filter list to filter the alerts by their Alert State and Severity . Click on an alert to go to the details page for that alert. In the Alerts Details page, you can click View Metrics to see the metrics for the alert. Use the Notifications toggle adjoining an alert rule to silence all the alerts for that rule, and then select the duration for which the alerts will be silenced from the Silence for list. You must have the permissions to edit alerts to see the Notifications toggle. Use the Options menu adjoining an alert rule to see the details of the alerting rule. Use the Events tab to see the events for your project. Figure 7.4. Monitoring events You can filter the displayed events using the following options: In the Resources list, select a resource to see events for that resource. 
In the All Types list, select a type of event to see events relevant to that type. Search for specific events using the Filter events by names or messages field. 7.3. Monitoring your application metrics After you create applications in your project and deploy them, you can use the Topology view in the Developer perspective to see the alerts and metrics for your application. Critical and warning alerts for your application are indicated on the workload node in the Topology view. Procedure To see the alerts for your workload: In the Topology view, click the workload to see the workload details in the right panel. Click the Monitoring tab to see the critical and warning alerts for the application; graphs for metrics, such as CPU, memory, and bandwidth usage; and all the events for the application. Note Only critical and warning alerts in the Firing state are displayed in the Topology view. Alerts in the Silenced , Pending and Not Firing states are not displayed. Figure 7.5. Monitoring application metrics Click the alert listed in the right panel to see the alert details in the Alert Details page. Click any of the charts to go to the Metrics tab to see the detailed metrics for the application. Click View monitoring dashboard to see the monitoring dashboard for that application. 7.4. Additional resources Monitoring overview
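The console views described in this chapter also have rough command-line counterparts, which can be useful for a quick cross-check of what the dashboards show. The following is a minimal sketch only; it assumes the oc client is installed and logged in to the cluster, and the project name my-project is a placeholder:
$ oc adm top pods -n my-project                            # CPU and memory usage per pod (requires cluster metrics)
$ oc get events -n my-project --sort-by=.lastTimestamp     # recent events for the project, newest last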
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/applications/odc-monitoring-project-and-application-metrics-using-developer-perspective
Chapter 6. Supported usage and versions of Satellite components
Chapter 6. Supported usage and versions of Satellite components Satellite supports the following use cases, architectures, and versions. 6.1. Supported usage of Satellite components Usage of all Red Hat Satellite components is supported within the context of Red Hat Satellite only as described below. Red Hat Enterprise Linux Server Each Red Hat Satellite subscription includes one supported instance of Red Hat Enterprise Linux Server. Reserve this instance solely for the purpose of running Red Hat Satellite. Not supported: Using the operating system included with Satellite to run other daemons, applications, or services within your environment. SELinux Ensure SELinux is in enforcing or permissive mode. Not supported: Installation with disabled SELinux. Foreman You can extend Foreman with plugins packaged with Red Hat Satellite. See Satellite 6 Component Versions in Red Hat Knowledgebase for information about supported Foreman plugins. Not supported: Extending Foreman with plugins in the Red Hat Satellite Optional repository. Red Hat Satellite also includes components, configuration, and functionality to provision and configure operating systems other than Red Hat Enterprise Linux. While these features are included, Red Hat supports their usage only for Red Hat Enterprise Linux. Pulp Interact with Pulp only by using the Satellite web UI, CLI, and API. Not supported: Direct modification or interaction with the Pulp local API or database. This can cause irreparable damage to the Red Hat Satellite databases. Candlepin Interact with Candlepin only by using the Satellite web UI, CLI, and API. Not supported: Direct interaction with Candlepin, its local API, or database. This can cause irreparable damage to the Red Hat Satellite databases. Embedded Tomcat Application Server Interact with the embedded Tomcat application server only by using the Satellite web UI, API, and database. Not supported: Direct interaction with the embedded Tomcat application server local API or database. Puppet When you run the Satellite installation program, you can install and configure Puppet servers as part of Capsule Servers. A Puppet module, running on a Puppet server on your Satellite Server or any Capsule Server, is also supported by Red Hat. Additional resources Red Hat supports many different scripting and other frameworks. See How does Red Hat support scripting frameworks in Red Hat Knowledgebase. 6.2. Supported client architectures for content management You can use the following combinations of major versions of Red Hat Enterprise Linux and hardware architectures for registering and managing hosts with Satellite. The Red Hat Satellite Client 6 repositories are also available for these combinations. Table 6.1. Content management support Platform Architectures Red Hat Enterprise Linux 9 x86_64, ppc64le, s390x, aarch64 Red Hat Enterprise Linux 8 x86_64, ppc64le, s390x Red Hat Enterprise Linux 7 x86_64, ppc64 (BE), ppc64le, aarch64, s390x Red Hat Enterprise Linux 6 x86_64, i386, s390x, ppc64 (BE) 6.3. Supported client architectures for host provisioning You can use the following combinations of major versions of Red Hat Enterprise Linux and hardware architectures for host provisioning with Satellite. Table 6.2. Host provisioning support Platform Architectures Red Hat Enterprise Linux 9 x86_64 Red Hat Enterprise Linux 8 x86_64 Red Hat Enterprise Linux 7 x86_64 Red Hat Enterprise Linux 6 x86_64, i386 6.4. 
Supported client architectures for configuration management You can use the following combinations of major versions of Red Hat Enterprise Linux and hardware architectures for configuration management with Satellite. Table 6.3. Configuration management support Platform Architectures Red Hat Enterprise Linux 9 x86_64 Red Hat Enterprise Linux 8 x86_64, aarch64 Red Hat Enterprise Linux 7 x86_64 Red Hat Enterprise Linux 6 x86_64, i386 6.5. Additional resources See Red Hat Satellite Product Life Cycle for information about support periods for Red Hat Satellite releases.
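For example, to make the Red Hat Satellite Client 6 repository mentioned in Section 6.2 available on a registered Red Hat Enterprise Linux 9 host, you can enable it with subscription-manager. This is a minimal sketch only; the repository label below follows the usual naming pattern but is an assumption, so confirm the exact label for your operating system version and architecture first:
# subscription-manager repos --list | grep satellite-client          # confirm the exact repository label
# subscription-manager repos --enable=satellite-client-6-for-rhel-9-x86_64-rpms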
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/overview_concepts_and_deployment_considerations/supported-usage-and-versions-of-project-components_planning
Appendix B. Ceph network configuration options
Appendix B. Ceph network configuration options These are the common network configuration options for Ceph. public_network Description The IP address and netmask of the public (front-side) network (for example, 192.168.0.0/24 ). Set in [global] . You can specify comma-delimited subnets. Type <ip-address>/<netmask> [, <ip-address>/<netmask>] Required No Default N/A public_addr Description The IP address for the public (front-side) network. Set for each daemon. Type IP Address Required No Default N/A cluster_network Description The IP address and netmask of the cluster network (for example, 10.0.0.0/24 ). Set in [global] . You can specify comma-delimited subnets. Type <ip-address>/<netmask> [, <ip-address>/<netmask>] Required No Default N/A cluster_addr Description The IP address for the cluster network. Set for each daemon. Type Address Required No Default N/A ms_type Description The messenger type for the network transport layer. Red Hat supports the simple and the async messenger type using posix semantics. Type String. Required No. Default async+posix ms_public_type Description The messenger type for the network transport layer of the public network. It operates identically to ms_type , but is applicable only to the public or front-side network. This setting enables Ceph to use a different messenger type for the public or front-side and cluster or back-side networks. Type String. Required No. Default None. ms_cluster_type Description The messenger type for the network transport layer of the cluster network. It operates identically to ms_type , but is applicable only to the cluster or back-side network. This setting enables Ceph to use a different messenger type for the public or front-side and cluster or back-side networks. Type String. Required No. Default None. Host options You must declare at least one Ceph Monitor in the Ceph configuration file, with a mon addr setting under each declared monitor. Ceph expects a host setting under each declared monitor, metadata server and OSD in the Ceph configuration file. Important Do not use localhost . Use the short name of the node, not the fully-qualified domain name (FQDN). Do not specify any value for host when using a third party deployment system that retrieves the node name for you. mon_addr Description A list of <hostname>:<port> entries that clients can use to connect to a Ceph monitor. If not set, Ceph searches [mon.*] sections. Type String Required No Default N/A host Description The host name. Use this setting for specific daemon instances (for example, [osd.0] ). Type String Required Yes, for daemon instances. Default localhost TCP options Ceph disables TCP buffering by default. ms_tcp_nodelay Description Ceph enables ms_tcp_nodelay so that each request is sent immediately (no buffering). Disabling Nagle's algorithm increases network traffic, which can introduce congestion. If you experience large numbers of small packets, you may try disabling ms_tcp_nodelay , but be aware that disabling it will generally increase latency. Type Boolean Required No Default true ms_tcp_rcvbuf Description The size of the socket buffer on the receiving end of a network connection. Disabled by default. Type 32-bit Integer Required No Default 0 Bind options The bind options configure the default port ranges for the Ceph OSD daemons. The default range is 6800:7100 . You can also enable Ceph daemons to bind to IPv6 addresses. Important Verify that the firewall configuration allows you to use the configured port range. 
ms_bind_port_min Description The minimum port number to which an OSD daemon will bind. Type 32-bit Integer Default 6800 Required No ms_bind_ipv6 Description Enables Ceph daemons to bind to IPv6 addresses. Type Boolean Default false Required No Asynchronous messenger options These Ceph messenger options configure the behavior of AsyncMessenger . ms_async_op_threads Description Initial number of worker threads used by each AsyncMessenger instance. This configuration setting SHOULD equal the number of replicas or erasure code chunks, but it may be set lower if the CPU core count is low or the number of OSDs on a single server is high. Type 64-bit Unsigned Integer Required No Default 3 Connection mode configuration options For most connections, there are options that control the modes that are used for encryption and compression. ms_cluster_mode Description Connection mode used for intra-cluster communication between Ceph daemons. If multiple modes are listed, the modes listed first are preferred. Type String Default crc secure ms_service_mode Description A list of permitted modes for clients to use when connecting to the storage cluster. Type String Default crc secure ms_client_mode Description A list of connection modes, in order of preference, for clients to use when interacting with a Ceph cluster. Type String Default crc secure ms_mon_cluster_mode Description The connection mode to use between Ceph monitors. Type String Default secure crc ms_mon_service_mode Description A list of permitted modes for clients or other Ceph daemons to use when connecting to monitors. Type String Default secure crc ms_mon_client_mode Description A list of connection modes, in order of preference, for clients or non-monitor daemons to use when connecting to Ceph monitors. Type String Default secure crc Compression mode configuration options With the messenger v2 protocol, you can use the configuration options for the compression modes. ms_compress_secure Description Combining encryption with compression reduces the level of security of messages between peers. If both encryption and compression are enabled, the compression setting is ignored and the message is not compressed. Override this setting with option. Send messages directly from the thread that generated them instead of queuing and sending from the AsyncMessenger thread. This option is known to decrease performance on systems with a lot of CPU cores, so it is disabled by default. Type Boolean Default false ms_osd_compress_mode Description Compression policy to use in messenger for communication with Ceph OSDs. Type String Default none Valid choices none or force ms_osd_compress_min_size Description Minimal message size eligible for on-wire compression. Type Integer Default 1 Ki ms_osd_compression_algorithm Description Compression algorithm for connections with OSDs, in order of preference. Type String Default snappy Valid choices snappy , zstd , zlib , or lz4
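As a quick illustration of how the address options above fit together, the following is a minimal sketch of a [global] section that places public and cluster traffic on separate subnets; the subnets shown are placeholders for your own networks:
[global]
public_network = 192.168.0.0/24
cluster_network = 10.0.0.0/24
On clusters where configuration is managed centrally, the same options can typically also be applied with ceph config set global public_network 192.168.0.0/24 and ceph config set global cluster_network 10.0.0.0/24 .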
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/configuration_guide/ceph-network-configuration-options_conf
Chapter 7. Teiid Designer Examples
Chapter 7. Teiid Designer Examples 7.1. Teiid Designer Examples We are going to dive right into a couple examples of common tasks in this section. These examples will give you a quick introduction to the capabilities that are built into Teiid Designer to assist you with common design tasks. Specifically, we will introduce the following concepts: Guides The Guides View is a good starting point for many common modeling tasks. The view includes categorized Modeling Actions and also links to Cheat Sheets for common tasks. The categorized Modeling Actions simply group together all of the actions that you'll need to accomplish a task. You can launch the actions directly from the Guides view, rather than hunting through the various Teiid Designer menus. Cheat Sheets The Cheat Sheets go beyond even the categorized Action Sets, and walk you step-by-step through some common tasks. At each step, the data entered in the step is carried through the process when possible. After seeing the Guides and Cheat Sheets in action, subsequent chapters will offer detailed explanations of the various concepts and actions.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/chap-teiid_designer_examples
5.210. nfs-utils
5.210. nfs-utils 5.210.1. RHBA-2012:0964 - nfs-utils bug fix update Updated nfs-utils packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. The nfs-utils packages provide a daemon for the kernel Network File System (NFS) server, and related tools such as mount.nfs, umount.nfs, and showmount. Bug Fixes BZ# 737990 Prior to this update, the nfs(5) man page contained incorrect information on the Transmission Control Protocol (TCP) retries. This update modifies the man page and describes more accurately how the TCP timeout code works. BZ# 740472 Prior to this update, the "nfs_rewrite_pmap_mount_options()" function did not interrupt RPC timeouts as expected. As a consequence, mounts that used the "-o bg" and "vers=" options did not retry but failed when the server was down. This update modifies the underlying code to allow mounts to retry when the server is down. BZ# 751089 Prior to this update, the rpc.idmapd daemon handled the "SIGUSR*" signal incorrectly. As a consequence, idmapd could, under certain circumstances, close without an error message. This update modifies the underlying code to process the "SIGUSR*" signal as expected. BZ# 758000 Prior to this update, mount points could not be unmounted when the path contained multiple slash characters. This update modifies the "umount" paths so that the mount point can now be unmounted as expected. BZ# 772543 Prior to this update, nfs-utils used the wrong nfs lock file. As a consequence, the "status nfsd" command did not return the correct status. This update modifies the startup script to use the "/var/lock/subsys/nfsd" file as the nfs lock file. Now the correct nfsd status is returned. BZ# 772619 Prior to this update, NFS ID Mapping could map group names containing Unicode characters with umlaut diacritics (ö, ä, ü) to the group "nobody". This update deactivates the Unicode character check. BZ# 787970 Prior to this update, the name mapping daemon idmapd failed to decode group names that contained spaces. This update modifies the character size check for decoding the octal-encoded value. Now, group names with spaces are decoded as expected. BZ# 800335 Prior to this update, concurrent executions of the "exportfs" command could, under certain circumstances, cause conflicts when updating the etab file. As a consequence, not all exports were successful. This update modifies the exportfs script to allow for concurrent executions. BZ# 801085 Prior to this update, symlinks mounted with NFS could not be unmounted. This update modifies the underlying code so that symlinks are now exported as expected. BZ# 803946 Prior to this update, the nfsd daemon was started before the mountd daemon and nfsd could not validate file handles with mountd. The NFS client received an "ESTALE" error and client applications failed if an existing client sent requests to the NFS server when nfsd was started. This update changes the startup order of the daemons so that nfsd can use the mountd daemon. BZ# 816149 Prior to this update, the preinstall scriptlet could fail to change the default group ID for nfsnobody. This update modifies the preinstall scriptlet and the default group ID is changed after the nfs-utils upgrade as expected. BZ# 816162 Prior to this update, mounting a subdirectory of non-user accounts could, under certain circumstances, fail. This update modifies the underlying code to ensure that the parent directories of the pseudo exports also have root squashing disabled.
Now, subdirectories of non-user accounts can be successfully mounted. All users of nfs-utils are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/nfs-utils
Chapter 25. Kernel
Chapter 25. Kernel kernel component, BZ#1019091 The following RAID controller cards are no longer supported. However, the aacraid driver still detects them. Thus, they are marked as not supported in the dmesg output. PERC 2/Si (Iguana/PERC2Si) PERC 3/Di (Opal/PERC3Di) PERC 3/Si (SlimFast/PERC3Si) PERC 3/Di (Iguana FlipChip/PERC3DiF) PERC 3/Di (Viper/PERC3DiV) PERC 3/Di (Lexus/PERC3DiL) PERC 3/Di (Jaguar/PERC3DiJ) PERC 3/Di (Dagger/PERC3DiD) PERC 3/Di (Boxster/PERC3DiB) Adaptec 2120S (Crusader) Adaptec 2200S (Vulcan) Adaptec 2200S (Vulcan-2m) Legend S220 (Legend Crusader) Legend S230 (Legend Vulcan) Adaptec 3230S (Harrier) Adaptec 3240S (Tornado) ASR-2020ZCR SCSI PCI-X ZCR (Skyhawk) ASR-2025ZCR SCSI SO-DIMM PCI-X ZCR (Terminator) ASR-2230S + ASR-2230SLP PCI-X (Lancer) ASR-2130S (Lancer) AAR-2820SA (Intruder) AAR-2620SA (Intruder) AAR-2420SA (Intruder) ICP9024RO (Lancer) ICP9014RO (Lancer) ICP9047MA (Lancer) ICP9087MA (Lancer) ICP5445AU (Hurricane44) ICP9085LI (Marauder-X) ICP5085BR (Marauder-E) ICP9067MA (Intruder-6) Themisto Jupiter Platform Callisto Jupiter Platform ASR-2020SA SATA PCI-X ZCR (Skyhawk) ASR-2025SA SATA SO-DIMM PCI-X ZCR (Terminator) AAR-2410SA PCI SATA 4ch (Jaguar II) CERC SATA RAID 2 PCI SATA 6ch (DellCorsair) AAR-2810SA PCI SATA 8ch (Corsair-8) AAR-21610SA PCI SATA 16ch (Corsair-16) ESD SO-DIMM PCI-X SATA ZCR (Prowler) AAR-2610SA PCI SATA 6ch ASR-2240S (SabreExpress) ASR-4005 ASR-4800SAS (Marauder-X) ASR-4805SAS (Marauder-E) ASR-3800 (Hurricane44) Adaptec 5400S (Mustang) Dell PERC2/QC HP NetRAID-4M The following cards detected by aacraid are also no longer supported but they are not identified as not supported in the dmesg output: IBM 8i (AvonPark) IBM 8i (AvonPark Lite) IBM 8k/8k-l8 (Aurora) IBM 8k/8k-l4 (Aurora Lite) Warning Note that the Kdump mechanism might not work properly on the aforementioned RAID controllers. kernel component, BZ#1061210 When the hpsa_allow_any option is used, the hpsa driver allows the use of PCI IDs that are not listed in the driver's pci-id table. Thus, cards detected when this option is used, are not supported in Red Hat Enterprise Linux 7. kernel component, BZ#975791 The following cciss controllers are no longer supported: Smart Array 5300 Smart Array 5i Smart Array 532 Smart Array 5312 Smart Array 641 Smart Array 642 Smart Array 6400 Smart Array 6400 EM Smart Array 6i Smart Array P600 Smart Array P800 Smart Array P400 Smart Array P400i Smart Array E200i Smart Array E200 Smart Array E500 Smart Array P700M kernel component, BZ# 1055089 The systemd service does not spawn the getty tool on the /dev/hvc0/ virtio console if the virtio console driver is not found before loading kernel modules at system startup. As a consequence, a TTY terminal does not start automatically after the system boot when the system is running as a KVM guest. To work around this problem, start getty on /dev/hvc0/ after the system boot. The ISA serial device, which is used more commonly, works as expected. kernel component, BZ#1060565 A previously applied patch is causing a memory leak when creating symbolic links over NFS. Consequently, if creating a very large number of symbolic links, on a scale of hundreds of thousands, the system may report the out of memory status. kernel component, BZ#1097468 The Linux kernel Non-Uniform Memory Access (NUMA) balancing does not always work correctly in Red Hat Enterprise Linux 7. 
As a consequence, when the numa_balancing parameter is set, some of the memory can move to an arbitrary non-destination node before moving to the constrained nodes, and the memory on the destination node also decreases under certain circumstances. There is currently no known workaround available. kernel component, BZ#915855 The QLogic 1G iSCSI Adapter present in the system can cause a call trace error when the qla4xx driver is sharing the interrupt line with the USB sub-system. This error has no impact on the system functionality. The error can be found in the kernel log messages located in the /var/log/messages file. To prevent the call trace from logging into the kernel log messages, add the nousb kernel parameter when the system is booting. system-config-kdump component, BZ#1077470 In the Kernel Dump Configuration window, selecting the Raw device option in the Target settings tab does not work. To work around this problem, edit the kdump.conf file manually. kernel component, BZ#1087796 An attempt to remove the bnx2x module while the bnx2fc driver is processing a corrupted frame causes a kernel panic. To work around this problem, shut down any active FCoE interfaces before executing the modprobe -r bnx2x command. kexec-tools component, BZ#1089788 Due to a wrong buffer size calculation in the makedumpfile utility, an OOM error could occur with a high probability. As a consequence, the vmcore file cannot be captured under certain circumstances. No workaround is currently available.
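As a brief illustration of the nousb workaround for the qla4xx call trace issue described earlier in this chapter, one way to add the parameter persistently on a grub2-based Red Hat Enterprise Linux 7 system is with grubby; this is a sketch under that assumption, not the only supported method:
# grubby --update-kernel=ALL --args="nousb"    # append nousb to the kernel command line of all installed kernels
# reboot
The parameter can also be added manually to GRUB_CMDLINE_LINUX in /etc/default/grub, followed by rebuilding the GRUB configuration.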
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/known-issues-kernel
Chapter 6. Setting up to Develop Applications Using Java
Chapter 6. Setting up to Develop Applications Using Java Red Hat Enterprise Linux supports the development of applications in Java. During the system installation, select the Java Platform Add-on to install OpenJDK as the default Java version. Alternatively, follow the instructions in the Installation Guide for Red Hat CodeReady Studio, Chapter 2.2, Installing OpenJDK 1.8.0 on RHEL to install OpenJDK separately. For an integrated graphical development environment, install the Eclipse-based Red Hat CodeReady Studio , which offers extensive support for Java development. Follow the instructions in the Installation Guide for Red Hat CodeReady Studio .
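As a quick sanity check after installation, you can compile and run a trivial program from a shell. This is a minimal sketch; it assumes OpenJDK is installed and that the java-1.8.0-openjdk-devel package provides the javac compiler, and the Hello class name is a placeholder:
$ java -version
$ cat > Hello.java << 'EOF'    # write a minimal test program
public class Hello {
    public static void main(String[] args) {
        System.out.println("OpenJDK is ready");
    }
}
EOF
$ javac Hello.java && java Hello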
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/developer_guide/setting-up_setup-developing-java
function::ipmib_filter_key
function::ipmib_filter_key Name function::ipmib_filter_key - Default filter function for ipmib.* probes Synopsis Arguments skb pointer to the struct sk_buff op value to be counted if skb passes the filter SourceIsLocal 1 indicates a local operation and 0 indicates a non-local operation Description This function is the default filter function. The user can replace this function with their own. The user-supplied filter function returns an index key based on the values in skb . A return value of 0 means this particular skb should not be counted.
[ "ipmib_filter_key:long(skb:long,op:long,SourceIsLocal:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ipmib-filter-key
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.12/making-open-source-more-inclusive
Chapter 4. Additional toolsets for development
Chapter 4. Additional toolsets for development 4.1. Using GCC Toolset 4.1.1. What is GCC Toolset Red Hat Enterprise Linux 8 introduces GCC Toolset, an Application Stream containing more up-to-date versions of development and performance analysis tools. GCC Toolset is similar to Red Hat Developer Toolset for RHEL 7. GCC Toolset is available as an Application Stream in the form of a software collection in the AppStream repository. GCC Toolset is fully supported under Red Hat Enterprise Linux Subscription Level Agreements, is functionally complete, and is intended for production use. Applications and libraries provided by GCC Toolset do not replace the Red Hat Enterprise Linux system versions, do not override them, and do not automatically become default or preferred choices. Using a framework called software collections, an additional set of developer tools is installed into the /opt/ directory and is explicitly enabled by the user on demand using the scl utility. Unless noted otherwise for specific tools or features, GCC Toolset is available for all architectures supported by Red Hat Enterprise Linux. For information about the length of support, see Red Hat Enterprise Linux Application Streams Life Cycle . 4.1.2. Installing GCC Toolset Installing GCC Toolset on a system installs the main tools and all necessary dependencies. Note that some parts of the toolset are not installed by default and must be installed separately. Procedure To install GCC Toolset version N : 4.1.3. Installing individual packages from GCC Toolset To install only certain tools from GCC Toolset instead of the whole toolset, list the available packages and install the selected ones with the yum package management tool. This procedure is useful also for packages that are not installed by default with the toolset. Procedure List the packages available in GCC Toolset version N : To install any of these packages: Replace package_name with a space-separated list of packages to install. For example, to install the gcc-toolset-13-annobin-annocheck and gcc-toolset-13-binutils-devel packages: 4.1.4. Uninstalling GCC Toolset To remove GCC Toolset from your system, uninstall it using the yum package management tool. Procedure To uninstall GCC Toolset version N : 4.1.5. Running a tool from GCC Toolset To run a tool from GCC Toolset, use the scl utility. Procedure To run a tool from GCC Toolset version N : 4.1.6. Running a shell session with GCC Toolset GCC Toolset allows running a shell session where the GCC Toolset tool versions are used instead of system versions of these tools, without explicitly using the scl command. This is useful when you need to interactively start the tools many times, such as when setting up or testing a development setup. Procedure To run a shell session where tool versions from GCC Toolset version N override system versions of these tools: 4.1.7. Additional resources Red Hat Developer Toolset User Guide 4.2. GCC Toolset 9 Learn about information specific to GCC Toolset version 9 and the tools contained in this version. 4.2.1. Tools and versions provided by GCC Toolset 9 GCC Toolset 9 provides the following tools and versions: Table 4.1. Tool versions in GCC Toolset 9 Name Version Description GCC 9.2.1 A portable compiler suite with support for C, C++, and Fortran. GDB 8.3 A command-line debugger for programs written in C, C++, and Fortran. 
Valgrind 3.15.0 An instrumentation framework and a number of tools to profile applications in order to detect memory errors, identify memory management problems, and report any use of improper arguments in system calls. SystemTap 4.1 A tracing and probing tool to monitor the activities of the entire system without the need to instrument, recompile, install, and reboot. Dyninst 10.1.0 A library for instrumenting and working with user-space executables during their execution. binutils 2.32 A collection of binary tools and other utilities to inspect and manipulate object files and binaries. elfutils 0.176 A collection of binary tools and other utilities to inspect and manipulate ELF files. dwz 0.12 A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. make 4.2.1 A dependency-tracking build automation tool. strace 5.1 A debugging tool to monitor system calls that a program uses and signals it receives. ltrace 0.7.91 A debugging tool to display calls to dynamic libraries that a program makes. It can also monitor system calls executed by programs. annobin 9.08 A build security checking tool. 4.2.2. C++ compatibility in GCC Toolset 9 Important The compatibility information presented here apply only to the GCC from GCC Toolset 9. The GCC compiler in GCC Toolset can use the following C++ standards: C++14 This is the default language standard setting for GCC Toolset 9, with GNU extensions, equivalent to explicitly using option -std=gnu++14 . Using the C++14 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 6 or later. C++11 This language standard is available in GCC Toolset 9. Using the C++11 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 5 or later. C++98 This language standard is available in GCC Toolset 9. Binaries, shared libraries and objects built using this standard can be freely mixed regardless of being built with GCC from GCC Toolset, Red Hat Developer Toolset, and RHEL 5, 6, 7 and 8. C++17, C++2a These language standards are available in GCC Toolset 9 only as an experimental, unstable, and unsupported capability. Additionally, compatibility of objects, binary files, and libraries built using these standards cannot be guaranteed. All of the language standards are available in both the standard compliant variant or with GNU extensions. When mixing objects built with GCC Toolset with those built with the RHEL toolchain (particularly .o or .a files), GCC Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by GCC Toolset are resolved at link time. 4.2.3. Specifics of GCC in GCC Toolset 9 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. 
Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC . 4.2.4. Specifics of binutils in GCC Toolset 9 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils . 4.3. GCC Toolset 10 Learn about information specific to GCC Toolset version 10 and the tools contained in this version. 4.3.1. Tools and versions provided by GCC Toolset 10 GCC Toolset 10 provides the following tools and versions: Table 4.2. Tool versions in GCC Toolset 10 Name Version Description GCC 10.2.1 A portable compiler suite with support for C, C++, and Fortran. GDB 9.2 A command-line debugger for programs written in C, C++, and Fortran. Valgrind 3.16.0 An instrumentation framework and a number of tools to profile applications in order to detect memory errors, identify memory management problems, and report any use of improper arguments in system calls. SystemTap 4.4 A tracing and probing tool to monitor the activities of the entire system without the need to instrument, recompile, install, and reboot. Dyninst 10.2.1 A library for instrumenting and working with user-space executables during their execution. 
binutils 2.35 A collection of binary tools and other utilities to inspect and manipulate object files and binaries. elfutils 0.182 A collection of binary tools and other utilities to inspect and manipulate ELF files. dwz 0.12 A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. make 4.2.1 A dependency-tracking build automation tool. strace 5.7 A debugging tool to monitor system calls that a program uses and signals it receives. ltrace 0.7.91 A debugging tool to display calls to dynamic libraries that a program makes. It can also monitor system calls executed by programs. annobin 9.29 A build security checking tool. 4.3.2. C++ compatibility in GCC Toolset 10 Important The compatibility information presented here apply only to the GCC from GCC Toolset 10. The GCC compiler in GCC Toolset can use the following C++ standards: C++14 This is the default language standard setting for GCC Toolset 10, with GNU extensions, equivalent to explicitly using option -std=gnu++14 . Using the C++14 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 6 or later. C++11 This language standard is available in GCC Toolset 10. Using the C++11 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 5 or later. C++98 This language standard is available in GCC Toolset 10. Binaries, shared libraries and objects built using this standard can be freely mixed regardless of being built with GCC from GCC Toolset, Red Hat Developer Toolset, and RHEL 5, 6, 7 and 8. C++17 This language standard is available in GCC Toolset 10. C++20 This language standard is available in GCC Toolset 10 only as an experimental, unstable, and unsupported capability. Additionally, compatibility of objects, binary files, and libraries built using this standard cannot be guaranteed. All of the language standards are available in both the standard compliant variant or with GNU extensions. When mixing objects built with GCC Toolset with those built with the RHEL toolchain (particularly .o or .a files), GCC Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by GCC Toolset are resolved at link time. 4.3.3. Specifics of GCC in GCC Toolset 10 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. 
As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC . 4.3.4. Specifics of binutils in GCC Toolset 10 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils . 4.4. GCC Toolset 11 Learn about information specific to GCC Toolset version 11 and the tools contained in this version. 4.4.1. Tools and versions provided by GCC Toolset 11 GCC Toolset 11 provides the following tools and versions: Table 4.3. Tool versions in GCC Toolset 11 Name Version Description GCC 11.2.1 A portable compiler suite with support for C, C++, and Fortran. GDB 10.2 A command-line debugger for programs written in C, C++, and Fortran. Valgrind 3.17.0 An instrumentation framework and a number of tools to profile applications in order to detect memory errors, identify memory management problems, and report any use of improper arguments in system calls. SystemTap 4.5 A tracing and probing tool to monitor the activities of the entire system without the need to instrument, recompile, install, and reboot. Dyninst 11.0.0 A library for instrumenting and working with user-space executables during their execution. binutils 2.36.1 A collection of binary tools and other utilities to inspect and manipulate object files and binaries. elfutils 0.185 A collection of binary tools and other utilities to inspect and manipulate ELF files. dwz 0.14 A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. make 4.3 A dependency-tracking build automation tool. 
strace 5.13 A debugging tool to monitor system calls that a program uses and signals it receives. ltrace 0.7.91 A debugging tool to display calls to dynamic libraries that a program makes. It can also monitor system calls executed by programs. annobin 10.23 A build security checking tool. 4.4.2. C++ compatibility in GCC Toolset 11 Important The compatibility information presented here apply only to the GCC from GCC Toolset 11. The GCC compiler in GCC Toolset can use the following C++ standards: C++14 This language standard is available in GCC Toolset 11. Using the C++14 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 6 or later. C++11 This language standard is available in GCC Toolset 11. Using the C++11 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 5 or later. C++98 This language standard is available in GCC Toolset 11. Binaries, shared libraries and objects built using this standard can be freely mixed regardless of being built with GCC from GCC Toolset, Red Hat Developer Toolset, and RHEL 5, 6, 7 and 8. C++17 This language standard is available in GCC Toolset 11. This is the default language standard setting for GCC Toolset 11, with GNU extensions, equivalent to explicitly using option -std=gnu++17 . Using the C++17 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 10 or later. C++20 and C++23 This language standard is available in GCC Toolset 11 only as an experimental, unstable, and unsupported capability. Additionally, compatibility of objects, binary files, and libraries built using this standard cannot be guaranteed. To enable C++20 support, add the command-line option -std=c++20 to your g++ command line. To enable C++23 support, add the command-line option -std=c++2b to your g++ command line. All of the language standards are available in both the standard compliant variant or with GNU extensions. When mixing objects built with GCC Toolset with those built with the RHEL toolchain (particularly .o or .a files), GCC Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by GCC Toolset are resolved at link time. 4.4.3. Specifics of GCC in GCC Toolset 11 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. 
As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC . 4.4.4. Specifics of binutils in GCC Toolset 11 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils . 4.5. GCC Toolset 12 Learn about information specific to GCC Toolset version 12 and the tools contained in this version. 4.5.1. Tools and versions provided by GCC Toolset 12 GCC Toolset 12 provides the following tools and versions: Table 4.4. Tool versions in GCC Toolset 12 Name Version Description GCC 12.2.1 A portable compiler suite with support for C, C++, and Fortran. GDB 11.2 A command-line debugger for programs written in C, C++, and Fortran. binutils 2.38 A collection of binary tools and other utilities to inspect and manipulate object files and binaries. dwz 0.14 A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. annobin 11.08 A build security checking tool. 4.5.2. C++ compatibility in GCC Toolset 12 Important The compatibility information presented here apply only to the GCC from GCC Toolset 12. The GCC compiler in GCC Toolset can use the following C++ standards: C++14 This language standard is available in GCC Toolset 12. Using the C++14 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 6 or later. C++11 This language standard is available in GCC Toolset 12. 
Using the C++11 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 5 or later. C++98 This language standard is available in GCC Toolset 12. Binaries, shared libraries and objects built using this standard can be freely mixed regardless of being built with GCC from GCC Toolset, Red Hat Developer Toolset, and RHEL 5, 6, 7 and 8. C++17 This language standard is available in GCC Toolset 12. This is the default language standard setting for GCC Toolset 12, with GNU extensions, equivalent to explicitly using option -std=gnu++17 . Using the C++17 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 10 or later. C++20 and C++23 This language standard is available in GCC Toolset 12 only as an experimental, unstable, and unsupported capability. Additionally, compatibility of objects, binary files, and libraries built using this standard cannot be guaranteed. To enable C++20 support, add the command-line option -std=c++20 to your g++ command line. To enable C++23 support, add the command-line option -std=c++23 to your g++ command line. All of the language standards are available in both the standard compliant variant or with GNU extensions. When mixing objects built with GCC Toolset with those built with the RHEL toolchain (particularly .o or .a files), GCC Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by GCC Toolset are resolved at link time. 4.5.3. Specifics of GCC in GCC Toolset 12 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC . 4.5.4. Specifics of binutils in GCC Toolset 12 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. 
If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils . 4.5.5. Specifics of annobin in GCC Toolset 12 Under some circumstances, due to a synchronization issue between annobin and gcc in GCC Toolset 12, your compilation can fail with an error message that looks similar to the following: To work around the problem, create a symbolic link in the plugin directory from the annobin.so file to the gcc-annobin.so file: Replace architecture with the architecture you use in your system: aarch64 i686 ppc64le s390x x86_64 4.6. GCC Toolset 13 Learn about information specific to GCC Toolset version 13 and the tools contained in this version. 4.6.1. Tools and versions provided by GCC Toolset 13 GCC Toolset 13 provides the following tools and versions: Table 4.5. Tool versions in GCC Toolset 13 Name Version Description GCC 13.2.1 A portable compiler suite with support for C, C++, and Fortran. GDB 12.1 A command-line debugger for programs written in C, C++, and Fortran. binutils 2.40 A collection of binary tools and other utilities to inspect and manipulate object files and binaries. dwz 0.14 A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. annobin 12.32 A build security checking tool. 4.6.2. C++ compatibility in GCC Toolset 13 Important The compatibility information presented here apply only to the GCC from GCC Toolset 13. The GCC compiler in GCC Toolset can use the following C++ standards: C++14 This language standard is available in GCC Toolset 13. Using the C++14 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 6 or later. C++11 This language standard is available in GCC Toolset 13. Using the C++11 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 5 or later. C++98 This language standard is available in GCC Toolset 13. Binaries, shared libraries and objects built using this standard can be freely mixed regardless of being built with GCC from GCC Toolset, Red Hat Developer Toolset, and RHEL 5, 6, 7 and 8. C++17 This language standard is available in GCC Toolset 13. This is the default language standard setting for GCC Toolset 13, with GNU extensions, equivalent to explicitly using option -std=gnu++17 . 
Using the C++17 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 10 or later. C++20 and C++23 These language standards are available in GCC Toolset 13 only as an experimental, unstable, and unsupported capability. Additionally, compatibility of objects, binary files, and libraries built using this standard cannot be guaranteed. To enable the C++20 standard, add the command-line option -std=c++20 to your g++ command line. To enable the C++23 standard, add the command-line option -std=c++23 to your g++ command line. All of the language standards are available in both the standard compliant variant or with GNU extensions. When mixing objects built with GCC Toolset with those built with the RHEL toolchain (particularly .o or .a files), GCC Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by GCC Toolset are resolved at link time. 4.6.3. Specifics of GCC in GCC Toolset 13 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC . 4.6.4. Specifics of binutils in GCC Toolset 13 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. 
However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils . 4.6.5. Specifics of annobin in GCC Toolset 13 Under some circumstances, due to a synchronization issue between annobin and gcc in GCC Toolset 13, your compilation can fail with an error message that looks similar to the following: To work around the problem, create a symbolic link in the plugin directory from the annobin.so file to the gcc-annobin.so file: Replace architecture with the architecture you use in your system: aarch64 i686 ppc64le s390x x86_64 4.7. GCC Toolset 14 Learn about information specific to GCC Toolset version 14 and the tools contained in this version. 4.7.1. Tools and versions provided by GCC Toolset 14 GCC Toolset 14 provides the following tools and versions: Table 4.6. Tool versions in GCC Toolset 14 Name Version Description GCC 14.2.1 A portable compiler suite with support for C, C++, and Fortran. GDB 14.2 A command-line debugger for programs written in C, C++, and Fortran. binutils 2.41 A collection of binary tools and other utilities to inspect and manipulate object files and binaries. dwz 0.14 A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. annobin 12.70 A build security checking tool. 4.7.2. C++ compatibility in GCC Toolset 14 Important The compatibility information presented here apply only to the GCC from GCC Toolset 14. The GCC compiler in GCC Toolset can use the following C++ standards: C++14 This language standard is available in GCC Toolset 14. Using the C++14 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 6 or later. C++11 This language standard is available in GCC Toolset 14. Using the C++11 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 5 or later. C++98 This language standard is available in GCC Toolset 14. Binaries, shared libraries and objects built using this standard can be freely mixed regardless of being built with GCC from GCC Toolset, Red Hat Developer Toolset, and RHEL 5, 6, 7 and 8. C++17 This language standard is available in GCC Toolset 14. This is the default language standard setting for GCC Toolset 14, with GNU extensions, equivalent to explicitly using option -std=gnu++17 . Using the C++17 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 10 or later. C++20 and C++23 These language standards are available in GCC Toolset 14 only as an experimental, unstable, and unsupported capability. Additionally, compatibility of objects, binary files, and libraries built using this standard cannot be guaranteed. To enable the C++20 standard, add the command-line option -std=c++20 to your g++ command line. To enable the C++23 standard, add the command-line option -std=c++23 to your g++ command line. 
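As an illustration, compiling a single source file with the experimental C++20 standard in GCC Toolset 14 might look like the following sketch; the file name demo.cpp is a placeholder, and the scl invocation assumes the gcc-toolset-14 packages are installed:
# Build one translation unit with the experimental, unsupported C++20 standard.
scl enable gcc-toolset-14 'g++ -std=c++20 -Wall -o demo demo.cpp'
# Substitute -std=c++23 to select the experimental C++23 standard instead.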
All of the language standards are available in both the standard compliant variant or with GNU extensions. When mixing objects built with GCC Toolset with those built with the RHEL toolchain (particularly .o or .a files), GCC Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by GCC Toolset are resolved at link time. 4.7.3. Specifics of GCC in GCC Toolset 14 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC . 4.7.4. Specifics of binutils in GCC Toolset 14 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . 
To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils . 4.7.5. Specifics of annobin in GCC Toolset 14 Under some circumstances, due to a synchronization issue between annobin and gcc in GCC Toolset 14, your compilation can fail with an error message that looks similar to the following: To work around the problem, create a symbolic link in the plugin directory from the annobin.so file to the gcc-annobin.so file: Replace architecture with the architecture you use in your system: aarch64 i686 ppc64le s390x x86_64 4.8. Using the GCC Toolset container image Only the two latest GCC Toolset container images are supported. Container images of earlier GCC Toolset versions are unsupported. The GCC Toolset 13 and GCC Toolset 14 components are available in the GCC Toolset 13 Toolchain and GCC Toolset 14 Toolchain container images, respectively. The GCC Toolset container image is based on the rhel8 base image and is available for all architectures supported by RHEL 8: AMD and Intel 64-bit architectures The 64-bit ARM architecture IBM Power Systems, Little Endian 64-bit IBM Z 4.8.1. GCC Toolset container image contents Tools versions provided in the GCC Toolset 14 container image match the GCC Toolset 14 components versions . The GCC Toolset 14 Toolchain contents The rhel8/gcc-toolset-14-toolchain container image consists of the following components: Component Package gcc gcc-toolset-14-gcc g++ gcc-toolset-14-gcc-c++ gfortran gcc-toolset-14-gcc-gfortran gdb gcc-toolset-14-gdb 4.8.2. Accessing and running the GCC Toolset container image The following section describes how to access and run the GCC Toolset container image. Prerequisites Podman is installed. Procedure Access the Red Hat Container Registry using your Customer Portal credentials: Pull the container image you require by running a relevant command as root: Replace toolset_version with the GCC Toolset version, for example 14 . Note On RHEL 8.1 and later versions, you can set up your system to work with containers as a non-root user. For details, see Setting up rootless containers . Optional: Check that pulling was successful by running a command that lists all container images on your local system: Run a container by launching a bash shell inside a container: The -i option creates an interactive session; without this option the shell opens and instantly exits. The -t option opens a terminal session; without this option you cannot type anything to the shell. Additional resources Building, running, and managing Linux containers on RHEL 8 Understanding root inside and outside a container (Red Hat Blog article) GCC Toolset container entries in the Red Hat Ecosystem Catalog 4.8.3. Example: Using the GCC Toolset 14 Toolchain container image This example shows how to pull and start using the GCC Toolset 14 Toolchain container image. Prerequisites Podman is installed. Procedure Access the Red Hat Container Registry using your Customer Portal credentials: Pull the container image as root: Launch the container image with an interactive shell as root: Run the GCC Toolset tools as expected. For example, to verify the gcc compiler version, run: To list all packages provided in the container, run: 4.9. 
Compiler toolsets RHEL 8 provides the following compiler toolsets as Application Streams: LLVM Toolset provides the LLVM compiler infrastructure framework, the Clang compiler for the C and C++ languages, the LLDB debugger, and related tools for code analysis. Rust Toolset provides the Rust programming language compiler rustc , the cargo build tool and dependency manager, the cargo-vendor plugin, and required libraries. Go Toolset provides the Go programming language tools and libraries. Go is alternatively known as golang . For more details and information about usage, see the compiler toolsets user guides on the Red Hat Developer Tools page. 4.10. The Annobin project The Annobin project is an implementation of the Watermark specification project. The Watermark specification project intends to add markers to Executable and Linkable Format (ELF) objects to determine their properties. The Annobin project consists of the annobin plugin and the annocheck program. The annobin plugin scans the GNU Compiler Collection (GCC) command line, the compilation state, and the compilation process, and generates the ELF notes. The ELF notes record how the binary was built and provide information for the annocheck program to perform security hardening checks. The security hardening checker is part of the annocheck program and is enabled by default. It checks the binary files to determine whether the program was built with necessary security hardening options and compiled correctly. annocheck is able to recursively scan directories, archives, and RPM packages for ELF object files. Note The files must be in ELF format. annocheck does not handle any other binary file types. The following section describes how to: Use the annobin plugin Use the annocheck program Remove redundant annobin notes 4.10.1. Using the annobin plugin The following section describes how to: Enable the annobin plugin Pass options to the annobin plugin 4.10.1.1. Enabling the annobin plugin The following section describes how to enable the annobin plugin via gcc and via clang . Procedure To enable the annobin plugin with gcc , use: If gcc does not find the annobin plugin, use: Replace /path/to/directory/containing/annobin/ with the absolute path to the directory that contains annobin . To find the directory containing the annobin plugin, use: To enable the annobin plugin with clang , use: Replace /path/to/directory/containing/annobin/ with the absolute path to the directory that contains annobin . 4.10.1.2. Passing options to the annobin plugin The following section describes how to pass options to the annobin plugin via gcc and via clang . Procedure To pass options to the annobin plugin with gcc , use: Replace option with the annobin command line arguments and replace file-name with the name of the file. Example To display additional details about what annobin is doing, use: Replace file-name with the name of the file. To pass options to the annobin plugin with clang , use: Replace option with the annobin command line arguments and replace /path/to/directory/containing/annobin/ with the absolute path to the directory containing annobin . Example To display additional details about what annobin is doing, use: Replace file-name with the name of the file. 4.10.2. Using the annocheck program The following section describes how to use annocheck to examine: Files Directories RPM packages annocheck extra tools Note annocheck recursively scans directories, archives, and RPM packages for ELF object files. The files have to be in the ELF format.
annocheck does not handle any other binary file types. 4.10.2.1. Using annocheck to examine files The following section describes how to examine ELF files using annocheck . Procedure To examine a file, use: Replace file-name with the name of a file. Note The files must be in ELF format. annocheck does not handle any other binary file types. annocheck processes static libraries that contain ELF object files. Additional information For more information about annocheck and possible command line options, see the annocheck man page on your system. 4.10.2.2. Using annocheck to examine directories The following section describes how to examine ELF files in a directory using annocheck . Procedure To scan a directory, use: Replace directory-name with the name of a directory. annocheck automatically examines the contents of the directory, its sub-directories, and any archives and RPM packages within the directory. Note annocheck only looks for ELF files. Other file types are ignored. Additional information For more information about annocheck and possible command line options, see the annocheck man page on your system. 4.10.2.3. Using annocheck to examine RPM packages The following section describes how to examine ELF files in an RPM package using annocheck . Procedure To scan an RPM package, use: Replace rpm-package-name with the name of an RPM package. annocheck recursively scans all the ELF files inside the RPM package. Note annocheck only looks for ELF files. Other file types are ignored. To scan an RPM package with provided debug info RPM, use: Replace rpm-package-name with the name of an RPM package, and debuginfo-rpm with the name of a debug info RPM associated with the binary RPM. Additional information For more information about annocheck and possible command line options, see the annocheck man page on your system. 4.10.2.4. Using annocheck extra tools annocheck includes multiple tools for examining binary files. You can enable these tools with the command-line options. The following section describes how to enable the: built-by tool notes tool section-size tool You can enable multiple tools at the same time. Note The hardening checker is enabled by default. 4.10.2.4.1. Enabling the built-by tool You can use the annocheck built-by tool to find the name of the compiler that built the binary file. Procedure To enable the built-by tool, use: Additional information For more information about the built-by tool, see the --help command-line option. 4.10.2.4.2. Enabling the notes tool You can use the annocheck notes tool to display the notes stored inside a binary file created by the annobin plugin. Procedure To enable the notes tool, use: The notes are displayed in a sequence sorted by the address range. Additional information For more information about the notes tool, see the --help command-line option. 4.10.2.4.3. Enabling the section-size tool You can use the annocheck section-size tool to display the size of the named sections. Procedure To enable the section-size tool, use: Replace name with the name of the named section. The output is restricted to specific sections. A cumulative result is produced at the end. Additional information For more information about the section-size tool, see the --help command-line option. 4.10.2.4.4. Hardening checker basics The hardening checker is enabled by default. You can disable the hardening checker with the --disable-hardened command-line option. 4.10.2.4.4.1.
Hardening checker options The annocheck program checks the following options: Lazy binding is disabled using the -z now linker option. The program does not have a stack in an executable region of memory. The relocations for the GOT table are set to read only. No program segment has all three of the read, write and execute permission bits set. There are no relocations against executable code. The runpath information for locating shared libraries at runtime includes only directories rooted at /usr. The program was compiled with annobin notes enabled. The program was compiled with the -fstack-protector-strong option enabled. The program was compiled with -D_FORTIFY_SOURCE=2 . The program was compiled with -D_GLIBCXX_ASSERTIONS . The program was compiled with -fexceptions enabled. The program was compiled with -fstack-clash-protection enabled. The program was compiled at -O2 or higher. The program does not have any relocations held in a writeable section. Dynamic executables have a dynamic segment. Shared libraries were compiled with -fPIC or -fPIE . Dynamic executables were compiled with -fPIE and linked with -pie . If available, the -fcf-protection=full option was used. If available, the -mbranch-protection option was used. If available, the -mstackrealign option was used. 4.10.2.4.4.2. Disabling the hardening checker The following section describes how to disable the hardening checker. Procedure To scan the notes in a file without the hardening checker, use: Replace file-name with the name of a file. 4.10.3. Removing redundant annobin notes Using annobin increases the size of binaries. To reduce the size of the binaries compiled with annobin, you can remove redundant annobin notes. To remove the redundant annobin notes, use the objcopy program, which is a part of the binutils package. Procedure To remove the redundant annobin notes, use: Replace file-name with the name of the file. 4.10.4. Specifics of annobin in GCC Toolset 12 Under some circumstances, due to a synchronization issue between annobin and gcc in GCC Toolset 12, your compilation can fail with an error message that looks similar to the following: To work around the problem, create a symbolic link in the plugin directory from the annobin.so file to the gcc-annobin.so file: Replace architecture with the architecture you use in your system: aarch64 i686 ppc64le s390x x86_64
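As a brief end-to-end sketch of the workflow described above, you can build an object file with the annobin plugin enabled and then inspect the recorded notes and hardening status with annocheck. The file names are placeholders, and running the compiler through scl with gcc-toolset-12 is an assumption for illustration:
# Compile with the annobin plugin so hardening-related build information is recorded as ELF notes.
scl enable gcc-toolset-12 'gcc -fplugin=annobin -O2 -fstack-protector-strong -c example.c -o example.o'
# Display the recorded notes; the default hardening checks also run on the object file.
annocheck --enable-notes example.o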
[ "yum install gcc-toolset- N", "yum list available gcc-toolset- N -\\*", "yum install package_name", "yum install gcc-toolset-13-annobin-annocheck gcc-toolset-13-binutils-devel", "yum remove gcc-toolset- N \\*", "scl enable gcc-toolset- N tool", "scl enable gcc-toolset- N bash", "scl enable gcc-toolset-9 'gcc -lsomelib objfile.o'", "scl enable gcc-toolset-9 'gcc objfile.o -lsomelib'", "scl enable gcc-toolset-9 'ld -lsomelib objfile.o'", "scl enable gcc-toolset-9 'ld objfile.o -lsomelib'", "scl enable gcc-toolset-10 'gcc -lsomelib objfile.o'", "scl enable gcc-toolset-10 'gcc objfile.o -lsomelib'", "scl enable gcc-toolset-10 'ld -lsomelib objfile.o'", "scl enable gcc-toolset-10 'ld objfile.o -lsomelib'", "scl enable gcc-toolset-11 'gcc -lsomelib objfile.o'", "scl enable gcc-toolset-11 'gcc objfile.o -lsomelib'", "scl enable gcc-toolset-11 'ld -lsomelib objfile.o'", "scl enable gcc-toolset-11 'ld objfile.o -lsomelib'", "scl enable gcc-toolset-12 'gcc -lsomelib objfile.o'", "scl enable gcc-toolset-12 'gcc objfile.o -lsomelib'", "scl enable gcc-toolset-12 'ld -lsomelib objfile.o'", "scl enable gcc-toolset-12 'ld objfile.o -lsomelib'", "cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory", "cd /opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin ln -s annobin.so gcc-annobin.so", "scl enable gcc-toolset-13 'gcc -lsomelib objfile.o'", "scl enable gcc-toolset-13 'gcc objfile.o -lsomelib'", "scl enable gcc-toolset-13 'ld -lsomelib objfile.o'", "scl enable gcc-toolset-13 'ld objfile.o -lsomelib'", "cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-13/root/usr/lib/gcc/ architecture -linux-gnu/13/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory", "cd /opt/rh/gcc-toolset-13/root/usr/lib/gcc/ architecture -linux-gnu/13/plugin ln -s annobin.so gcc-annobin.so", "scl enable gcc-toolset-14 'gcc -lsomelib objfile.o'", "scl enable gcc-toolset-14 'gcc objfile.o -lsomelib'", "scl enable gcc-toolset-14 'ld -lsomelib objfile.o'", "scl enable gcc-toolset-14 'ld objfile.o -lsomelib'", "cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-14/root/usr/lib/gcc/ architecture -linux-gnu/14/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory", "cd /opt/rh/gcc-toolset-14/root/usr/lib/gcc/ architecture -linux-gnu/14/plugin ln -s annobin.so gcc-annobin.so", "podman login registry.redhat.io Username: username Password: ********", "podman pull registry.redhat.io/rhel8/gcc-toolset- <toolset_version> -toolchain", "podman images", "podman run -it image_name /bin/bash", "podman login registry.redhat.io Username: username Password: ********", "podman pull registry.redhat.io/rhel8/gcc-toolset-14-toolchain", "podman run -it registry.redhat.io/rhel8/gcc-toolset-14-toolchain /bin/bash", "bash-4.4USD gcc -v gcc version 14.2.1 20240801 (Red Hat 14.2.1-1) (GCC)", "bash-4.4USD rpm -qa", "gcc -fplugin=annobin", "gcc -iplugindir= /path/to/directory/containing/annobin/", "gcc --print-file-name=plugin", "clang -fplugin= /path/to/directory/containing/annobin/", "gcc -fplugin=annobin -fplugin-arg-annobin- option file-name", "gcc -fplugin=annobin -fplugin-arg-annobin-verbose file-name", "clang -fplugin= /path/to/directory/containing/annobin/ -Xclang -plugin-arg-annobin -Xclang option file-name", "clang -fplugin=/usr/lib64/clang/10/lib/annobin.so -Xclang 
-plugin-arg-annobin -Xclang verbose file-name", "annocheck file-name", "annocheck directory-name", "annocheck rpm-package-name", "annocheck rpm-package-name --debug-rpm debuginfo-rpm", "annocheck --enable-built-by", "annocheck --enable-notes", "annocheck --section-size= name", "annocheck --enable-notes --disable-hardened file-name", "objcopy --merge-notes file-name", "cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory", "cd /opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin ln -s annobin.so gcc-annobin.so" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/developing_c_and_cpp_applications_in_rhel_8/additional-toolsets-for-development_developing-applications
Chapter 8. OpenStack Cloud Controller Manager reference guide
Chapter 8. OpenStack Cloud Controller Manager reference guide 8.1. The OpenStack Cloud Controller Manager Beginning with OpenShift Container Platform 4.12, clusters that run on Red Hat OpenStack Platform (RHOSP) were switched from the legacy OpenStack cloud provider to the external OpenStack Cloud Controller Manager (CCM). This change follows the move in Kubernetes from in-tree, legacy cloud providers to external cloud providers that are implemented by using the Cloud Controller Manager . To preserve user-defined configurations for the legacy cloud provider, existing configurations are mapped to new ones as part of the migration process. The migration process searches for a configuration called cloud-provider-config in the openshift-config namespace. Note The config map name cloud-provider-config is not statically configured. It is derived from the spec.cloudConfig.name value in the infrastructure/cluster CRD. Found configurations are synchronized to the cloud-conf config map in the openshift-cloud-controller-manager namespace. As part of this synchronization, the OpenStack CCM Operator alters the new config map such that its properties are compatible with the external cloud provider. The file is changed in the following ways: The [Global] secret-name , [Global] secret-namespace , and [Global] kubeconfig-path options are removed. They do not apply to the external cloud provider. The [Global] use-clouds , [Global] clouds-file , and [Global] cloud options are added. The entire [BlockStorage] section is removed. External cloud providers no longer perform storage operations. Block storage configuration is managed by the Cinder CSI driver. Additionally, the CCM Operator enforces a number of default options. Values for these options are always overridden as follows: [Global] use-clouds = true clouds-file = /etc/openstack/secret/clouds.yaml cloud = openstack ... [LoadBalancer] enabled = true The clouds-file value, /etc/openstack/secret/clouds.yaml , is mapped to the openstack-cloud-credentials config in the openshift-cloud-controller-manager namespace. You can modify the RHOSP cloud in this file as you do any other clouds.yaml file. 8.2. The OpenStack Cloud Controller Manager (CCM) config map An OpenStack CCM config map defines how your cluster interacts with your RHOSP cloud. By default, this configuration is stored under the cloud.conf key in the cloud-conf config map in the openshift-cloud-controller-manager namespace. Important The cloud-conf config map is generated from the cloud-provider-config config map in the openshift-config namespace. To change the settings that are described by the cloud-conf config map, modify the cloud-provider-config config map. As part of this synchronization, the CCM Operator overrides some options. For more information, see "The RHOSP Cloud Controller Manager". For example: An example cloud-conf config map apiVersion: v1 data: cloud.conf: | [Global] 1 secret-name = openstack-credentials secret-namespace = kube-system region = regionOne [LoadBalancer] enabled = True kind: ConfigMap metadata: creationTimestamp: "2022-12-20T17:01:08Z" name: cloud-conf namespace: openshift-cloud-controller-manager resourceVersion: "2519" uid: cbbeedaf-41ed-41c2-9f37-4885732d3677 1 Set global options by using a clouds.yaml file rather than modifying the config map. The following options are present in the config map. Except when indicated otherwise, they are mandatory for clusters that run on RHOSP. 8.2.1.
Load balancer options CCM supports several load balancer options for deployments that use Octavia. Note Neutron-LBaaS support is deprecated. Option Description enabled Whether or not to enable the LoadBalancer type of services integration. The default value is true . floating-network-id Optional. The external network used to create floating IP addresses for load balancer virtual IP addresses (VIPs). If there are multiple external networks in the cloud, this option must be set or the user must specify loadbalancer.openstack.org/floating-network-id in the service annotation. floating-subnet-id Optional. The external network subnet used to create floating IP addresses for the load balancer VIP. Can be overridden by the service annotation loadbalancer.openstack.org/floating-subnet-id . floating-subnet Optional. A name pattern (glob or regular expression if starting with ~ ) for the external network subnet used to create floating IP addresses for the load balancer VIP. Can be overridden by the service annotation loadbalancer.openstack.org/floating-subnet . If multiple subnets match the pattern, the first one with available IP addresses is used. floating-subnet-tags Optional. Tags for the external network subnet used to create floating IP addresses for the load balancer VIP. Can be overridden by the service annotation loadbalancer.openstack.org/floating-subnet-tags . If multiple subnets match these tags, the first one with available IP addresses is used. If the RHOSP network is configured with sharing disabled, for example, with the --no-share flag used during creation, this option is unsupported. Set the network to share to use this option. lb-method The load balancing algorithm used to create the load balancer pool. For the Amphora provider the value can be ROUND_ROBIN , LEAST_CONNECTIONS , or SOURCE_IP . The default value is ROUND_ROBIN . For the OVN provider, only the SOURCE_IP_PORT algorithm is supported. For the Amphora provider, if using the LEAST_CONNECTIONS or SOURCE_IP methods, configure the create-monitor option as true in the cloud-provider-config config map on the openshift-config namespace and ETP:Local on the load-balancer type service to allow balancing algorithm enforcement in the client to service endpoint connections. lb-provider Optional. Used to specify the provider of the load balancer, for example, amphora or octavia . Only the Amphora and Octavia providers are supported. lb-version Optional. The load balancer API version. Only "v2" is supported. subnet-id The ID of the Networking service subnet on which load balancer VIPs are created. For dual stack deployments, leave this option unset. The OpenStack cloud provider automatically selects which subnet to use for a load balancer. network-id The ID of the Networking service network on which load balancer VIPs are created. Unnecessary if subnet-id is set. If this property is not set, the network is automatically selected based on the network that cluster nodes use. create-monitor Whether or not to create a health monitor for the service load balancer. A health monitor is required for services that declare externalTrafficPolicy: Local . The default value is false . This option is unsupported if you use RHOSP earlier than version 17 with the ovn provider. monitor-delay The interval in seconds by which probes are sent to members of the load balancer. The default value is 5 . monitor-max-retries The number of successful checks that are required to change the operating status of a load balancer member to ONLINE . 
The valid range is 1 to 10 , and the default value is 1 . monitor-timeout The time in seconds that a monitor waits to connect to the back end before it times out. The default value is 3 . internal-lb Whether or not to create an internal load balancer without floating IP addresses. The default value is false . LoadBalancerClass "ClassName" This is a config section that comprises a set of options: floating-network-id floating-subnet-id floating-subnet floating-subnet-tags network-id subnet-id The behavior of these options is the same as that of the identically named options in the load balancer section of the CCM config file. You can set the ClassName value by specifying the service annotation loadbalancer.openstack.org/class . max-shared-lb The maximum number of services that can share a load balancer. The default value is 2 . 8.2.2. Options that the Operator overrides The CCM Operator overrides the following options, which you might recognize from configuring RHOSP. Do not configure them yourself. They are included in this document for informational purposes only. Option Description auth-url The RHOSP Identity service URL. For example, http://128.110.154.166/identity . os-endpoint-type The type of endpoint to use from the service catalog. username The Identity service user name. password The Identity service user password. domain-id The Identity service user domain ID. domain-name The Identity service user domain name. tenant-id The Identity service project ID. Leave this option unset if you are using Identity service application credentials. In version 3 of the Identity API, which changed the identifier tenant to project , the value of tenant-id is automatically mapped to the project construct in the API. tenant-name The Identity service project name. tenant-domain-id The Identity service project domain ID. tenant-domain-name The Identity service project domain name. user-domain-id The Identity service user domain ID. user-domain-name The Identity service user domain name. use-clouds Whether or not to fetch authorization credentials from a clouds.yaml file. Options set in this section are prioritized over values read from the clouds.yaml file. CCM searches for the file in the following places: The value of the clouds-file option. A file path stored in the environment variable OS_CLIENT_CONFIG_FILE . The directory pkg/openstack . The directory ~/.config/openstack . The directory /etc/openstack . clouds-file The file path of a clouds.yaml file. It is used if the use-clouds option is set to true . cloud The named cloud in the clouds.yaml file that you want to use. It is used if the use-clouds option is set to true .
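For illustration, a minimal clouds.yaml sketch that matches the defaults the Operator enforces (a cloud named openstack) is shown below; every authentication value is a placeholder rather than output from a real environment:
clouds:
  openstack:
    auth:
      # Placeholder values; replace them with the credentials for your RHOSP cloud.
      auth_url: "https://keystone.example.com:13000"
      username: "placeholder-user"
      password: "placeholder-password"
      project_name: "placeholder-project"
      user_domain_name: "Default"
      project_domain_name: "Default"
    region_name: "regionOne"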
[ "[Global] use-clouds = true clouds-file = /etc/openstack/secret/clouds.yaml cloud = openstack [LoadBalancer] enabled = true", "apiVersion: v1 data: cloud.conf: | [Global] 1 secret-name = openstack-credentials secret-namespace = kube-system region = regionOne [LoadBalancer] enabled = True kind: ConfigMap metadata: creationTimestamp: \"2022-12-20T17:01:08Z\" name: cloud-conf namespace: openshift-cloud-controller-manager resourceVersion: \"2519\" uid: cbbeedaf-41ed-41c2-9f37-4885732d3677" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_openstack/installing-openstack-cloud-config-reference
Chapter 16. Red Hat Build of OptaPlanner on Red Hat build of Quarkus: an employee scheduler quick start guide
Chapter 16. Red Hat Build of OptaPlanner on Red Hat build of Quarkus: an employee scheduler quick start guide The employee scheduler quick start application assigns employees to shifts on various positions in an organization. For example, you can use the application to distribute shifts in a hospital between nurses, guard duty shifts across a number of locations, or shifts on an assembly line between workers. Optimal employee scheduling must take a number of variables into account. For example, different skills can be required for shifts in different positions. Also, some employees might be unavailable for some time slots or might prefer a particular time slot. Moreover, an employee can have a contract that limits the number of hours that the employee can work in a single time period. The Red Hat Build of OptaPlanner rules for this starter application use both hard and soft constraints. During an optimization, the Planner engine may not violate hard constraints, for example, that an employee cannot be scheduled while unavailable (out sick), or that an employee cannot work two spots in a single shift. The Planner engine tries to adhere to soft constraints, such as an employee's preference to not work a specific shift, but can violate them if the optimal solution requires it. Prerequisites OpenJDK 11 or later is installed. Red Hat build of OpenJDK is available from the Software Downloads page in the Red Hat Customer Portal (login required). Apache Maven 3.8 or higher is installed. Maven is available from the Apache Maven Project website. An IDE, such as IntelliJ IDEA, VSCode, or Eclipse, is available. 16.1. Downloading and running the OptaPlanner employee scheduler Download the OptaPlanner employee scheduler quick start archive, start it in Quarkus development mode, and view the application in a browser. Quarkus development mode enables you to make changes and update your application while it is running. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Red Hat Build of OptaPlanner Version: 8.38 Download Red Hat Build of OptaPlanner 8.38 Quick Starts . Extract the rhbop-8.38.0-optaplanner-quickstarts-sources.zip file. Navigate to the org.optaplanner.optaplanner-quickstarts-8.38.0.Final-redhat-00004/use-cases/employee-scheduling directory. Enter the following command to start the OptaPlanner employee scheduler in development mode: USD mvn quarkus:dev To view the OptaPlanner employee scheduler, enter the following URL in a web browser. To run the OptaPlanner employee scheduler, click Solve . Make changes to the source code, then press the F5 key to refresh your browser. Notice that the changes that you made are now available. 16.2. Package and run the OptaPlanner employee scheduler When you have completed development work on the OptaPlanner employee scheduler in quarkus:dev mode, run the application as a conventional jar file. Prerequisites You have downloaded the OptaPlanner employee scheduling quick start. Procedure Navigate to the /use-cases/employee-scheduling directory. To compile the OptaPlanner employee scheduler, enter the following command: USD mvn package To run the compiled OptaPlanner employee scheduler, enter the following command: USD java -jar ./target/quarkus-app/quarkus-run.jar Note To run the application on port 8081, add -Dquarkus.http.port=8081 to the preceding command. To start the OptaPlanner employee scheduler, enter the following URL in a web browser.
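For example, to run the packaged application on port 8081 instead of the default, place the system property before the -jar option:
java -Dquarkus.http.port=8081 -jar ./target/quarkus-app/quarkus-run.jar
If you change the port this way, adjust the URL accordingly, for example http://localhost:8081/ instead of the default http://localhost:8080/ .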
[ "mvn quarkus:dev", "http://localhost:8080/", "mvn package", "java -jar ./target/quarkus-app/quarkus-run.jar", "http://localhost:8080/" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_optaplanner/8.38/html/developing_solvers_with_red_hat_build_of_optaplanner/assembly-optaplanner-employee-schedule_optaplanner-quickstarts
Chapter 4. Creating a Red Hat High-Availability cluster with Pacemaker
Chapter 4. Creating a Red Hat High-Availability cluster with Pacemaker Create a Red Hat High Availability two-node cluster using the pcs command-line interface with the following procedure. Configuring the cluster in this example requires that your system include the following components: 2 nodes, which will be used to create the cluster. In this example, the nodes used are z1.example.com and z2.example.com . Network switches for the private network. We recommend but do not require a private network for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches. A fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of zapc.example.com . Note You must ensure that your configuration conforms to Red Hat's support policies. For full information about Red Hat's support policies, requirements, and limitations for RHEL High Availability clusters, see Support Policies for RHEL High Availability Clusters . 4.1. Installing cluster software Install the cluster software and configure your system for cluster creation with the following procedure. Procedure On each node in the cluster, enable the repository for high availability that corresponds to your system architecture. For example, to enable the high availability repository for an x86_64 system, you can enter the following subscription-manager command: On each node in the cluster, install the Red Hat High Availability Add-On software packages along with all available fence agents from the High Availability channel. Alternatively, you can install the Red Hat High Availability Add-On software packages along with only the fence agent that you require with the following command. The following command displays a list of the available fence agents. Warning After you install the Red Hat High Availability Add-On packages, you should ensure that your software update preferences are set so that nothing is installed automatically. Installation on a running cluster can cause unexpected behaviors. For more information, see Recommended Practices for Applying Software Updates to a RHEL High Availability or Resilient Storage Cluster . If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On. Note You can determine whether the firewalld daemon is installed on your system with the rpm -q firewalld command. If it is installed, you can determine whether it is running with the firewall-cmd --state command. Note The ideal firewall configuration for cluster components depends on the local environment, where you may need to take into account such considerations as whether the nodes have multiple network interfaces or whether off-host firewalling is present. The example here, which opens the ports that are generally required by a Pacemaker cluster, should be modified to suit local conditions. Enabling ports for the High Availability Add-On shows the ports to enable for the Red Hat High Availability Add-On and provides an explanation for what each port is used for. In order to use pcs to configure the cluster and communicate among the nodes, you must set a password on each node for the user ID hacluster , which is the pcs administration account. It is recommended that the password for user hacluster be the same on each node. Before the cluster can be configured, the pcsd daemon must be started and enabled to start up on boot on each node. 
This daemon works with the pcs command to manage configuration across the nodes in the cluster. On each node in the cluster, execute the following commands to start the pcsd service and to enable pcsd at system start. 4.2. Installing the pcp-zeroconf package (recommended) When you set up your cluster, it is recommended that you install the pcp-zeroconf package for the Performance Co-Pilot (PCP) tool. PCP is Red Hat's recommended resource-monitoring tool for RHEL systems. Installing the pcp-zeroconf package allows you to have PCP running and collecting performance-monitoring data for the benefit of investigations into fencing, resource failures, and other events that disrupt the cluster. Note Cluster deployments where PCP is enabled will need sufficient space available for PCP's captured data on the file system that contains /var/log/pcp/ . Typical space usage by PCP varies across deployments, but 10Gb is usually sufficient when using the pcp-zeroconf default settings, and some environments may require less. Monitoring usage in this directory over a 14-day period of typical activity can provide a more accurate usage expectation. Procedure To install the pcp-zeroconf package, run the following command. This package enables pmcd and sets up data capture at a 10-second interval. For information about reviewing PCP data, see the Red Hat Knowledgebase solution Why did a RHEL High Availability cluster node reboot - and how can I prevent it from happening again? . 4.3. Creating a high availability cluster Create a Red Hat High Availability Add-On cluster with the following procedure. This example procedure creates a cluster that consists of the nodes z1.example.com and z2.example.com . Procedure Authenticate the pcs user hacluster for each node in the cluster on the node from which you will be running pcs . The following command authenticates user hacluster on z1.example.com for both of the nodes in a two-node cluster that will consist of z1.example.com and z2.example.com . Execute the following command from z1.example.com to create the two-node cluster my_cluster that consists of nodes z1.example.com and z2.example.com . This will propagate the cluster configuration files to both nodes in the cluster. This command includes the --start option, which will start the cluster services on both nodes in the cluster. Enable the cluster services to run on each node in the cluster when the node is booted. Note For your particular environment, you may choose to leave the cluster services disabled by skipping this step. This allows you to ensure that if a node goes down, any issues with your cluster or your resources are resolved before the node rejoins the cluster. If you leave the cluster services disabled, you will need to manually start the services when you reboot a node by executing the pcs cluster start command on that node. You can display the current status of the cluster with the pcs cluster status command. Because there may be a slight delay before the cluster is up and running when you start the cluster services with the --start option of the pcs cluster setup command, you should ensure that the cluster is up and running before performing any subsequent actions on the cluster and its configuration. 4.4. Creating a high availability cluster with multiple links You can use the pcs cluster setup command to create a Red Hat High Availability cluster with multiple links by specifying all of the links for each node. 
The format for the basic command to create a two-node cluster with two links is as follows. For the full syntax of this command, see the pcs (8) man page. When creating a cluster with multiple links, you should take the following into account. The order of the addr= address parameters is important. The first address specified after a node name is for link0 , the second one for link1 , and so forth. By default, if link_priority is not specified for a link, the link's priority is equal to the link number. The link priorities are then 0, 1, 2, 3, and so forth, according to the order specified, with 0 being the highest link priority. The default link mode is passive , meaning the active link with the lowest-numbered link priority is used. With the default values of link_mode and link_priority , the first link specified will be used as the highest priority link, and if that link fails, the next link specified will be used. It is possible to specify up to eight links using the knet transport protocol, which is the default transport protocol. All nodes must have the same number of addr= parameters. It is possible to add, remove, and change links in an existing cluster using the pcs cluster link add , the pcs cluster link remove , the pcs cluster link delete , and the pcs cluster link update commands. As with single-link clusters, do not mix IPv4 and IPv6 addresses in one link, although you can have one link running IPv4 and the other running IPv6. As with single-link clusters, you can specify addresses as IP addresses or as names as long as the names resolve to IPv4 or IPv6 addresses for which IPv4 and IPv6 addresses are not mixed in one link. The following example creates a two-node cluster named my_twolink_cluster with two nodes, rh80-node1 and rh80-node2 . rh80-node1 has two interfaces, IP address 192.168.122.201 as link0 and 192.168.123.201 as link1 . rh80-node2 has two interfaces, IP address 192.168.122.202 as link0 and 192.168.123.202 as link1 . To set a link priority to a different value than the default value, which is the link number, you can set the link priority with the link_priority option of the pcs cluster setup command. Each of the following two example commands creates a two-node cluster with two interfaces where the first link, link 0, has a link priority of 1 and the second link, link 1, has a link priority of 0. Link 1 will be used first and link 0 will serve as the failover link. Since link mode is not specified, it defaults to passive. These two commands are equivalent. If you do not specify a link number following the link keyword, the pcs interface automatically adds a link number, starting with the lowest unused link number. You can set the link mode to a different value than the default value of passive with the link_mode option of the pcs cluster setup command, as in the following example. The following example sets both the link mode and the link priority. For information about adding nodes to an existing cluster with multiple links, see Adding a node to a cluster with multiple links . For information about changing the links in an existing cluster with multiple links, see Adding and modifying links in an existing cluster . 4.5. Configuring fencing You must configure a fencing device for each node in the cluster. For information about the fence configuration commands and options, see Configuring fencing in a Red Hat High Availability cluster .
For general information about fencing and its importance in a Red Hat High Availability cluster, see the Red Hat Knowledgebase solution Fencing in a Red Hat High Availability Cluster . Note When configuring a fencing device, attention should be given to whether that device shares power with any nodes or devices in the cluster. If a node and its fence device do share power, then the cluster may be at risk of being unable to fence that node if the power to it and its fence device should be lost. Such a cluster should either have redundant power supplies for fence devices and nodes, or redundant fence devices that do not share power. Alternative methods of fencing such as SBD or storage fencing may also bring redundancy in the event of isolated power losses. Procedure This example uses the APC power switch with a host name of zapc.example.com to fence the nodes, and it uses the fence_apc_snmp fencing agent. Because both nodes will be fenced by the same fencing agent, you can configure both fencing devices as a single resource, using the pcmk_host_map option. You create a fencing device by configuring the device as a stonith resource with the pcs stonith create command. The following command configures a stonith resource named myapc that uses the fence_apc_snmp fencing agent for nodes z1.example.com and z2.example.com . The pcmk_host_map option maps z1.example.com to port 1, and z2.example.com to port 2. The login value and password for the APC device are both apc . By default, this device will use a monitor interval of sixty seconds for each node. Note that you can use an IP address when specifying the host name for the nodes. The following command displays the parameters of an existing fencing device. After configuring your fence device, you should test the device. For information about testing a fence device, see Testing a fence device . Note Do not test your fence device by disabling the network interface, as this will not properly test fencing. Note Once fencing is configured and a cluster has been started, a network restart will trigger fencing for the node which restarts the network even when the timeout is not exceeded. For this reason, do not restart the network service while the cluster service is running because it will trigger unintentional fencing on the node. 4.6. Backing up and restoring a cluster configuration The following commands back up a cluster configuration in a tar archive and restore the cluster configuration files on all nodes from the backup. Procedure Use the following command to back up the cluster configuration in a tar archive. If you do not specify a file name, the standard output will be used. Note The pcs config backup command backs up only the cluster configuration itself as configured in the CIB; the configuration of resource daemons is out of the scope of this command. For example if you have configured an Apache resource in the cluster, the resource settings (which are in the CIB) will be backed up, while the Apache daemon settings (as set in`/etc/httpd`) and the files it serves will not be backed up. Similarly, if there is a database resource configured in the cluster, the database itself will not be backed up, while the database resource configuration (CIB) will be. Use the following command to restore the cluster configuration files on all cluster nodes from the backup. Specifying the --local option restores the cluster configuration files only on the node from which you run this command. 
If you do not specify a file name, the standard input will be used. 4.7. Enabling ports for the High Availability Add-On The ideal firewall configuration for cluster components depends on the local environment, where you may need to take into account such considerations as whether the nodes have multiple network interfaces or whether off-host firewalling is present. If you are running the firewalld daemon, execute the following commands to enable the ports that are required by the Red Hat High Availability Add-On. You may need to modify which ports are open to suit local conditions. Note You can determine whether the firewalld daemon is installed on your system with the rpm -q firewalld command. If the firewalld daemon is installed, you can determine whether it is running with the firewall-cmd --state command. The following table shows the ports to enable for the Red Hat High Availability Add-On and provides an explanation for what the port is used for. Table 4.1. Ports to Enable for High Availability Add-On Port When Required TCP 2224 Default pcsd port required on all nodes (needed by the pcsd Web UI and required for node-to-node communication). You can configure the pcsd port by means of the PCSD_PORT parameter in the /etc/sysconfig/pcsd file. It is crucial to open port 2224 in such a way that pcs from any node can talk to all nodes in the cluster, including itself. When using the Booth cluster ticket manager or a quorum device you must open port 2224 on all related hosts, such as Booth arbitrators or the quorum device host. TCP 3121 Required on all nodes if the cluster has any Pacemaker Remote nodes Pacemaker's pacemaker-based daemon on the full cluster nodes will contact the pacemaker_remoted daemon on Pacemaker Remote nodes at port 3121. If a separate interface is used for cluster communication, the port only needs to be open on that interface. At a minimum, the port should open on Pacemaker Remote nodes to full cluster nodes. Because users may convert a host between a full node and a remote node, or run a remote node inside a container using the host's network, it can be useful to open the port to all nodes. It is not necessary to open the port to any hosts other than nodes. TCP 5403 Required on the quorum device host when using a quorum device with corosync-qnetd . The default value can be changed with the -p option of the corosync-qnetd command. UDP 5404-5412 Required on corosync nodes to facilitate communication between nodes. It is crucial to open ports 5404-5412 in such a way that corosync from any node can talk to all nodes in the cluster, including itself. TCP 21064 Required on all nodes if the cluster contains any resources requiring DLM (such as GFS2 ). TCP 9929, UDP 9929 Required to be open on all cluster nodes and Booth arbitrator nodes to connections from any of those same nodes when the Booth ticket manager is used to establish a multi-site cluster.
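If your local policy requires opening individual ports instead of the predefined high-availability firewalld service, for example only the corosync-qnetd port on a dedicated quorum device host, the equivalent firewall-cmd calls are sketched below; adjust the port list to your own topology:
# Open a single required port in the permanent configuration, then apply it to the running firewall.
firewall-cmd --permanent --add-port=5403/tcp
firewall-cmd --reload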
[ "subscription-manager repos --enable=rhel-9-for-x86_64-highavailability-rpms", "dnf install pcs pacemaker fence-agents-all", "dnf install pcs pacemaker fence-agents- model", "rpm -q -a | grep fence fence-agents-rhevm-4.0.2-3.el7.x86_64 fence-agents-ilo-mp-4.0.2-3.el7.x86_64 fence-agents-ipmilan-4.0.2-3.el7.x86_64", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability", "passwd hacluster Changing password for user hacluster. New password: Retype new password: passwd: all authentication tokens updated successfully.", "systemctl start pcsd.service systemctl enable pcsd.service", "dnf install pcp-zeroconf", "pcs host auth z1.example.com z2.example.com Username: hacluster Password: z1.example.com: Authorized z2.example.com: Authorized", "pcs cluster setup my_cluster --start z1.example.com z2.example.com", "pcs cluster enable --all", "pcs cluster status Cluster Status: Stack: corosync Current DC: z2.example.com (version 2.0.0-10.el8-b67d8d0de9) - partition with quorum Last updated: Thu Oct 11 16:11:18 2018 Last change: Thu Oct 11 16:11:00 2018 by hacluster via crmd on z2.example.com 2 Nodes configured 0 Resources configured", "pcs cluster setup pass:quotes[ cluster_name ] pass:quotes[ node1_name ] addr=pass:quotes[ node1_link0_address ] addr=pass:quotes[ node1_link1_address ] pass:quotes[ node2_name ] addr=pass:quotes[ node2_link0_address ] addr=pass:quotes[ node2_link1_address ]", "pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202", "pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link link_priority=1 link link_priority=0 pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link linknumber=1 link_priority=0 link link_priority=1", "pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link_mode=active", "pcs cluster setup my_twolink_cluster rh80-node1 addr=192.168.122.201 addr=192.168.123.201 rh80-node2 addr=192.168.122.202 addr=192.168.123.202 transport knet link_mode=active link link_priority=1 link link_priority=0", "pcs stonith create myapc fence_apc_snmp ipaddr=\"zapc.example.com\" pcmk_host_map=\"z1.example.com:1;z2.example.com:2\" login=\"apc\" passwd=\"apc\"", "pcs stonith config myapc Resource: myapc (class=stonith type=fence_apc_snmp) Attributes: ipaddr=zapc.example.com pcmk_host_map=z1.example.com:1;z2.example.com:2 login=apc passwd=apc Operations: monitor interval=60s (myapc-monitor-interval-60s)", "pcs config backup filename", "pcs config restore [--local] [ filename ]", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_high_availability_clusters/assembly_creating-high-availability-cluster-configuring-and-managing-high-availability-clusters
Chapter 2. Avro Deserialize Action
Chapter 2. Avro Deserialize Action Deserialize payload to Avro 2.1. Configuration Options The following table summarizes the configuration options available for the avro-deserialize-action Kamelet: Property Name Description Type Default Example schema * Schema The Avro schema to use during serialization (as single-line, using JSON format) string "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" validate Validate Indicates if the content must be validated against the schema boolean true Note Fields marked with an asterisk (*) are mandatory. 2.2. Dependencies At runtime, the avro-deserialize-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:kamelet camel:core camel:jackson-avro 2.3. Usage This section describes how you can use the avro-deserialize-action . 2.3.1. Knative Action You can use the avro-deserialize-action Kamelet as an intermediate step in a Knative binding. avro-deserialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: avro-deserialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{"first":"Ada","last":"Lovelace"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-serialize-action properties: schema: "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-deserialize-action properties: schema: "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-serialize-action sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 2.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 2.3.1.2. Procedure for using the cluster CLI Save the avro-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f avro-deserialize-action-binding.yaml 2.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name avro-deserialize-action-binding timer-source?message='{"first":"Ada","last":"Lovelace"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' --step avro-deserialize-action -p step-2.schema='{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' --step json-serialize-action channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 2.3.2. Kafka Action You can use the avro-deserialize-action Kamelet as an intermediate step in a Kafka binding. 
avro-deserialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: avro-deserialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{"first":"Ada","last":"Lovelace"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-serialize-action properties: schema: "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-deserialize-action properties: schema: "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-serialize-action sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 2.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 2.3.2.2. Procedure for using the cluster CLI Save the avro-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f avro-deserialize-action-binding.yaml 2.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name avro-deserialize-action-binding timer-source?message='{"first":"Ada","last":"Lovelace"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' --step avro-deserialize-action -p step-2.schema='{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' --step json-serialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 2.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/avro-deserialize-action.kamelet.yaml
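After you apply either binding, you can verify that the KameletBinding was created and that the corresponding integration is running. The following commands are a minimal sketch; the binding name matches the examples above, and the integration that Camel K derives from the binding is assumed to have the same name.
oc get kameletbinding avro-deserialize-action-binding
oc get integration avro-deserialize-action-binding
kamel logs avro-deserialize-action-binding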
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: avro-deserialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{\"first\":\"Ada\",\"last\":\"Lovelace\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-serialize-action properties: schema: \"{\\\"type\\\": \\\"record\\\", \\\"namespace\\\": \\\"com.example\\\", \\\"name\\\": \\\"FullName\\\", \\\"fields\\\": [{\\\"name\\\": \\\"first\\\", \\\"type\\\": \\\"string\\\"},{\\\"name\\\": \\\"last\\\", \\\"type\\\": \\\"string\\\"}]}\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-deserialize-action properties: schema: \"{\\\"type\\\": \\\"record\\\", \\\"namespace\\\": \\\"com.example\\\", \\\"name\\\": \\\"FullName\\\", \\\"fields\\\": [{\\\"name\\\": \\\"first\\\", \\\"type\\\": \\\"string\\\"},{\\\"name\\\": \\\"last\\\", \\\"type\\\": \\\"string\\\"}]}\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-serialize-action sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f avro-deserialize-action-binding.yaml", "kamel bind --name avro-deserialize-action-binding timer-source?message='{\"first\":\"Ada\",\"last\":\"Lovelace\"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}' --step avro-deserialize-action -p step-2.schema='{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}' --step json-serialize-action channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: avro-deserialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{\"first\":\"Ada\",\"last\":\"Lovelace\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-serialize-action properties: schema: \"{\\\"type\\\": \\\"record\\\", \\\"namespace\\\": \\\"com.example\\\", \\\"name\\\": \\\"FullName\\\", \\\"fields\\\": [{\\\"name\\\": \\\"first\\\", \\\"type\\\": \\\"string\\\"},{\\\"name\\\": \\\"last\\\", \\\"type\\\": \\\"string\\\"}]}\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-deserialize-action properties: schema: \"{\\\"type\\\": \\\"record\\\", \\\"namespace\\\": \\\"com.example\\\", \\\"name\\\": \\\"FullName\\\", \\\"fields\\\": [{\\\"name\\\": \\\"first\\\", \\\"type\\\": \\\"string\\\"},{\\\"name\\\": \\\"last\\\", \\\"type\\\": \\\"string\\\"}]}\" - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-serialize-action sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f avro-deserialize-action-binding.yaml", "kamel bind --name avro-deserialize-action-binding timer-source?message='{\"first\":\"Ada\",\"last\":\"Lovelace\"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": 
\"string\"},{\"name\": \"last\", \"type\": \"string\"}]}' --step avro-deserialize-action -p step-2.schema='{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}' --step json-serialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/avro-deserialize-action
Chapter 1. Understanding authentication at runtime
Chapter 1. Understanding authentication at runtime When building images, you might need to define authentication in the following scenarios: Authenticating to a container registry Pulling source code from Git The authentication is done through the definition of secrets in which the required sensitive data is stored. 1.1. Build secret annotation You can add an annotation build.shipwright.io/referenced.secret: "true" to a build secret. Based on this annotation, the build controller takes a reconcile action when an event, such as create, update, or delete, is triggered for the build secret. The following example shows the usage of an annotation with a secret: apiVersion: v1 data: .dockerconfigjson: <pull_secret> 1 kind: Secret metadata: annotations: build.shipwright.io/referenced.secret: "true" 2 name: secret-docker type: kubernetes.io/dockerconfigjson 1 Base64-encoded pull secret. 2 The value of the build.shipwright.io/referenced.secret annotation is set to true . This annotation filters out secrets that are not referenced in a build instance. For example, if a secret does not have this annotation, the build controller does not reconcile, even if the event is triggered for the secret. Reconciling when events are triggered allows the build controller to re-trigger validations on the build configuration, helping you to understand whether a dependency is missing. 1.2. Authentication to Git repositories You can define the following types of authentication for a Git repository: Basic authentication Secure Shell (SSH) authentication You can also configure Git secrets with both types of authentication in your Build CR. 1.2.1. Basic authentication With basic authentication, you must configure the user name and password of the Git repository. The following example shows the usage of basic authentication for Git: apiVersion: v1 kind: Secret metadata: name: secret-git-basic-auth annotations: build.shipwright.io/referenced.secret: "true" type: kubernetes.io/basic-auth 1 stringData: 2 username: <cleartext_username> password: <cleartext_password> 1 The type of the Kubernetes secret. 2 The field to store your user name and password in clear text. 1.2.2. SSH authentication With SSH authentication, you must configure the Tekton annotations to specify the hostname of the Git repository provider for use. For example, github.com for GitHub or gitlab.com for GitLab. The following example shows the usage of SSH authentication for Git: apiVersion: v1 kind: Secret metadata: name: secret-git-ssh-auth annotations: build.shipwright.io/referenced.secret: "true" type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 # Insert ssh private key, base64 encoded 1 The type of the Kubernetes secret. 2 Base64 encoding of the SSH private key used to authenticate into Git. You can generate this value by using the base64 ~/.ssh/id_rsa command, where ~/.ssh/id_rsa is the default location of the private key that is generally used to authenticate to Git. 1.2.3. Usage of Git secret After creating a secret in the relevant namespace, you can reference it in your Build custom resource (CR). You can configure a Git secret with both types of authentication. 
The following example shows the usage of a Git secret with SSH authentication type: apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build spec: source: git: url: [email protected]:userjohn/newtaxi.git cloneSecret: secret-git-ssh-auth The following example shows the usage of a Git secret with basic authentication type: apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build spec: source: git: url: https://gitlab.com/userjohn/newtaxi.git cloneSecret: secret-git-basic-auth 1.3. Authentication to container registries To push images to a private container registry, you must define a secret in the respective namespace and then reference it in your Build custom resource (CR). Procedure Run the following command to generate a secret: $ oc --namespace <namespace> create secret docker-registry <container_registry_secret_name> \ --docker-server=<registry_host> \ 1 --docker-username=<username> \ 2 --docker-password=<password> \ 3 --docker-email=<email_address> 1 The <registry_host> value denotes the URL in this format https://<registry_server>/<registry_host> . 2 The <username> value is the user ID. 3 The <password> value can be your container registry password or an access token. Run the following command to annotate the secret: $ oc --namespace <namespace> annotate secrets <container_registry_secret_name> build.shipwright.io/referenced.secret='true' Set the value of the spec.output.pushSecret field to the secret name in your Build CR: apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build # ... output: image: <path_to_image> pushSecret: <container_registry_secret_name> 1.4. Role-based access control The release deployment YAML file includes two cluster-wide roles for using Builds objects. The following roles are installed by default: shipwright-build-aggregate-view : Grants you read access to the Builds resources, such as BuildStrategy , ClusterBuildStrategy , Build , and BuildRun . This role is aggregated to the Kubernetes view role. shipwright-build-aggregate-edit : Grants you write access to the Builds resources that are configured at namespace level. The build resources include BuildStrategy , Build , and BuildRun . Read access is granted to all ClusterBuildStrategy resources. This role is aggregated to the Kubernetes edit and admin roles. Only cluster administrators have write access to the ClusterBuildStrategy resources. You can change this setting by creating a separate Kubernetes ClusterRole with these permissions and binding the role to appropriate users.
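Although the examples above define the Git secrets as YAML manifests, you can also create and annotate them from the command line. The following sketch creates the basic authentication secret referenced above; the namespace and credential values are placeholders.
oc --namespace <namespace> create secret generic secret-git-basic-auth --type=kubernetes.io/basic-auth --from-literal=username=<cleartext_username> --from-literal=password=<cleartext_password>
oc --namespace <namespace> annotate secret secret-git-basic-auth build.shipwright.io/referenced.secret='true'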
[ "apiVersion: v1 data: .dockerconfigjson: <pull_secret> 1 kind: Secret metadata: annotations: build.shipwright.io/referenced.secret: \"true\" 2 name: secret-docker type: kubernetes.io/dockerconfigjson", "apiVersion: v1 kind: Secret metadata: name: secret-git-basic-auth annotations: build.shipwright.io/referenced.secret: \"true\" type: kubernetes.io/basic-auth 1 stringData: 2 username: <cleartext_username> password: <cleartext_password>", "apiVersion: v1 kind: Secret metadata: name: secret-git-ssh-auth annotations: build.shipwright.io/referenced.secret: \"true\" type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 # Insert ssh private key, base64 encoded", "apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build spec: source: git: url: [email protected]:userjohn/newtaxi.git cloneSecret: secret-git-ssh-auth", "apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build spec: source: git: url: https://gitlab.com/userjohn/newtaxi.git cloneSecret: secret-git-basic-auth", "oc --namespace <namespace> create secret docker-registry <container_registry_secret_name> --docker-server=<registry_host> \\ 1 --docker-username=<username> \\ 2 --docker-password=<password> \\ 3 --docker-email=<email_address>", "oc --namespace <namespace> annotate secrets <container_registry_secret_name> build.shipwright.io/referenced.secret='true'", "apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: buildah-golang-build # output: image: <path_to_image> pushSecret: <container_registry_secret_name>" ]
https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.1/html/authentication/understanding-authentication-at-runtime
Chapter 21. File Systems
Chapter 21. File Systems OverlayFS OverlayFS is a type of union file system. It allows the user to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media. Refer to the kernel file Documentation/filesystems/overlayfs.txt for additional information. OverlayFS remains a Technology Preview in Red Hat Enterprise Linux 7.2 under most circumstances. As such, the kernel will log warnings when this technology is activated. Full support is available for OverlayFS when used with Docker under the following restrictions: * OverlayFS is only supported for use as a Docker graph driver. Its use can only be supported for container COW content, not for persistent storage. Any persistent storage must be placed on non-OverlayFS volumes to be supported. Only default Docker configuration can be used; that is, one level of overlay, one lowerdir, and both lower and upper levels are on the same file system. * Only XFS is currently supported for use as a lower layer file system. * SELinux must be enabled and in enforcing mode on the physical machine, but must be disabled in the container when performing container separation; that is, /etc/sysconfig/docker must not contain --selinux-enabled. SELinux support for OverlayFS is being worked on upstream, and is expected in a future release. * The OverlayFS kernel ABI and userspace behavior are not considered stable, and may see changes in future updates. * In order to make the yum and rpm utilities work properly inside the container, the user should be using the yum-plugin-ovl packages. Note that OverlayFS provides a restricted set of the POSIX standards. Test your application thoroughly before deploying it with OverlayFS. Note that XFS file systems must be created with the -n ftype=1 option enabled for use as an overlay. With the rootfs and any file systems created during system installation, set the --mkfsoptions=-n ftype=1 parameters in the Anaconda kickstart. When creating a new file system after the installation, run the # mkfs -t xfs -n ftype=1 /PATH/TO/DEVICE command. To determine whether an existing file system is eligible for use as an overlay, run the # xfs_info /PATH/TO/DEVICE | grep ftype command to see if the ftype=1 option is enabled. There are also several known issues associated with OverlayFS as of Red Hat Enterprise Linux 7.2 release. For details, see 'Non-standard behavior' in the Documentation/filesystems/overlayfs.txt file. Support for NFSv4 clients with flexible file layout Red Hat Enterprise Linux 7.2 adds support for flexible file layout on NFSv4 clients. This technology enables advanced features such as non-disruptive file mobility and client-side mirroring, providing enhanced usability in areas such as databases, big data and virtualization. See https://datatracker.ietf.org/doc/draft-ietf-nfsv4-flex-files/ for detailed information about NFS flexible file layout. Btrfs file system The Btrfs (B-Tree) file system is supported as a Technology Preview in Red Hat Enterprise Linux 7.2. This file system offers advanced management, reliability, and scalability features. It enables users to create snapshots, it enables compression and integrated device management. pNFS Block Layout Support As a Technology Preview, the upstream code has been backported to the Red Hat Enterprise Linux client to provide pNFS block layout support.
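To make the ftype requirement described above concrete, the following sketch creates and verifies an XFS file system suitable for use as an OverlayFS lower layer. The device path /dev/vdb1 is an example only; substitute your own block device.
# Create an XFS file system with ftype support enabled
mkfs -t xfs -n ftype=1 /dev/vdb1
# Verify that an existing XFS file system was created with ftype=1
xfs_info /dev/vdb1 | grep ftype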
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/technology-preview-file_systems
Chapter 1. Support policy for Cryostat
Chapter 1. Support policy for Cryostat Red Hat supports a major version of Cryostat for a minimum of 6 months. This support period is measured from the date that the product is released on the Red Hat Customer Portal. You can install and deploy Cryostat on Red Hat OpenShift Container Platform 4.10 or a later version that runs on an x86_64 architecture. Additional resources For more information about the Cryostat life cycle policy, see Red Hat build of Cryostat on the Red Hat OpenShift Container Platform Life Cycle Policy web page.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.3/cryostat-support-policy_cryostat
Appendix C. Revision History
Appendix C. Revision History 0.1-5 Fri Apr 28 2023, Lucie Varakova ( [email protected] ) Added a known issue (Authentication and Interoperability). 0.1-4 Tue Mar 02 2021, Lenka Spackova ( [email protected] ) Updated a link to Upgrading from RHEL 6 to RHEL 7 . Fixed CentOS Linux name. 0.1-3 Wed Sep 2 2020, Jaroslav Klech ( [email protected] ) Added a kernel enhancement that IBPB cannot be directly disabled. 0.1-2 Tue Apr 28 2020, Lenka Spackova ( [email protected] ) Updated information about in-place upgrades. 0.1-1 Thu Mar 19 2020, Lenka Spackova ( [email protected] ) Added a known issue related to installation. 0.1-0 Thu Mar 12 2020, Lenka Spackova ( [email protected] ) Added information about the storage RHEL System Role. 0.0-9 Wed Feb 12 2020, Jaroslav Klech ( [email protected] ) Provided a complete kernel version to Architectures and New Features chapters. 0.0-8 Mon Feb 03 2020, Lenka Spackova ( [email protected] ) Added a known issue about an error message when upgrading from the RHEL 7.6 version of PCP . 0.0-7 Tue Nov 05 2019, Lenka Spackova ( [email protected] ) Updated Overview with the new supported in-place upgrade path from RHEL 7.6 to RHEL 8.1. Updated deprecated functionality. 0.0-6 Fri Oct 25 2019, Lenka Spackova ( [email protected] ) Added a note that RHEL System Roles for SAP are now available as a Technology Preview. 0.0-5 Mon Oct 7 2019, Jiri Herrman ( [email protected] ) Clarified a Technology Preview note related to OVMF. 0.0-4 Wed Aug 21 2019, Lenka Spackova ( [email protected] ) Added instructions on how to enable the Extras channel to the YUM 4 Technology Preview note (System and Subscription Management). 0.0-3 Tue Aug 20 2019, Lenka Spackova ( [email protected] ) Added a known issue related to kdump (Kernel). Updated text of a Technology Preview description (Virtualization). 0.0-2 Thu Aug 15 2019, Lenka Spackova ( [email protected] ) Added a Technology Preview related to Azure M416v2 as a host (Virtualization). Added a link to Intel(R) Omni-Path Architecture documentation (Kernel). Added an SSSD-related feature: a new default value for the fallback_homedir parameter (Authentication and Interoperability). Added a known issue related to the bnx2x driver (Kernel). Added two desktop-related bug fixes. 0.0-1 Tue Aug 06 2019, Lenka Spackova ( [email protected] ) Release of the Red Hat Enterprise Linux 7.7 Release Notes. 0.0-0 Wed Jun 05 2019, Lenka Spackova ( [email protected] ) Release of the Red Hat Enterprise Linux 7.7 Beta Release Notes.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.7_release_notes/revision_history
Chapter 4. KafkaSpec schema reference
Chapter 4. KafkaSpec schema reference Used in: Kafka Property Property type Description kafka KafkaClusterSpec Configuration of the Kafka cluster. zookeeper ZookeeperClusterSpec Configuration of the ZooKeeper cluster. This section is required when running a ZooKeeper-based Apache Kafka cluster. entityOperator EntityOperatorSpec Configuration of the Entity Operator. clusterCa CertificateAuthority Configuration of the cluster certificate authority. clientsCa CertificateAuthority Configuration of the clients certificate authority. cruiseControl CruiseControlSpec Configuration for Cruise Control deployment. Deploys a Cruise Control instance when specified. jmxTrans JmxTransSpec The jmxTrans property has been deprecated. JMXTrans is deprecated and related resources removed in Streams for Apache Kafka 2.5. As of Streams for Apache Kafka 2.5, JMXTrans is not supported anymore and this option is ignored. kafkaExporter KafkaExporterSpec Configuration of the Kafka Exporter. Kafka Exporter can provide additional metrics, for example lag of consumer group at topic/partition. maintenanceTimeWindows string array A list of time windows for maintenance tasks (that is, certificates renewal). Each time window is defined by a cron expression.
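To illustrate how these properties fit together, the following is a minimal Kafka custom resource sketch that sets the kafka, zookeeper, entityOperator, and maintenanceTimeWindows properties. The cluster name, replica counts, listener, storage type, and maintenance window are placeholder values, not recommendations.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
  maintenanceTimeWindows:
    - "* * 0-1 ? * SUN,SAT"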
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkaspec-reference
Chapter 62. JSON Gson
Chapter 62. JSON Gson Gson is a Data Format that uses the Gson Library . from("activemq:My.Queue"). marshal().json(JsonLibrary.Gson). to("mqseries:Another.Queue"); 62.1. Dependencies When using json-gson with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-gson-starter</artifactId> </dependency> 62.2. Gson Options The JSON Gson dataformat supports 3 options, which are listed below. Name Default Java Type Description prettyPrint Boolean Enables pretty printing so that the output is nicely formatted. The default is false. unmarshalType String Class name of the Java type to use when unmarshalling. contentTypeHeader Boolean Whether the data format should set the Content-Type header with the type from the data format. For example, application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. 62.3. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.dataformat.json-gson.content-type-header Whether the data format should set the Content-Type header with the type from the data format. For example, application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. true Boolean camel.dataformat.json-gson.enabled Whether to enable auto configuration of the json-gson data format. This is enabled by default. Boolean camel.dataformat.json-gson.pretty-print Enables pretty printing so that the output is nicely formatted. The default is false. false Boolean camel.dataformat.json-gson.unmarshal-type Class name of the Java type to use when unmarshalling. String
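Conversely, you can use the unmarshalType option described above to convert JSON back into a Java object. The following route is a minimal sketch; MyPojo and the endpoint URIs are placeholders for your own payload class and endpoints. If you use the Spring Boot starter, the equivalent prettyPrint behavior can be switched on with the camel.dataformat.json-gson.pretty-print=true property.
// Unmarshal incoming JSON into a MyPojo instance using Gson
from("mqseries:Another.Queue").
    unmarshal().json(JsonLibrary.Gson, MyPojo.class).
    to("bean:myPojoProcessor");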
[ "from(\"activemq:My.Queue\"). marshal().json(JsonLibrary.Gson). to(\"mqseries:Another.Queue\");", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-gson-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-json-gson-dataformat-starter
Chapter 8. Provisioning virtual machines on KVM (libvirt)
Chapter 8. Provisioning virtual machines on KVM (libvirt) Kernel-based Virtual Machines (KVMs) use an open source virtualization daemon and API called libvirt running on Red Hat Enterprise Linux. Satellite can connect to the libvirt API on a KVM server, provision hosts on the hypervisor, and control certain virtualization functions. Only Virtual Machines created through Satellite can be managed. Virtual Machines with other than directory storage pool types are unsupported. You can use KVM provisioning to create hosts over a network connection or from an existing image. Prerequisites You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in Managing content . Provide an activation key for host registration. For more information, see Creating An Activation Key in Managing content . A Capsule Server managing a network on the KVM server. Ensure no other DHCP services run on this network to avoid conflicts with Capsule Server. For more information about network service configuration for Capsule Servers, see Configuring Networking in Provisioning hosts . A Red Hat Enterprise Linux server running KVM virtualization tools (libvirt daemon). For more information, see the Red Hat Enterprise Linux 8 Configuring and managing virtualization . An existing virtual machine image if you want to use image-based provisioning. Ensure that this image exists in a storage pool on the KVM host. The default storage pool is usually located in /var/lib/libvirt/images . Only directory pool storage types can be managed through Satellite. Optional: The examples in these procedures use the root user for KVM. If you want to use a non-root user on the KVM server, you must add the user to the libvirt group on the KVM server: Additional resources For a list of permissions a non-admin user requires to provision hosts, see Appendix E, Permissions required to provision hosts . You can configure Satellite to remove the associated virtual machine when you delete a host. For more information, see Section 2.22, "Removing a virtual machine upon host deletion" . 8.1. Configuring Satellite Server for KVM connections Before adding the KVM connection, create an SSH key pair for the foreman user to ensure a secure connection between Satellite Server and KVM. Procedure On Satellite Server, switch to the foreman user: Generate the key pair: Copy the public key to the KVM server: Exit the bash shell for the foreman user: Install the libvirt-client package: Use the following command to test the connection to the KVM server: 8.2. Adding a KVM connection to Satellite Server Use this procedure to add KVM as a compute resource in Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and click Create Compute Resource . In the Name field, enter a name for the new compute resource. From the Provider list, select Libvirt . In the Description field, enter a description for the compute resource. In the URL field, enter the connection URL to the KVM server. For example: From the Display type list, select either VNC or Spice . Optional: To secure console access for new hosts with a randomly generated password, select the Set a randomly generated password on the display connection checkbox. 
You can retrieve the password for the VNC console to access the guest virtual machine console from the output of the following command executed on the KVM server: The password is randomly generated every time the console for the virtual machine is opened, for example, with virt-manager. Click Test Connection to ensure that Satellite Server connects to the KVM server without fault. Verify that the Locations and Organizations tabs are automatically set to your current context. If you want, add additional contexts to these tabs. Click Submit to save the KVM connection. CLI procedure To create a compute resource, enter the hammer compute-resource create command: 8.3. Adding KVM images to Satellite Server To create hosts using image-based provisioning, you must add information about the image, such as access details and the image location, to your Satellite Server. Note that you can manage only directory pool storage types through Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources and click the name of the KVM connection. Click Create Image . In the Name field, enter a name for the image. From the Operating System list, select the base operating system of the image. From the Architecture list, select the operating system architecture. In the Username field, enter the SSH user name for image access. This is normally the root user. In the Password field, enter the SSH password for image access. In the Image path field, enter the full path that points to the image on the KVM server. For example: Optional: Select the User Data checkbox if the image supports user data input, such as cloud-init data. Click Submit to save the image details. CLI procedure Create the image with the hammer compute-resource image create command. Use the --uuid field to store the full path of the image location on the KVM server. 8.4. Adding KVM details to a compute profile Use this procedure to add KVM hardware settings to a compute profile. When you create a host on KVM using this compute profile, these settings are automatically populated. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Profiles . In the Compute Profiles window, click the name of an existing compute profile, or click Create Compute Profile , enter a Name , and click Submit . Click the name of the KVM compute resource. In the CPUs field, enter the number of CPUs to allocate to the new host. In the Memory field, enter the amount of memory to allocate to the new host. From the Image list, select the image to use if performing image-based provisioning. From the Network Interfaces list, select the network parameters for the host's network interface. You can create multiple network interfaces. However, at least one interface must point to a Capsule-managed network. In the Storage area, enter the storage parameters for the host. You can create multiple volumes for the host. Click Submit to save the settings to the compute profile. CLI procedure To create a compute profile, enter the following command: To add the values for the compute profile, enter the following command: 8.5. 
Creating hosts on KVM In Satellite, you can use KVM provisioning to create hosts over a network connection or from an existing image: If you want to create a host over a network connection, the new host must be able to access either Satellite Server's integrated Capsule or an external Capsule Server on a KVM virtual network, so that the host has access to PXE provisioning services. This new host entry triggers the KVM server to create and start a virtual machine. If the virtual machine detects the defined Capsule Server through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system. If you want to create a host with an existing image, the new host entry triggers the KVM server to create the virtual machine using a pre-existing image as a basis for the new volume. To use the CLI instead of the Satellite web UI, see the CLI procedure . DHCP conflicts For network-based provisioning, if you use a virtual network on the KVM server for provisioning, select a network that does not provide DHCP assignments. This causes DHCP conflicts with Satellite Server when booting new hosts. Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Optional: Click the Organization tab and change the organization context to match your requirement. Optional: Click the Location tab and change the location context to match your requirement. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form. From the Deploy on list, select the KVM connection. From the Compute Profile list, select a profile to use to automatically populate virtual machine settings. The KVM-specific fields are populated with settings from your compute profile. Modify these settings if required. Click the Interfaces tab, and on the interface of the host, click Edit . Verify that the fields are populated with values. Note in particular: Satellite automatically assigns an IP address for the new host. Ensure that the MAC address field is blank. KVM assigns a MAC address to the host during provisioning. The Name from the Host tab becomes the DNS name . Ensure that Satellite automatically selects the Managed , Primary , and Provision options for the first interface on the host. If not, select them. Click OK to save. To add another interface, click Add Interface . You can select only one interface for Provision and Primary . Click the Operating System tab, and confirm that all fields automatically contain values. Select the Provisioning Method that you want to use: For network-based provisioning, click Network Based . For image-based provisioning, click Image Based . Click Resolve in Provisioning templates to check the new host can identify the right provisioning templates to use. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your needs. Click the Parameters tab, and ensure that a parameter exists that provides an activation key. If not, add an activation key. Click Submit to save the host entry. CLI procedure To use network-based provisioning, create the host with the hammer host create command and include --provision-method build . Replace the values in the following example with the appropriate values for your environment. To use image-based provisioning, create the host with the hammer host create command and include --provision-method image . 
Replace the values in the following example with the appropriate values for your environment. For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command.
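After the host is created, you can confirm that the compute resource, image, and host entry are in place before provisioning further systems. The following hammer commands are a minimal sketch; the quoted names match the examples above and must be replaced with the values used in your environment.
hammer compute-resource list
hammer compute-resource image list --compute-resource "My_KVM_Server"
hammer host info --name "My_Host_Name"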
[ "usermod -a -G libvirt non_root_user", "su foreman -s /bin/bash", "ssh-keygen", "ssh-copy-id [email protected]", "exit", "satellite-maintain packages install libvirt-client", "su foreman -s /bin/bash -c 'virsh -c qemu+ssh://[email protected]/system list'", "qemu+ssh:// [email protected] /system", "virsh edit your_VM_name <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd=' your_randomly_generated_password '>", "hammer compute-resource create --name \" My_KVM_Server \" --provider \"Libvirt\" --description \"KVM server at kvm.example.com \" --url \"qemu+ssh://root@ kvm.example.com/system \" --locations \"New York\" --organizations \" My_Organization \"", "/var/lib/libvirt/images/TestImage.qcow2", "hammer compute-resource image create --name \" KVM Image \" --compute-resource \" My_KVM_Server \" --operatingsystem \"RedHat version \" --architecture \"x86_64\" --username root --user-data false --uuid \"/var/lib/libvirt/images/ KVMimage .qcow2\" \\", "hammer compute-profile create --name \"Libvirt CP\"", "hammer compute-profile values create --compute-profile \"Libvirt CP\" --compute-resource \" My_KVM_Server \" --interface \"compute_type=network,compute_model=virtio,compute_network= examplenetwork \" --volume \"pool_name=default,capacity=20G,format_type=qcow2\" --compute-attributes \"cpus=1,memory=1073741824\"", "hammer host create --build true --compute-attributes=\"cpus=1,memory=1073741824\" --compute-resource \" My_KVM_Server \" --enabled true --hostgroup \" My_Host_Group \" --interface \"managed=true,primary=true,provision=true,compute_type=network,compute_network= examplenetwork \" --location \" My_Location \" --managed true --name \" My_Host_Name \" --organization \" My_Organization \" --provision-method \"build\" --root-password \" My_Password \" --volume=\"pool_name=default,capacity=20G,format_type=qcow2\"", "hammer host create --compute-attributes=\"cpus=1,memory=1073741824\" --compute-resource \" My_KVM_Server \" --enabled true --hostgroup \" My_Host_Group \" --image \" My_KVM_Image \" --interface \"managed=true,primary=true,provision=true,compute_type=network,compute_network=examplenetwork\" --location \" My_Location \" --managed true --name \" My_Host_Name \" --organization \" My_Organization \" --provision-method \"image\" --volume=\"pool_name=default,capacity=20G,format_type=qcow2\"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/provisioning_hosts/Provisioning_Virtual_Machines_on_KVM_kvm-provisioning
Chapter 3. Installing the Migration Toolkit for Containers
Chapter 3. Installing the Migration Toolkit for Containers You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4. Note To install MTC on OpenShift Container Platform 3, see Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster . After you have installed MTC, you must configure an object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 3.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.9. MTC 1.8 only supports migrations from OpenShift Container Platform 4.10 and later. Table 3.1. MTC compatibility: Migrating from a legacy or a modern platform Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.0 to 4.5 OpenShift Container Platform 4.6 to 4.9 OpenShift Container Platform 4.10 or later Stable MTC version MTC v.1.7. z MTC v.1.7. z MTC v.1.7. z MTC v.1.8. z Installation Legacy MTC v.1.7. z operator: Install manually with the operator.yml file. [ IMPORTANT ] This cluster cannot be the control cluster. Install with OLM, release channel release-v1.7 Install with OLM, release channel release-v1.8 Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC v.1.7. z , if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 3.2. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 4.2 to 4.5 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform versions 4.2 to 4.5. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. 
Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Log in to your OpenShift Container Platform source cluster. Verify that the cluster can authenticate with registry.redhat.io : USD oc run test --image registry.redhat.io/ubi9 --command sleep infinity Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 3.3. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.17 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.17 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 3.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.17, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 3.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. 
The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 3.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 3.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 3.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 3.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 3.4.2.1. NetworkPolicy configuration 3.4.2.1.1. 
Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 3.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 3.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 3.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 3.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 3.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 3.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... 
spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 3.4.4. Running Rsync as either root or non-root OpenShift Container Platform environments have the PodSecurityAdmission controller enabled by default. This controller requires cluster administrators to enforce Pod Security Standards by means of namespace labels. All workloads in the cluster are expected to run one of the following Pod Security Standard levels: Privileged , Baseline or Restricted . Every cluster has its own default policy set. To guarantee successful data transfer in all environments, Migration Toolkit for Containers (MTC) 1.7.5 introduced changes in Rsync pods, including running Rsync pods as non-root user by default. This ensures that data transfer is possible even for workloads that do not necessarily require higher privileges. This change was made because it is best to run workloads with the lowest level of privileges possible. 3.4.4.1. Manually overriding default non-root operation for data transfer Although running Rsync pods as non-root user works in most cases, data transfer might fail when you run workloads as root user on the source side. MTC provides two ways to manually override default non-root operation for data transfer: Configure all migrations to run an Rsync pod as root on the destination cluster for all migrations. Run an Rsync pod as root on the destination cluster per migration. In both cases, you must set the following labels on the source side of any namespaces that are running workloads with higher privileges before migration: enforce , audit , and warn. To learn more about Pod Security Admission and setting values for labels, see Controlling pod security admission synchronization . 3.4.4.2. Configuring the MigrationController CR as root or non-root for all migrations By default, Rsync runs as non-root. On the destination cluster, you can configure the MigrationController CR to run Rsync as root. Procedure Configure the MigrationController CR as follows: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true This configuration will apply to all future migrations. 3.4.4.3. 
Configuring the MigMigration CR as root or non-root per migration On the destination cluster, you can configure the MigMigration CR to run Rsync as root or non-root, with the following non-root options: As a specific user ID (UID) As a specific group ID (GID) Procedure To run Rsync as root, configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true To run Rsync as a specific User ID (UID) or as a specific Group ID (GID), configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3 3.5. Configuring a replication repository You must configure an object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. Select a method that is suited for your environment and is supported by your storage provider. MTC supports the following storage providers: Multicloud Object Gateway Amazon Web Services S3 Google Cloud Platform Microsoft Azure Blob Generic S3 object storage, for example, Minio or Ceph S3 3.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 3.5.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials and S3 endpoint, which you need to configure MCG as a replication repository for the Migration Toolkit for Containers (MTC). You must retrieve the Multicloud Object Gateway (MCG) credentials, which you need to create a Secret custom resource (CR) for MTC. Note Although the MCG Operator is deprecated , the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. You use these credentials to add MCG as a replication repository. 3.5.3. Configuring Amazon Web Services You configure Amazon Web Services (AWS) S3 object storage as a replication repository for the Migration Toolkit for Containers (MTC) . Prerequisites You must have the AWS CLI installed. The AWS S3 storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: You must have access to EC2 Elastic Block Storage (EBS). The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. 
Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the minimum necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID . You use the credentials to add AWS as a replication repository. 3.5.4. Configuring Google Cloud Platform You configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the gcloud and gsutil CLI tools installed. See the Google cloud documentation for details. The GCP storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. 
Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the minimum necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to add GCP as a replication repository. 3.5.5. Configuring Microsoft Azure You configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the Azure CLI installed. The Azure Blob storage container must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to Azure: USD az login Set the AZURE_RESOURCE_GROUP variable: USD AZURE_RESOURCE_GROUP=Velero_Backups Create an Azure resource group: USD az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1 1 Specify your location. 
Set the AZURE_STORAGE_ACCOUNT_ID variable: USD AZURE_STORAGE_ACCOUNT_ID="veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')" Create an Azure storage account: USD az storage account create \ --name USDAZURE_STORAGE_ACCOUNT_ID \ --resource-group USDAZURE_RESOURCE_GROUP \ --sku Standard_GRS \ --encryption-services blob \ --https-only true \ --kind BlobStorage \ --access-tier Hot Set the BLOB_CONTAINER variable: USD BLOB_CONTAINER=velero Create an Azure Blob storage container: USD az storage container create \ -n USDBLOB_CONTAINER \ --public-access off \ --account-name USDAZURE_STORAGE_ACCOUNT_ID Create a service principal and credentials for velero : USD AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` Create a service principal with the Contributor role, assigning a specific --role and --scopes : USD AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" \ --role "Contributor" \ --query 'password' -o tsv \ --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP` The CLI generates a password for you. Ensure you capture the password. After creating the service principal, obtain the client id. USD AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>` Note For this to be successful, you must know your Azure application ID. Save the service principal credentials in the credentials-velero file: USD cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF You use the credentials-velero file to add Azure as a replication repository. 3.5.6. Additional resources MTC workflow About data copy methods Adding a replication repository to the MTC web console 3.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero')
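The individual cleanup commands in this section can be combined into a short script that is run once per cluster. The following is a minimal sketch, not part of the product documentation: it assumes that you are logged in with cluster-admin privileges, that the MigrationController CR and the Operator have already been removed, and that the resource name patterns match the ones shown above. Review what each oc get returns before deleting anything on a production cluster.

#!/bin/bash
# Remove the remaining MTC cluster-scoped resources on the current cluster.
set -euo pipefail

for pattern in 'migration.openshift.io' 'velero'; do
  # CRDs, cluster roles, and cluster role bindings that match the pattern
  oc get crds -o name | grep "$pattern" | xargs -r oc delete
  oc get clusterroles -o name | grep "$pattern" | xargs -r oc delete
  oc get clusterrolebindings -o name | grep "$pattern" | xargs -r oc delete
done

# Resources named after the Operator itself
oc delete clusterrole migration-operator --ignore-not-found
oc delete clusterrolebinding migration-operator --ignore-not-found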
[ "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "oc run test --image registry.redhat.io/ubi9 --command sleep infinity", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] 
runAsUser: 10010001 runAsGroup: 3", "BUCKET=<your_bucket>", "REGION=<your_region>", "aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1", "aws iam create-user --user-name velero 1", "cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF", "aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json", "aws iam create-access-key --user-name velero", "{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }", "gcloud auth login", "BUCKET=<bucket> 1", "gsutil mb gs://USDBUCKET/", "PROJECT_ID=USD(gcloud config get-value project)", "gcloud iam service-accounts create velero --display-name \"Velero service account\"", "gcloud iam service-accounts list", "SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')", "ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )", "gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"", "gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server", "gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}", "gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL", "az login", "AZURE_RESOURCE_GROUP=Velero_Backups", "az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1", "AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"", "az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot", "BLOB_CONTAINER=velero", "az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID", "AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`", "AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP`", "AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>`", "cat << EOF > ./credentials-velero 
AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/migration_toolkit_for_containers/installing-mtc
Chapter 9. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation
Chapter 9. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra, or to have both roles. See the Section 9.3, "Manual creation of infrastructure nodes" section for more information. 9.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements; it ensures that only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that the RHOCP subscription cost is not applied. The taint prevents non-OpenShift Data Foundation resources from being scheduled on the tainted nodes. Note Adding the storage taint on nodes might require toleration handling for other daemonset pods, such as the openshift-dns daemonset . For information about how to manage the tolerations, see Knowledgebase article: Openshift-dns daemonsets doesn't include toleration to run on nodes with taints . Example of the taint and labels required on an infrastructure node that will be used to run OpenShift Data Foundation services: 9.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 9.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources.
To avoid the RHOCP subscription cost, the following is required: the node must carry the infra node-role label, and a NoSchedule OpenShift Data Foundation taint must also be added so that the infra node only schedules OpenShift Data Foundation resources and repels any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role.kubernetes.io/worker="" node role. Its removal can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding the node-role.kubernetes.io/infra="" label and the OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. 9.4. Taint a node from the user interface This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment. Procedure In the OpenShift Web Console, click Compute Nodes , and then select the node that you want to taint. In the Details page, click Edit taints . Enter node.ocs.openshift.io/storage in the Key field, true in the Value field, and NoSchedule in the Effect field. Click Save . Verification steps Follow these steps to verify that the node has been tainted successfully: Navigate to Compute Nodes . Select the node to verify its status, and then click on the YAML tab. In the spec section, check the values of the following parameters: Additional resources For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere .
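As a quick CLI check of the configuration described in this chapter, you can confirm the labels and the taint on a node before scheduling OpenShift Data Foundation workloads. The following commands are a minimal sketch; <node> is a placeholder for the name of your infrastructure node.

$ oc get node <node> --show-labels | grep -E 'node-role.kubernetes.io/infra|cluster.ocs.openshift.io/openshift-storage'
$ oc get node <node> -o jsonpath='{.spec.taints}{"\n"}'

The first command verifies that the infra role and OpenShift Data Foundation labels are present, and the second prints the taints applied to the node, which should include node.ocs.openshift.io/storage with the NoSchedule effect.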
[ "spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"", "adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule", "Taints: Key: node.openshift.ocs.io/storage Value: true Effect: Noschedule" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/how-to-use-dedicated-worker-nodes-for-openshift-data-foundation_osp
Chapter 1. Introduction
Chapter 1. Introduction This book describes the Logical Volume Manager (LVM), including information on running LVM in a clustered environment. 1.1. Audience This book is intended to be used by system administrators managing systems running the Linux operating system. It requires familiarity with Red Hat Enterprise Linux 6.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/ch_introduction-clvm
5.5.3. Deleting a Member from a DLM Cluster
5.5.3. Deleting a Member from a DLM Cluster To delete a member from an existing DLM cluster that is currently in operation, follow these steps: At one of the running nodes (not at a node to be deleted), start system-config-cluster (refer to Section 5.2, "Starting the Cluster Configuration Tool " ). At the Cluster Status Tool tab, under Services , disable or relocate each service that is running on the node to be deleted. Stop the cluster software on the node to be deleted by running the following commands at that node in this order: service rgmanager stop , if the cluster is running high-availability services ( rgmanager ) service gfs stop , if you are using Red Hat GFS service clvmd stop , if CLVM has been used to create clustered volumes service fenced stop service cman stop service ccsd stop At system-config-cluster (running on a node that is not to be deleted), in the Cluster Configuration Tool tab, delete the member as follows: If necessary, click the triangle icon to expand the Cluster Nodes property. Select the cluster node to be deleted. At the bottom of the right frame (labeled Properties ), click the Delete Node button. Clicking the Delete Node button causes a warning dialog box to be displayed requesting confirmation of the deletion ( Figure 5.7, "Confirm Deleting a Member" ). Figure 5.7. Confirm Deleting a Member At that dialog box, click Yes to confirm deletion. Propagate the updated configuration by clicking the Send to Cluster button. (Propagating the updated configuration automatically saves the configuration.) Stop the cluster software on the remaining running nodes by running the following commands at each node in this order: service rgmanager stop , if the cluster is running high-availability services ( rgmanager ) service gfs stop , if you are using Red Hat GFS service clvmd stop , if CLVM has been used to create clustered volumes service fenced stop service cman stop service ccsd stop Start cluster software on all remaining cluster nodes by running the following commands in this order: service ccsd start service cman start service fenced start service clvmd start , if CLVM has been used to create clustered volumes service gfs start , if you are using Red Hat GFS service rgmanager start , if the cluster is running high-availability services ( rgmanager ) At system-config-cluster (running on a node that was not deleted), in the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab verify that the nodes and services are running as expected. Note Make sure to configure other parameters that may be affected by changes in this section. Refer to Section 5.1, "Configuration Tasks" .
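Because the stop and start sequences must be run in a fixed order on every node, they are often scripted. The following sketch simply strings together the commands listed in this procedure and assumes that all optional services (rgmanager, Red Hat GFS, and CLVM) are in use; remove the lines that do not apply to your cluster.

# Run as root on each node when stopping the cluster software:
service rgmanager stop
service gfs stop
service clvmd stop
service fenced stop
service cman stop
service ccsd stop

# Run as root on each node when starting the cluster software again:
service ccsd start
service cman start
service fenced start
service clvmd start
service gfs start
service rgmanager start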
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s2-delete-member-dlm-CA
Chapter 15. Replacing storage nodes
Chapter 15. Replacing storage nodes You can choose one of the following procedures to replace storage nodes: Section 15.1, "Replacing operational nodes on Red Hat OpenStack Platform installer-provisioned infrastructure" Section 15.2, "Replacing failed nodes on Red Hat OpenStack Platform installer-provisioned infrastructure" 15.1. Replacing operational nodes on Red Hat OpenStack Platform installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Mark the node as unschedulable: <node_name> Specify the name of the node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Machines . Search for the required machine. Beside the required machine, click the Action menu (...) Delete Machine . Click Delete to confirm that the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected hosts: Display the list of available block devices: Check for the crypt keyword beside the ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 15.2. Replacing failed nodes on Red Hat OpenStack Platform installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the faulty node, and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining , and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created. Wait for the new machine to start. Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save .
From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Red Hat OpenStack Platform instance is not removed automatically, terminate the instance from the Red Hat OpenStack Platform console. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected hosts: Display the list of available block devices: Check for the crypt keyword beside the ocs-deviceset names. If the verification steps fail, contact Red Hat Support .
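As a shortcut for the OSD verification step above, you can list only the OSD pods that were scheduled on the replacement node by filtering on the OSD label and the node name. This is a minimal sketch: app=rook-ceph-osd is assumed to be the label that Rook applies to OSD pods in current OpenShift Data Foundation releases, and <new_node_name> is a placeholder for the name of the new node.

$ oc get pods -n openshift-storage -l app=rook-ceph-osd -o wide --field-selector spec.nodeName=<new_node_name>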
[ "oc adm cordon <node_name>", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/replacing_storage_nodes
Chapter 16. Multiple networks
Chapter 16. Multiple networks 16.1. Understanding multiple networks By default, OVN-Kubernetes serves as the Container Network Interface (CNI) of an OpenShift Container Platform cluster. With OVN-Kubernetes as the default CNI of a cluster, OpenShift Container Platform administrators or users can leverage user-defined networks (UDNs) or NetworkAttachmentDefinition (NADs) to create one, or multiple, default networks that handle all ordinary network traffic of the cluster. Both user-defined networks and Network Attachment Definitions can serve as the following network types: Primary networks : Act as the primary network for the pod. By default, all traffic passes through the primary network unless a pod route is configured to send traffic through other networks. Secondary networks : Act as additional, non-default networks for a pod. Secondary networks provide separate interfaces dedicated to specific traffic types or purposes. Only pod traffic that is explicitly configured to use a secondary network is routed through its interface. However, during cluster installation, OpenShift Container Platform administrators can configure alternative default secondary pod networks by leveraging the Multus CNI plugin. With Multus, multiple CNI plugins such as ipvlan, macvlan, or Network Attachment Definitions can be used together to serve as secondary networks for pods. Note User-defined networks are only available when OVN-Kubernetes is used as the CNI. They are not supported for use with other CNIs. You can define an additional network based on the available CNI plugins and attach one or more of these networks to your pods. You can define more than one additional network for your cluster depending on your needs. This gives you flexibility when you configure pods that deliver network functionality, such as switching or routing. For a complete list of supported CNI plugins, see "Additional networks in OpenShift Container Platform" . For information about user-defined networks, see About user-defined networks (UDNs) . For information about Network Attachment Definitions, see Creating primary networks using a NetworkAttachmentDefinition . 16.1.1. Usage scenarios for an additional network You can use an additional network in situations where network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for the following performance and security reasons: Performance Traffic management : You can send traffic on two different planes to manage how much traffic is along each plane. Security Network isolation : You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers. All of the pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every pod has an eth0 interface that is attached to the cluster-wide pod network. You can view the interfaces for a pod by using the oc exec -it <pod_name> -- ip a command. If you add additional network interfaces that use Multus CNI, they are named net1 , net2 , ... , netN . To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using either a UserDefinedNetwork custom resource (CR) or a NetworkAttachmentDefinition CR. A CNI configuration inside each of these CRs defines how that interface is created. 
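For example, after an additional network has been defined, a pod requests it through the k8s.v1.cni.cncf.io/networks annotation. The following pod manifest is a minimal sketch, not taken from this document: it assumes that a NetworkAttachmentDefinition named example-net already exists in the same namespace and that the container image provides the ip utility. The additional interface then appears inside the pod as net1 alongside eth0.

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: example-net
spec:
  containers:
  - name: example
    image: registry.access.redhat.com/ubi9/ubi
    command: ["sleep", "infinity"]

You can then inspect the interfaces with oc exec -it example-pod -- ip a.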
For more information about creating a UserDefinedNetwork CR, see About user-defined networks . For more information about creating a NetworkAttachmentDefinition CR, see Creating primary networks using a NetworkAttachmentDefinition . 16.1.2. Additional networks in OpenShift Container Platform OpenShift Container Platform provides the following CNI plugins for creating additional networks in your cluster: bridge : Configure a bridge-based additional network to allow pods on the same host to communicate with each other and the host. host-device : Configure a host-device additional network to allow pods access to a physical Ethernet network device on the host system. ipvlan : Configure an ipvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts, similar to a macvlan-based additional network. Unlike a macvlan-based additional network, each pod shares the same MAC address as the parent physical network interface. vlan : Configure a VLAN-based additional network to allow VLAN-based network isolation and connectivity for pods. macvlan : Configure a macvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts by using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. TAP : Configure a TAP-based additional network to create a tap device inside the container namespace. A TAP device enables user space programs to send and receive network packets. SR-IOV : Configure an SR-IOV based additional network to allow pods to attach to a virtual function (VF) interface on SR-IOV capable hardware on the host system. 16.1.3. UserDefinedNetwork and NetworkAttachmentDefinition support matrix The UserDefinedNetwork and NetworkAttachmentDefinition custom resources (CRs) provide cluster administrators and users the ability to create customizable network configurations and define their own network topologies, ensure network isolation, manage IP addressing for workloads, and configure advanced network features. A third CR, ClusterUserDefinedNetwork , is also available, which allows administrators the ability to create and define additional networks spanning multiple namespaces at the cluster level. User-defined networks and network attachment definitions can serve as both the primary and secondary network interface, and each support layer2 and layer3 topologies; a third network topology, Localnet, is also supported with network attachment definitions with secondary networks. Note As of OpenShift Container Platform 4.18, the Localnet topology is unavailable for use with the UserDefinedNetwork and ClusterUserDefinedNetwork CRs. It is only available for NetworkAttachmentDefinition CRs that leverage secondary networks. The following section highlights the supported features of the UserDefinedNetwork and NetworkAttachmentDefinition CRs when they are used as either the primary or secondary network. A separate table for the ClusterUserDefinedNetwork CR is also included. Table 16.1. Primary network support matrix for UserDefinedNetwork and NetworkAttachmentDefinition CRs Network feature Layer2 topology Layer3 topology east-west traffic [✓] [✓] north-south traffic [✓] [✓] Persistent IPs [✓] X Services [✓] [✓] EgressIP resource [✓] [✓] Multicast [1] X [✓] NetworkPolicy resource [2] [✓] [✓] MultinetworkPolicy resource X X Multicast must be enabled in the namespace, and it is only available between OVN-Kubernetes network pods. 
For more information about multicast, see "Enabling multicast for a project". When creating a UserDefinedNetwork CR with a primary network type, network policies must be created after the UserDefinedNetwork CR. Table 16.2. Secondary network support matrix for UserDefinedNetwork and NetworkAttachmentDefinition CRs Network feature Layer2 topology Layer3 topology Localnet topology [1] east-west traffic [✓] [✓] [✓] ( NetworkAttachmentDefinition CR only) north-south traffic X X [✓] Persistent IPs [✓] X [✓] ( NetworkAttachmentDefinition CR only) Services X X X EgressIP resource X X X Multicast X X X NetworkPolicy resource X X X MultinetworkPolicy resource [✓] [✓] [✓] ( NetworkAttachmentDefinition CR only) The Localnet topology is unavailable for use with the UserDefinedNetwork CR. It is only supported on secondary networks for NetworkAttachmentDefinition CRs. Table 16.3. Support matrix for ClusterUserDefinedNetwork CRs Network feature Layer2 topology Layer3 topology east-west traffic [✓] [✓] north-south traffic [✓] [✓] Persistent IPs [✓] X Services [✓] [✓] EgressIP resource [✓] [✓] Multicast [1] X [✓] MultinetworkPolicy resource X X NetworkPolicy resource [2] [✓] [✓] Multicast must be enabled in the namespace, and it is only available between OVN-Kubernetes network pods. For more information, see "About multicast". When creating a ClusterUserDefinedNetwork CR with a primary network type, network policies must be created after the UserDefinedNetwork CR. Additional resources Enabling multicast for a project 16.2. Primary networks 16.2.1. About user-defined networks Before the implementation of user-defined networks (UDN), the OVN-Kubernetes CNI plugin for OpenShift Container Platform only supported a Layer 3 topology on the primary or main network. Due to Kubernetes design principles: all pods are attached to the main network, all pods communicate with each other by their IP addresses, and inter-pod traffic is restricted according to network policy. While the Kubernetes design is useful for simple deployments, this Layer 3 topology restricts customization of primary network segment configurations, especially for modern multi-tenant deployments. UDN improves the flexibility and segmentation capabilities of the default Layer 3 topology for a Kubernetes pod network by enabling custom Layer 2, Layer 3, and localnet network segments, where all these segments are isolated by default. These segments act as either primary or secondary networks for container pods and virtual machines that use the default OVN-Kubernetes CNI plugin. UDNs enable a wide range of network architectures and topologies, enhancing network flexibility, security, and performance. Note Nodes that use cgroupv1 Linux Control Groups (cgroup) must be reconfigured from cgroupv1 to cgroupv2 before creating a user-defined network. For more information, see Configuring Linux cgroup . A cluster administrator can use a UDN to create and define additional networks that span multiple namespaces at the cluster level by leveraging the ClusterUserDefinedNetwork custom resource (CR). Additionally, a cluster administrator or a cluster user can use a UDN to define additional networks at the namespace level with the UserDefinedNetwork CR. The following sections further emphasize the benefits and limitations of user-defined networks, the best practices when creating a ClusterUserDefinedNetwork or UserDefinedNetwork CR, how to create the CR, and additional configuration details that might be relevant to your deployment. 16.2.1.1. 
Benefits of a user-defined network User-defined networks provide the following benefits: Enhanced network isolation for security Tenant isolation : Namespaces can have their own isolated primary network, similar to how tenants are isolated in Red Hat OpenStack Platform (RHOSP). This improves security by reducing the risk of cross-tenant traffic. Network flexibility Layer 2 and layer 3 support : Cluster administrators can configure primary networks as layer 2 or layer 3 network types. Simplified network management Reduced network configuration complexity : With user-defined networks, the need for complex network policies are eliminated because isolation can be achieved by grouping workloads in different networks. Advanced capabilities Consistent and selectable IP addressing : Users can specify and reuse IP subnets across different namespaces and clusters, providing a consistent networking environment. Support for multiple networks : The user-defined networking feature allows administrators to connect multiple namespaces to a single network, or to create distinct networks for different sets of namespaces. Simplification of application migration from Red Hat OpenStack Platform (RHOSP) Network parity : With user-defined networking, the migration of applications from OpenStack to OpenShift Container Platform is simplified by providing similar network isolation and configuration options. Developers and administrators can create a user-defined network that is namespace scoped using the custom resource. An overview of the process is as follows: An administrator creates a namespace for a user-defined network with the k8s.ovn.org/primary-user-defined-network label. The UserDefinedNetwork CR is created by either the cluster administrator or the user. The user creates pods in the namespace. 16.2.1.2. Limitations of a user-defined network While user-defined networks (UDN) offer highly customizable network configuration options, there are limitations that cluster administrators and developers should be aware of when implementing and managing these networks. Consider the following limitations before implementing a UDN. DNS limitations : DNS lookups for pods resolve to the pod's IP address on the cluster default network. Even if a pod is part of a user-defined network, DNS lookups will not resolve to the pod's IP address on that user-defined network. However, DNS lookups for services and external entities will function as expected. When a pod is assigned to a primary UDN, it can access the Kubernetes API (KAPI) and DNS services on the cluster's default network. Initial network assignment : You must create the namespace and network before creating pods. Assigning a namespace with pods to a new network or creating a UDN in an existing namespace will not be accepted by OVN-Kubernetes. Health check limitations : Kubelet health checks are performed by the cluster default network, which does not confirm the network connectivity of the primary interface on the pod. Consequently, scenarios where a pod appears healthy by the default network, but has broken connectivity on the primary interface, are possible with user-defined networks. Network policy limitations : Network policies that enable traffic between namespaces connected to different user-defined primary networks are not effective. These traffic policies do not take effect because there is no connectivity between these isolated networks. 16.2.1.3. 
About the ClusterUserDefinedNetwork CR The ClusterUserDefinedNetwork (UDN) custom resource (CR) provides cluster-scoped network segmentation and isolation for administrators only. The following diagram demonstrates how a cluster administrator can use the ClusterUserDefinedNetwork CR to create network isolation between tenants. This network configuration allows a network to span across many namespaces. In the diagram, network isolation is achieved through the creation of two user-defined networks, udn-1 and udn-2 . These networks are not connected and the spec.namespaceSelector.matchLabels field is used to select different namespaces. For example, udn-1 configures and isolates communication for namespace-1 and namespace-2 , while udn-2 configures and isolates communication for namespace-3 and namespace-4 . Isolated tenants (Tenants 1 and Tenants 2) are created by separating namespaces while also allowing pods in the same namespace to communicate. Figure 16.1. Tenant isolation using a ClusterUserDefinedNetwork CR 16.2.1.3.1. Best practices for ClusterUserDefinedNetwork CRs Before setting up a ClusterUserDefinedNetwork custom resource (CR), users should consider the following information: A ClusterUserDefinedNetwork CR is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network. ClusterUserDefinedNetwork CRs should not select the default namespace. This can result in no isolation and, as a result, could introduce security risks to the cluster. ClusterUserDefinedNetwork CRs should not select openshift-* namespaces. OpenShift Container Platform administrators should be aware that all namespaces of a cluster are selected when one of the following conditions are met: The matchLabels selector is left empty. The matchExpressions selector is left empty. The namespaceSelector is initialized, but does not specify matchExpressions or matchLabel . For example: namespaceSelector: {} . For primary networks, the namespace used for the ClusterUserDefinedNetwork CR must include the k8s.ovn.org/primary-user-defined-network label. This label cannot be updated, and can only be added when the namespace is created. The following conditions apply with the k8s.ovn.org/primary-user-defined-network namespace label: If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a pod is created, the pod attaches itself to the default network. If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a primary ClusterUserDefinedNetwork CR is created that matches the namespace, an error is reported and the network is not created. If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a primary ClusterUserDefinedNetwork CR already exists, a pod in the namespace is created and attached to the default network. If the namespace has the label, and a primary ClusterUserDefinedNetwork CR does not exist, a pod in the namespace is not created until the ClusterUserDefinedNetwork CR is created. 16.2.1.3.2. Creating a ClusterUserDefinedNetwork CR by using the CLI The following procedure creates a ClusterUserDefinedNetwork custom resource (CR) by using the CLI. Based upon your use case, create your request using either the cluster-layer-two-udn.yaml example for a Layer2 topology type or the cluster-layer-three-udn.yaml example for a Layer3 topology type. 
Important The ClusterUserDefinedNetwork CR is intended for use by cluster administrators and should not be used by non-administrators. If used incorrectly, it might result in security issues with your deployment, cause disruptions, or break the cluster network. OpenShift Virtualization only supports the Layer2 topology. Prerequisites You have logged in as a user with cluster-admin privileges. Procedure Optional: For a ClusterUserDefinedNetwork CR that uses a primary network, create a namespace with the k8s.ovn.org/primary-user-defined-network label by entering the following command: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: <cudn_namespace_name> labels: k8s.ovn.org/primary-user-defined-network: "" EOF Create a request for either a Layer2 or Layer3 topology type cluster-wide user-defined network: Create a YAML file, such as cluster-layer-two-udn.yaml , to define your request for a Layer2 topology as in the following example: apiVersion: k8s.ovn.org/v1 kind: ClusterUserDefinedNetwork metadata: name: <cudn_name> 1 spec: namespaceSelector: 2 matchLabels: 3 - "<example_namespace_one>":"" 4 - "<example_namespace_two>":"" 5 network: 6 topology: Layer2 7 layer2: 8 role: Primary 9 subnets: - "2001:db8::/64" - "10.100.0.0/16" 10 1 Name of your ClusterUserDefinedNetwork CR. 2 A label query over the set of namespaces that the cluster UDN CR applies to. Uses the standard Kubernetes MatchLabel selector. Must not point to default or openshift-* namespaces. 3 Uses the matchLabels selector type, where terms are evaluated with an AND relationship. 4 5 Because the matchLabels selector type is used, provisions namespaces matching both <example_namespace_one> and <example_namespace_two> . 6 Describes the network configuration. 7 The topology field describes the network configuration; accepted values are Layer2 and Layer3 . Specifying a Layer2 topology type creates one logical switch that is shared by all nodes. 8 This field specifies the topology configuration. It can be layer2 or layer3 . 9 Specifies Primary or Secondary . Primary is the only role specification supported in 4.18. 10 For Layer2 topology types the following specifies config details for the subnet field: The subnets field is optional. The subnets field is of type string and accepts standard CIDR formats for both IPv4 and IPv6. The subnets field accepts one or two items. For two items, they must be of a different family. For example, subnets values of 10.100.0.0/16 and 2001:db8::/64 . Layer2 subnets can be omitted. If omitted, users must configure static IP addresses for the pods. As a consequence, port security only prevents MAC spoofing. For more information, see "Configuring pods with a static IP address". Create a YAML file, such as cluster-layer-three-udn.yaml , to define your request for a Layer3 topology as in the following example: apiVersion: k8s.ovn.org/v1 kind: ClusterUserDefinedNetwork metadata: name: <cudn_name> 1 spec: namespaceSelector: 2 matchExpressions: 3 - key: kubernetes.io/metadata.name 4 operator: In 5 values: ["<example_namespace_one>", "<example_namespace_two>"] 6 network: 7 topology: Layer3 8 layer3: 9 role: Primary 10 subnets: 11 - cidr: 10.100.0.0/16 hostSubnet: 24 1 Name of your ClusterUserDefinedNetwork CR. 2 A label query over the set of namespaces that the cluster UDN applies to. Uses the standard Kubernetes MatchLabel selector. Must not point to default or openshift-* namespaces. 3 Uses the matchExpressions selector type, where terms are evaluated with an OR relationship. 
4 Specifies the label key to match. 5 Specifies the operator. Valid values include: In , NotIn , Exists , and DoesNotExist . 6 Because the matchExpressions type is used, provisions namespaces matching either <example_namespace_one> or <example_namespace_two> . 7 Describes the network configuration. 8 The topology field describes the network configuration; accepted values are Layer2 and Layer3 . Specifying a Layer3 topology type creates a layer 2 segment per node, each with a different subnet. Layer 3 routing is used to interconnect node subnets. 9 This field specifies the topology configuration. Valid values are layer2 or layer3 . 10 Specifies a Primary or Secondary role. Primary is the only role specification supported in 4.18. 11 For Layer3 topology types the following specifies config details for the subnet field: The subnets field is mandatory. The type for the subnets field is cidr and hostSubnet : cidr is the cluster subnet and accepts a string value. hostSubnet specifies the nodes subnet prefix that the cluster subnet is split to. For IPv6, only a /64 length is supported for hostSubnet . Apply your request by running the following command: USD oc create --validate=true -f <example_cluster_udn>.yaml Where <example_cluster_udn>.yaml is the name of your Layer2 or Layer3 configuration file. Verify that your request is successful by running the following command: USD oc get clusteruserdefinednetwork <cudn_name> -o yaml Where <cudn_name> is the name you created of your cluster-wide user-defined network. Example output apiVersion: k8s.ovn.org/v1 kind: ClusterUserDefinedNetwork metadata: creationTimestamp: "2024-12-05T15:53:00Z" finalizers: - k8s.ovn.org/user-defined-network-protection generation: 1 name: my-cudn resourceVersion: "47985" uid: 16ee0fcf-74d1-4826-a6b7-25c737c1a634 spec: namespaceSelector: matchExpressions: - key: custom.network.selector operator: In values: - example-namespace-1 - example-namespace-2 - example-namespace-3 network: layer3: role: Primary subnets: - cidr: 10.100.0.0/16 topology: Layer3 status: conditions: - lastTransitionTime: "2024-11-19T16:46:34Z" message: 'NetworkAttachmentDefinition has been created in following namespaces: [example-namespace-1, example-namespace-2, example-namespace-3]' reason: NetworkAttachmentDefinitionReady status: "True" type: NetworkCreated 16.2.1.3.3. Creating a ClusterUserDefinedNetwork CR by using the web console You can create a ClusterUserDefinedNetwork custom resource (CR) in the OpenShift Container Platform web console. Prerequisites You have access to the OpenShift Container Platform web console as a user with cluster-admin permissions. You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label. Procedure From the Administrator perspective, click Networking UserDefinedNetworks . Click ClusterUserDefinedNetwork . In the Name field, specify a name for the cluster-scoped UDN. Specify a value in the Subnet field. In the Project(s) Match Labels field, add the appropriate labels to select namespaces that the cluster UDN applies to. Click Create . The cluster-scoped UDN serves as the default primary network for pods located in namespaces that contain the labels that you specified in step 5. Additional resources Configuring pods with a static IP address 16.2.1.4. About the UserDefinedNetwork CR The UserDefinedNetwork (UDN) custom resource (CR) provides advanced network segmentation and isolation for users and administrators. 
The following diagram shows four cluster namespaces, where each namespace has a single assigned user-defined network (UDN), and each UDN has an assigned custom subnet for its pod IP allocations. The OVN-Kubernetes handles any overlapping UDN subnets. Without using the Kubernetes network policy, a pod attached to a UDN can communicate with other pods in that UDN. By default, these pods are isolated from communicating with pods that exist in other UDNs. For microsegmentation, you can apply network policy within a UDN. You can assign one or more UDNs to a namespace, with a limitation of only one primary UDN to a namespace, and one or more namespaces to a UDN. Figure 16.2. Namespace isolation using a UserDefinedNetwork CR 16.2.1.4.1. Best practices for UserDefinedNetwork CRs Before setting up a UserDefinedNetwork custom resource (CR), you should consider the following information: openshift-* namespaces should not be used to set up a UserDefinedNetwork CR. UserDefinedNetwork CRs should not be created in the default namespace. This can result in no isolation and, as a result, could introduce security risks to the cluster. For primary networks, the namespace used for the UserDefinedNetwork CR must include the k8s.ovn.org/primary-user-defined-network label. This label cannot be updated, and can only be added when the namespace is created. The following conditions apply with the k8s.ovn.org/primary-user-defined-network namespace label: If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a pod is created, the pod attaches itself to the default network. If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a primary UserDefinedNetwork CR is created that matches the namespace, a status error is reported and the network is not created. If the namespace is missing the k8s.ovn.org/primary-user-defined-network label and a primary UserDefinedNetwork CR already exists, a pod in the namespace is created and attached to the default network. If the namespace has the label, and a primary UserDefinedNetwork CR does not exist, a pod in the namespace is not created until the UserDefinedNetwork CR is created. 2 masquerade IP addresses are required for user defined networks. You must reconfigure your masquerade subnet to be large enough to hold the required number of networks. Important For OpenShift Container Platform 4.17 and later, clusters use 169.254.0.0/17 for IPv4 and fd69::/112 for IPv6 as the default masquerade subnet. These ranges should be avoided by users. For updated clusters, there is no change to the default masquerade subnet. Changing the cluster's masquerade subnet is unsupported after a user-defined network has been configured for a project. Attempting to modify the masquerade subnet after a UserDefinedNetwork CR has been set up can disrupt the network connectivity and cause configuration issues. Ensure tenants are using the UserDefinedNetwork resource and not the NetworkAttachmentDefinition (NAD) CR. This can create security risks between tenants. When creating network segmentation, you should only use the NetworkAttachmentDefinition CR if user-defined network segmentation cannot be completed using the UserDefinedNetwork CR. The cluster subnet and services CIDR for a UserDefinedNetwork CR cannot overlap with the default cluster subnet CIDR. OVN-Kubernetes network plugin uses 100.64.0.0/16 as the default join subnet for the network. You must not use that value to configure a UserDefinedNetwork CR's joinSubnets field. 
If the default address values are used anywhere in the network for the cluster you must override the default values by setting the joinSubnets field. For more information, see "Additional configuration details for user-defined networks". A layer 2 topology creates a virtual switch that is distributed across all nodes in a cluster. Virtual machines and pods connect to this virtual switch so that all these components can communicate with each other within the same subnet. If you decide not to specify a layer 2 subnet, then you must manually configure IP addresses for each pod in your cluster. When not specifying a layer 2 subnet, port security is limited to preventing Media Access Control (MAC) spoofing only, and does not include IP spoofing. A layer 2 topology creates a single broadcast domain that can be challenging in large network environments, whereby the topology might cause a broadcast storm that can degrade network performance. A layer 3 topology creates a unique layer 2 segment for each node in a cluster. The layer 3 routing mechanism interconnects these segments so that virtual machines and pods that are hosted on different nodes can communicate with each other. A layer 3 topology can effectively manage large broadcast domains by assigning each domain to a specific node, so that broadcast traffic has a reduced scope. To configure a layer 3 topology, you must configure cidr and hostSubnet parameters. 16.2.1.4.2. Creating a UserDefinedNetwork CR by using the CLI The following procedure creates a UserDefinedNetwork CR that is namespace scoped. Based upon your use case, create your request by using either the my-layer-two-udn.yaml example for a Layer2 topology type or the my-layer-three-udn.yaml example for a Layer3 topology type. Procedure Optional: For a UserDefinedNetwork CR that uses a primary network, create a namespace with the k8s.ovn.org/primary-user-defined-network label by entering the following command: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: <udn_namespace_name> labels: k8s.ovn.org/primary-user-defined-network: "" EOF Create a request for either a Layer2 or Layer3 topology type user-defined network: Create a YAML file, such as my-layer-two-udn.yaml , to define your request for a Layer2 topology as in the following example: apiVersion: k8s.ovn.org/v1 kind: UserDefinedNetwork metadata: name: udn-1 1 namespace: <some_custom_namespace> spec: topology: Layer2 2 layer2: 3 role: Primary 4 subnets: - "10.0.0.0/24" - "2001:db8::/60" 5 1 Name of your UserDefinedNetwork resource. This should not be default or duplicate any global namespaces created by the Cluster Network Operator (CNO). 2 The topology field describes the network configuration; accepted values are Layer2 and Layer3 . Specifying a Layer2 topology type creates one logical switch that is shared by all nodes. 3 This field specifies the topology configuration. It can be layer2 or layer3 . 4 Specifies a Primary or Secondary role. 5 For Layer2 topology types the following specifies config details for the subnet field: The subnets field is optional. The subnets field is of type string and accepts standard CIDR formats for both IPv4 and IPv6. The subnets field accepts one or two items. For two items, they must be of a different family. For example, subnets values of 10.100.0.0/16 and 2001:db8::/64 . Layer2 subnets can be omitted. If omitted, users must configure IP addresses for the pods. As a consequence, port security only prevents MAC spoofing. 
The Layer2 subnets field is mandatory when the ipamLifecycle field is specified. Create a YAML file, such as my-layer-three-udn.yaml , to define your request for a Layer3 topology as in the following example: apiVersion: k8s.ovn.org/v1 kind: UserDefinedNetwork metadata: name: udn-2-primary 1 namespace: <some_custom_namespace> spec: topology: Layer3 2 layer3: 3 role: Primary 4 subnets: 5 - cidr: 10.150.0.0/16 hostSubnet: 24 - cidr: 2001:db8::/60 hostSubnet: 64 # ... 1 Name of your UserDefinedNetwork resource. This should not be default or duplicate any global namespaces created by the Cluster Network Operator (CNO). 2 The topology field describes the network configuration; accepted values are Layer2 and Layer3 . Specifying a Layer3 topology type creates a layer 2 segment per node, each with a different subnet. Layer 3 routing is used to interconnect node subnets. 3 This field specifies the topology configuration. Valid values are layer2 or layer3 . 4 Specifies a Primary or Secondary role. 5 For Layer3 topology types the following specifies config details for the subnet field: The subnets field is mandatory. The type for the subnets field is cidr and hostSubnet : cidr is equivalent to the clusterNetwork configuration settings of a cluster. The IP addresses in the CIDR are distributed to pods in the user defined network. This parameter accepts a string value. hostSubnet defines the per-node subnet prefix. For IPv6, only a /64 length is supported for hostSubnet . Apply your request by running the following command: USD oc apply -f <my_layer_two_udn>.yaml Where <my_layer_two_udn>.yaml is the name of your Layer2 or Layer3 configuration file. Verify that your request is successful by running the following command: USD oc get userdefinednetworks udn-1 -n <some_custom_namespace> -o yaml Where some_custom_namespace is the namespace you created for your user-defined network. Example output apiVersion: k8s.ovn.org/v1 kind: UserDefinedNetwork metadata: creationTimestamp: "2024-08-28T17:18:47Z" finalizers: - k8s.ovn.org/user-defined-network-protection generation: 1 name: udn-1 namespace: some-custom-namespace resourceVersion: "53313" uid: f483626d-6846-48a1-b88e-6bbeb8bcde8c spec: layer2: role: Primary subnets: - 10.0.0.0/24 - 2001:db8::/60 topology: Layer2 status: conditions: - lastTransitionTime: "2024-08-28T17:18:47Z" message: NetworkAttachmentDefinition has been created reason: NetworkAttachmentDefinitionReady status: "True" type: NetworkCreated 16.2.1.4.3. Creating a UserDefinedNetwork CR by using the web console You can create a UserDefinedNetwork custom resource by using the OpenShift Container Platform web console. Prerequisites You have access to the OpenShift Container Platform web console as a user with cluster-admin permissions. You have created a namespace and applied the k8s.ovn.org/primary-user-defined-network label. Procedure From the Administrator perspective, click Networking UserDefinedNetworks . Click Create UserDefinedNetwork . From the Project name list, select the namespace that you previously created. Specify a value in the Subnet field. Click Create . The user-defined network serves as the default primary network for pods that you create in this namespace. 16.2.1.5. Additional configuration details for user-defined networks The following table explains additional configurations for ClusterUserDefinedNetwork and UserDefinedNetwork custom resources (CRs) that are optional. 
It is not recommended to set these fields without explicit need and understanding of OVN-Kubernetes network topology. Table 16.4. UserDefinedNetworks optional configurations Field Type Description spec.joinSubnets object When omitted, the platform sets default values for the joinSubnets field of 100.65.0.0/16 for IPv4 and fd99::/64 for IPv6. If the default address values are used anywhere in the cluster's network you must override it by setting the joinSubnets field. If you choose to set this field, ensure it does not conflict with other subnets in the cluster such as the cluster subnet, the default network cluster subnet, and the masquerade subnet. The joinSubnets field configures the routing between different segments within a user-defined network. Dual-stack clusters can set 2 subnets, one for each IP family; otherwise, only 1 subnet is allowed. This field is only allowed for the Primary network. spec.ipam.lifecycle object The spec.ipam.lifecycle field configures the IP address management system (IPAM). You might use this field for virtual workloads to ensure persistent IP addresses. The only allowed value is Persistent , which ensures that your virtual workloads have persistent IP addresses across reboots and migration. These are assigned by the container network interface (CNI) and used by OVN-Kubernetes to program pod IP addresses. You must not change this for pod annotations. Setting a value of Persistent is only supported when spec.ipam.mode is set to Enabled . spec.ipam.mode object The spec.ipam.mode field controls how much of the IP configuration is managed by OVN-Kubernetes. The following options are available: Enabled: When enabled, OVN-Kubernetes applies the IP configuration to the SDN infrastructure and assigns IP addresses from the selected subnet to the individual pods. This is the default setting. When set to Enabled , the subnets field must be defined. Enabled is the default configuration. Disabled: When disabled, OVN-Kubernetes only assigns MAC addresses and provides layer 2 communication, which allows users to configure IP addresses. Disabled is only available for layer 2 (secondary) networks. By disabling IPAM, features that rely on selecting pods by IP, for example, network policy, services, and so on, no longer function. Additionally, IP port security is also disabled for interfaces attached to this network. The subnets field must be empty when spec.ipam.mode is set to Disabled. spec.layer2.mtu and spec.layer3.mtu integer The maximum transmission units (MTU). The default value is 1400 . The boundary for IPv4 is 576 , and for IPv6 it is 1280 . 16.2.1.6. User-defined network status condition types The following tables explain the status condition types returned for ClusterUserDefinedNetwork and UserDefinedNetwork CRs when describing the resource. These conditions can be used to troubleshoot your deployment. Table 16.5. 
NetworkCreated condition types (ClusterDefinedNetwork and UserDefinedNetwork CRs) Condition type Status Reason and Message NetworkCreated True When True , the following reason and message is returned: Reason Message NetworkAttachmentDefinitionCreated 'NetworkAttachmentDefinition has been created in following namespaces: [example-namespace-1, example-namespace-2, example-namespace-3]'` NetworkCreated False When False , one of the following messages is returned: Reason Message SyncError failed to generate NetworkAttachmentDefinition SyncError failed to update NetworkAttachmentDefinition SyncError primary network already exist in namespace "<namespace_name>": "<primary_network_name>" SyncError failed to create NetworkAttachmentDefinition: create NAD error SyncError foreign NetworkAttachmentDefinition with the desired name already exist SyncError failed to add finalizer to UserDefinedNetwork NetworkAttachmentDefinitionDeleted NetworkAttachmentDefinition is being deleted: [<namespace>/<nad_name>] Table 16.6. NetworkAllocationSucceeded condition types (UserDefinedNetwork CRs) Condition type Status Reason and Message NetworkAllocationSucceeded True When True , the following reason and message is returned: Reason Message NetworkAllocationSucceeded Network allocation succeeded for all synced nodes. NetworkAllocationSucceeded False When False , the following message is returned: Reason Message InternalError Network allocation failed for at least one node: [<node_name>], check UDN events for more info. 16.2.1.7. Opening default network ports on user-defined network pods By default, pods on a user-defined network (UDN) are isolated from the default network. This means that default network pods, such as those running monitoring services (Prometheus or Alertmanager) or the OpenShift Container Platform image registry, cannot initiate connections to UDN pods. To allow default network pods to connect to a user-defined network pod, you can use the k8s.ovn.org/open-default-ports annotation. This annotation opens specific ports on the user-defined network pod for access from the default network. The following pod specification allows incoming TCP connections on port 80 and UDP traffic on port 53 from the default network: apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/open-default-ports: | - protocol: tcp port: 80 - protocol: udp port: 53 # ... Note Open ports are accessible on the pod's default network IP, not its UDN network IP. 16.2.2. Creating primary networks using a NetworkAttachmentDefinition The following sections explain how to create and manage additional primary networks using the NetworkAttachmentDefinition (NAD) resource. 16.2.2.1. Approaches to managing an additional network You can manage the life cycle of an additional network created by NAD with one of the following two approaches: By modifying the Cluster Network Operator (CNO) configuration. With this method, the CNO automatically creates and manages the NetworkAttachmentDefinition object. In addition to managing the object lifecycle, the CNO ensures that a DHCP is available for an additional network that uses a DHCP assigned IP address. By applying a YAML manifest. With this method, you can manage the additional network directly by creating an NetworkAttachmentDefinition object. This approach allows for the invocation of multiple CNI plugins in order to attach additional network interfaces in a pod. Each approach is mutually exclusive and you can only use one approach for managing an additional network at a time. 
For either approach, the additional network is managed by a Container Network Interface (CNI) plugin that you configure. Note When deploying OpenShift Container Platform nodes with multiple network interfaces on Red Hat OpenStack Platform (RHOSP) with OVN SDN, DNS configuration of the secondary interface might take precedence over the DNS configuration of the primary interface. In this case, remove the DNS nameservers for the subnet ID that is attached to the secondary interface by running the following command: USD openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id> 16.2.2.2. Creating an additional network attachment with the Cluster Network Operator The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition CRD automatically. Important Do not edit the NetworkAttachmentDefinition CRDs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Optional: Create the namespace for the additional networks: USD oc create namespace <namespace_name> To edit the CNO configuration, enter the following command: USD oc edit networks.operator.openshift.io cluster Modify the CR that you are creating by adding the configuration for the additional network that you are creating, as in the following example CR. apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # ... additionalNetworks: - name: tertiary-net namespace: namespace2 type: Raw rawCNIConfig: |- { "cniVersion": "0.3.1", "name": "tertiary-net", "type": "ipvlan", "master": "eth1", "mode": "l2", "ipam": { "type": "static", "addresses": [ { "address": "192.168.1.23/24" } ] } } Save your changes and quit the text editor to commit your changes. Verification Confirm that the CNO created the NetworkAttachmentDefinition CRD by running the following command. There might be a delay before the CNO creates the CRD. USD oc get network-attachment-definitions -n <namespace> where: <namespace> Specifies the namespace for the network attachment that you added to the CNO configuration. Example output NAME AGE test-network-1 14m 16.2.2.2.1. Configuration for an additional network attachment An additional network is configured by using the NetworkAttachmentDefinition API in the k8s.cni.cncf.io API group. The configuration for the API is described in the following table: Table 16.7. NetworkAttachmentDefinition API fields Field Type Description metadata.name string The name for the additional network. metadata.namespace string The namespace that the object is associated with. spec.config string The CNI plugin configuration in JSON format. 16.2.2.3. Creating an additional network attachment by applying a YAML manifest Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You are working in the namespace where the NAD is to be deployed. Procedure Create a YAML file with your additional network configuration, such as in the following example: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: -net spec: config: |- { "cniVersion": "0.3.1", "name": "work-network", "namespace": "namespace2", 1 "type": "host-device", "device": "eth1", "ipam": { "type": "dhcp" } } 1 Optional: You can specify a namespace to which the NAD is applied. 
If you are working in the namespace where the NAD is to be deployed, this spec is not necessary. To create the additional network, enter the following command: USD oc apply -f <file>.yaml where: <file> Specifies the name of the file containing the YAML manifest. 16.3. Secondary networks 16.3.1. Creating secondary networks on OVN-Kubernetes As a cluster administrator, you can configure an additional secondary network for your cluster using the NetworkAttachmentDefinition (NAD) resource. Note Support for user-defined networks as a secondary network will be added in a future version of OpenShift Container Platform. 16.3.1.1. Configuration for an OVN-Kubernetes additional network The Red Hat OpenShift Networking OVN-Kubernetes network plugin allows the configuration of secondary network interfaces for pods. To configure secondary network interfaces, you must define the configurations in the NetworkAttachmentDefinition custom resource definition (CRD). Note Pod and multi-network policy creation might remain in a pending state until the OVN-Kubernetes control plane agent in the nodes processes the associated network-attachment-definition CRD. You can configure an OVN-Kubernetes additional network in either layer 2 or localnet topologies. A layer 2 topology supports east-west cluster traffic, but does not allow access to the underlying physical network. A localnet topology allows connections to the physical network, but requires additional configuration of the underlying Open vSwitch (OVS) bridge on cluster nodes. The following sections provide example configurations for each of the topologies that OVN-Kubernetes currently allows for secondary networks. Note Network names must be unique. For example, creating multiple NetworkAttachmentDefinition CRDs with different configurations that reference the same network is unsupported. 16.3.1.1.1. Supported platforms for OVN-Kubernetes additional network You can use an OVN-Kubernetes additional network with the following supported platforms: Bare metal IBM Power(R) IBM Z(R) IBM(R) LinuxONE VMware vSphere Red Hat OpenStack Platform (RHOSP) 16.3.1.1.2. OVN-Kubernetes network plugin JSON configuration table The following table describes the configuration parameters for the OVN-Kubernetes CNI network plugin: Table 16.8. OVN-Kubernetes network plugin JSON configuration table Field Type Description cniVersion string The CNI specification version. The required value is 0.3.1 . name string The name of the network. These networks are not namespaced. For example, you can have a network named l2-network referenced from two different NetworkAttachmentDefinition CRDs that exist in two different namespaces. This ensures that pods making use of the NetworkAttachmentDefinition CRD in their own namespaces can communicate over the same secondary network. However, those two different NetworkAttachmentDefinition CRDs must also share the same network-specific parameters such as topology , subnets , mtu , and excludeSubnets . type string The name of the CNI plugin to configure. This value must be set to ovn-k8s-cni-overlay . topology string The topological configuration for the network. Must be one of layer2 or localnet . subnets string The subnet to use for the network across the cluster. For "topology":"layer2" deployments, IPv6 ( 2001:DBB::/64 ) and dual-stack ( 192.168.100.0/24,2001:DBB::/64 ) subnets are supported. When omitted, the logical switch implementing the network only provides layer 2 communication, and users must configure IP addresses for the pods.
Port security only prevents MAC spoofing. mtu string The maximum transmission unit (MTU). The default value, 1300 , is automatically set by the kernel. netAttachDefName string The metadata namespace and name of the network attachment definition CRD where this configuration is included. For example, if this configuration is defined in a NetworkAttachmentDefinition CRD in namespace ns1 named l2-network , this should be set to ns1/l2-network . excludeSubnets string A comma-separated list of CIDRs and IP addresses. IP addresses are removed from the assignable IP address pool and are never passed to the pods. vlanID integer If topology is set to localnet , the specified VLAN tag is assigned to traffic from this additional network. The default is to not assign a VLAN tag. 16.3.1.1.3. Compatibility with multi-network policy The multi-network policy API, which is provided by the MultiNetworkPolicy custom resource definition (CRD) in the k8s.cni.cncf.io API group, is compatible with an OVN-Kubernetes secondary network. When defining a network policy, the network policy rules that can be used depend on whether the OVN-Kubernetes secondary network defines the subnets field. Refer to the following table for details: Table 16.9. Supported multi-network policy selectors based on subnets CNI configuration subnets field specified Allowed multi-network policy selectors Yes podSelector and namespaceSelector ipBlock No ipBlock For example, the following multi-network policy is valid only if the subnets field is defined in the additional network CNI configuration for the additional network named blue2 : Example multi-network policy that uses a pod selector apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: blue2 spec: podSelector: ingress: - from: - podSelector: {} The following example uses the ipBlock network policy selector, which is always valid for an OVN-Kubernetes additional network: Example multi-network policy that uses an IP block selector apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: ingress-ipblock annotations: k8s.v1.cni.cncf.io/policy-for: default/flatl2net spec: podSelector: matchLabels: name: access-control policyTypes: - Ingress ingress: - from: - ipBlock: cidr: 10.200.0.0/30 16.3.1.1.4. Configuration for a localnet switched topology The switched localnet topology interconnects the workloads created as Network Attachment Definitions (NADs) through a cluster-wide logical switch to a physical network. You must map an additional network to the OVN bridge to use it as an OVN-Kubernetes additional network. Bridge mappings allow network traffic to reach the physical network. A bridge mapping associates a physical network name, also known as an interface label, to a bridge created with Open vSwitch (OVS). You can create an NodeNetworkConfigurationPolicy object, part of the nmstate.io/v1 API group, to declaratively create the mapping. This API is provided by the NMState Operator. By using this API you can apply the bridge mapping to nodes that match your specified nodeSelector expression, such as node-role.kubernetes.io/worker: '' . When attaching an additional network, you can either use the existing br-ex bridge or create a new bridge. Which approach to use depends on your specific network infrastructure. If your nodes include only a single network interface, you must use the existing bridge. 
This network interface is owned and managed by OVN-Kubernetes and you must not remove it from the br-ex bridge or alter the interface configuration. If you remove or alter the network interface, your cluster network will stop working correctly. If your nodes include several network interfaces, you can attach a different network interface to a new bridge, and use that for your additional network. This approach provides traffic isolation from your primary cluster network. The localnet1 network is mapped to the br-ex bridge in the following example: Example mapping for sharing a bridge apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: mapping 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: ovn: bridge-mappings: - localnet: localnet1 3 bridge: br-ex 4 state: present 5 1 The name for the configuration object. 2 A node selector that specifies the nodes to apply the node network configuration policy to. 3 The name for the additional network from which traffic is forwarded to the OVS bridge. This additional network must match the name of the spec.config.name field of the NetworkAttachmentDefinition CRD that defines the OVN-Kubernetes additional network. 4 The name of the OVS bridge on the node. This value is required only if you specify state: present . 5 The state for the mapping. Must be either present to add the bridge or absent to remove the bridge. The default value is present . In the following example, the localnet2 network interface is attached to the ovs-br1 bridge. Through this attachment, the network interface is available to the OVN-Kubernetes network plugin as an additional network. Example mapping for nodes with multiple interfaces apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ovs-br1-multiple-networks 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: interfaces: - name: ovs-br1 3 description: |- A dedicated OVS bridge with eth1 as a port allowing all VLANs and untagged traffic type: ovs-bridge state: up bridge: allow-extra-patch-ports: true options: stp: false port: - name: eth1 4 ovn: bridge-mappings: - localnet: localnet2 5 bridge: ovs-br1 6 state: present 7 1 The name for the configuration object. 2 A node selector that specifies the nodes to apply the node network configuration policy to. 3 A new OVS bridge, separate from the default bridge used by OVN-Kubernetes for all cluster traffic. 4 A network device on the host system to associate with this new OVS bridge. 5 The name for the additional network from which traffic is forwarded to the OVS bridge. This additional network must match the name of the spec.config.name field of the NetworkAttachmentDefinition CRD that defines the OVN-Kubernetes additional network. 6 The name of the OVS bridge on the node. This value is required only if you specify state: present . 7 The state for the mapping. Must be either present to add the bridge or absent to remove the bridge. The default value is present . This declarative approach is recommended because the NMState Operator applies additional network configuration to all nodes specified by the node selector automatically and transparently. The following JSON example configures a localnet secondary network: { "cniVersion": "0.3.1", "name": "ns1-localnet-network", "type": "ovn-k8s-cni-overlay", "topology":"localnet", "subnets": "202.10.130.112/28", "vlanID": 33, "mtu": 1500, "netAttachDefName": "ns1/localnet-network", "excludeSubnets": "10.100.200.0/29" } 16.3.1.1.4.1.
Configuration for a layer 2 switched topology The switched (layer 2) topology networks interconnect the workloads through a cluster-wide logical switch. This configuration can be used for IPv6 and dual-stack deployments. Note Layer 2 switched topology networks only allow for the transfer of data packets between pods within a cluster. The following JSON example configures a switched secondary network: { "cniVersion": "0.3.1", "name": "l2-network", "type": "ovn-k8s-cni-overlay", "topology":"layer2", "subnets": "10.100.200.0/24", "mtu": 1300, "netAttachDefName": "ns1/l2-network", "excludeSubnets": "10.100.200.0/29" } 16.3.1.1.5. Configuring pods for additional networks You must specify the secondary network attachments through the k8s.v1.cni.cncf.io/networks annotation. The following example provisions a pod with two secondary attachments, one for each of the attachment configurations presented in this guide. apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: l2-network name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container 16.3.1.1.6. Configuring pods with a static IP address The following example provisions a pod with a static IP address. Note You can specify the IP address for the secondary network attachment of a pod only when the additional network attachment, a namespaced-scoped object, uses a layer 2 or localnet topology. Specifying a static IP address for the pod is only possible when the attachment configuration does not feature subnets. apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "l2-network", 1 "mac": "02:03:04:05:06:07", 2 "interface": "myiface1", 3 "ips": [ "192.0.2.20/24" ] 4 } ]' name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container 1 The name of the network. This value must be unique across all NetworkAttachmentDefinition CRDs. 2 The MAC address to be assigned for the interface. 3 The name of the network interface to be created for the pod. 4 The IP addresses to be assigned to the network interface. 16.3.2. Creating secondary networks with other CNI plugins The specific configuration fields for additional networks are described in the following sections. 16.3.2.1. Configuration for a bridge additional network The following object describes the configuration parameters for the Bridge CNI plugin: Table 16.10. Bridge CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: bridge . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. bridge string Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is cni0 . ipMasq boolean Optional: Set to true to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge's IP address. If the bridge does not have an IP address, this setting has no effect. The default value is false . isGateway boolean Optional: Set to true to assign an IP address to the bridge. The default value is false . 
isDefaultGateway boolean Optional: Set to true to configure the bridge as the default gateway for the virtual network. The default value is false . If isDefaultGateway is set to true , then isGateway is also set to true automatically. forceAddress boolean Optional: Set to true to allow assignment of a previously assigned IP address to the virtual bridge. When set to false , if an IPv4 address or an IPv6 address from overlapping subsets is assigned to the virtual bridge, an error occurs. The default value is false . hairpinMode boolean Optional: Set to true to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay . The default value is false . promiscMode boolean Optional: Set to true to enable promiscuous mode on the bridge. The default value is false . vlan string Optional: Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned. preserveDefaultVlan string Optional: Indicates whether the default vlan must be preserved on the veth end connected to the bridge. Defaults to true. vlanTrunk list Optional: Assign a VLAN trunk tag. The default value is none . mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. enabledad boolean Optional: Enables duplicate address detection for the container side veth . The default value is false . macspoofchk boolean Optional: Enables mac spoof check, limiting the traffic originating from the container to the mac address of the interface. The default value is false . Note The VLAN parameter configures the VLAN tag on the host end of the veth and also enables the vlan_filtering feature on the bridge interface. Note To configure an uplink for an L2 network, you must allow the VLAN on the uplink interface by using the following command: USD bridge vlan add vid VLAN_ID dev DEV 16.3.2.1.1. Bridge CNI plugin configuration example The following example configures an additional network named bridge-net : { "cniVersion": "0.3.1", "name": "bridge-net", "type": "bridge", "isGateway": true, "vlan": 2, "ipam": { "type": "dhcp" } } 16.3.2.2. Configuration for a host device additional network Note Specify your network device by setting only one of the following parameters: device , hwaddr , kernelpath , or pciBusID . The following object describes the configuration parameters for the host-device CNI plugin: Table 16.11. Host device CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: host-device . device string Optional: The name of the device, such as eth0 . hwaddr string Optional: The device hardware MAC address. kernelpath string Optional: The Linux kernel device path, such as /sys/devices/pci0000:00/0000:00:1f.6 . pciBusID string Optional: The PCI address of the network device, such as 0000:00:1f.6 . 16.3.2.2.1. host-device configuration example The following example configures an additional network named hostdev-net : { "cniVersion": "0.3.1", "name": "hostdev-net", "type": "host-device", "device": "eth1" } 16.3.2.3. Configuration for a VLAN additional network The following object describes the configuration parameters for the VLAN, vlan , CNI plugin: Table 16.12. 
VLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: vlan . master string The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used. vlanId integer Set the ID of the vlan . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. dns integer Optional: DNS information to return. For example, a priority-ordered list of DNS nameservers. linkInContainer boolean Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface. Important A NetworkAttachmentDefinition custom resource definition (CRD) with a vlan configuration can be used only on a single pod in a node because the CNI plugin cannot create multiple vlan subinterfaces with the same vlanId on the same master interface. 16.3.2.3.1. VLAN configuration example The following example demonstrates a vlan configuration with an additional network that is named vlan-net : { "name": "vlan-net", "cniVersion": "0.3.1", "type": "vlan", "master": "eth0", "mtu": 1500, "vlanId": 5, "linkInContainer": false, "ipam": { "type": "host-local", "subnet": "10.1.1.0/24" }, "dns": { "nameservers": [ "10.1.1.1", "8.8.8.8" ] } } 16.3.2.4. Configuration for an IPVLAN additional network The following object describes the configuration parameters for the IPVLAN, ipvlan , CNI plugin: Table 16.13. IPVLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: ipvlan . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. This is required unless the plugin is chained. mode string Optional: The operating mode for the virtual network. The value must be l2 , l3 , or l3s . The default value is l2 . master string Optional: The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used. mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. linkInContainer boolean Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface. Important The ipvlan object does not allow virtual interfaces to communicate with the master interface. Therefore the container is not able to reach the host by using the ipvlan interface. Be sure that the container joins a network that provides connectivity to the host, such as a network supporting the Precision Time Protocol ( PTP ). A single master interface cannot simultaneously be configured to use both macvlan and ipvlan . 
For IP allocation schemes that cannot be interface agnostic, the ipvlan plugin can be chained with an earlier plugin that handles this logic. If the master is omitted, then the result must contain a single interface name for the ipvlan plugin to enslave. If ipam is omitted, then the result is used to configure the ipvlan interface. 16.3.2.4.1. IPVLAN CNI plugin configuration example The following example configures an additional network named ipvlan-net : { "cniVersion": "0.3.1", "name": "ipvlan-net", "type": "ipvlan", "master": "eth1", "linkInContainer": false, "mode": "l3", "ipam": { "type": "static", "addresses": [ { "address": "192.168.10.10/24" } ] } } 16.3.2.5. Configuration for a MACVLAN additional network The following object describes the configuration parameters for the MAC Virtual LAN (MACVLAN) Container Network Interface (CNI) plugin: Table 16.14. MACVLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: macvlan . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. mode string Optional: Configures traffic visibility on the virtual network. Must be either bridge , passthru , private , or vepa . If a value is not provided, the default value is bridge . master string Optional: The host network interface to associate with the newly created macvlan interface. If a value is not specified, then the default route interface is used. mtu integer Optional: The maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. linkInContainer boolean Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface. Note If you specify the master key for the plugin configuration, use a different physical network interface than the one that is associated with your primary network plugin to avoid possible conflicts. 16.3.2.5.1. MACVLAN CNI plugin configuration example The following example configures an additional network named macvlan-net : { "cniVersion": "0.3.1", "name": "macvlan-net", "type": "macvlan", "master": "eth1", "linkInContainer": false, "mode": "bridge", "ipam": { "type": "dhcp" } } 16.3.2.6. Configuration for a TAP additional network The following object describes the configuration parameters for the TAP CNI plugin: Table 16.15. TAP CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: tap . mac string Optional: Request the specified MAC address for the interface. mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. selinuxcontext string Optional: The SELinux context to associate with the tap device. Note The value system_u:system_r:container_t:s0 is required for OpenShift Container Platform. multiQueue boolean Optional: Set to true to enable multi-queue. owner integer Optional: The user owning the tap device. group integer Optional: The group owning the tap device. 
bridge string Optional: Set the tap device as a port of an already existing bridge. 16.3.2.6.1. Tap configuration example The following example configures an additional network named mynet : { "name": "mynet", "cniVersion": "0.3.1", "type": "tap", "mac": "00:11:22:33:44:55", "mtu": 1500, "selinuxcontext": "system_u:system_r:container_t:s0", "multiQueue": true, "owner": 0, "group": 0, "bridge": "br1" } 16.3.2.6.2. Setting SELinux boolean for the TAP CNI plugin To create the tap device with the container_t SELinux context, enable the container_use_devices boolean on the host by using the Machine Config Operator (MCO). Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Create a new YAML file, such as setsebool-container-use-devices.yaml , with the following details: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: setsebool.service contents: | [Unit] Description=Set SELinux boolean for the TAP CNI plugin Before=kubelet.service [Service] Type=oneshot ExecStart=/usr/sbin/setsebool container_use_devices=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target Create the new MachineConfig object by running the following command: USD oc apply -f setsebool-container-use-devices.yaml Note Applying any changes to the MachineConfig object causes all affected nodes to gracefully reboot after the change is applied. This update can take some time to be applied. Verify the change is applied by running the following command: USD oc get machineconfigpools Expected output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-e5e0c8e8be9194e7c5a882e047379cfa True False False 3 3 3 0 7d2h worker rendered-worker-d6c9ca107fba6cd76cdcbfcedcafa0f2 True False False 3 3 3 0 7d Note All nodes should be in the updated and ready state. Additional resources For more information about enabling an SELinux boolean on a node, see Setting SELinux booleans . 16.3.3. Attaching a pod to an additional network As a cluster user, you can attach a pod to an additional network. 16.3.3.1. Adding a pod to an additional network You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network. When a pod is created, additional networks are attached to it. However, if a pod already exists, you cannot attach additional networks to it. The pod must be in the same namespace as the additional network. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure Add an annotation to the Pod object. Only one of the following annotation formats can be used: To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the pod: metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1 1 To specify more than one additional network, separate each network with a comma. Do not include whitespace around the commas. If you specify the same additional network multiple times, that pod will have multiple network interfaces attached to that network.
To attach an additional network with customizations, add an annotation with the following format: metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "<network>", 1 "namespace": "<namespace>", 2 "default-route": ["<default-route>"] 3 } ] 1 Specify the name of the additional network defined by a NetworkAttachmentDefinition object. 2 Specify the namespace where the NetworkAttachmentDefinition object is defined. 3 Optional: Specify an override for the default route, such as 192.168.17.1 . To create the pod, enter the following command. Replace <name> with the name of the pod. USD oc create -f <name>.yaml Optional: To Confirm that the annotation exists in the Pod CR, enter the following command, replacing <name> with the name of the pod. USD oc get pod <name> -o yaml In the following example, the example-pod pod is attached to the net1 additional network: USD oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.128.2.14" ], "default": true, "dns": {} },{ "name": "macvlan-bridge", "interface": "net1", "ips": [ "20.2.2.100" ], "mac": "22:2f:60:a5:f8:00", "dns": {} }] name: example-pod namespace: default spec: ... status: ... 1 The k8s.v1.cni.cncf.io/network-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value. 16.3.3.1.1. Specifying pod-specific addressing and routing options When attaching a pod to an additional network, you may want to specify further properties about that network in a particular pod. This allows you to change some aspects of routing, as well as specify static IP addresses and MAC addresses. To accomplish this, you can use the JSON formatted annotations. Prerequisites The pod must be in the same namespace as the additional network. Install the OpenShift CLI ( oc ). You must log in to the cluster. Procedure To add a pod to an additional network while specifying addressing and/or routing options, complete the following steps: Edit the Pod resource definition. If you are editing an existing Pod resource, run the following command to edit its definition in the default editor. Replace <name> with the name of the Pod resource to edit. USD oc edit pod <name> In the Pod resource definition, add the k8s.v1.cni.cncf.io/networks parameter to the pod metadata mapping. The k8s.v1.cni.cncf.io/networks accepts a JSON string of a list of objects that reference the name of NetworkAttachmentDefinition custom resource (CR) names in addition to specifying additional properties. metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1 1 Replace <network> with a JSON object as shown in the following examples. The single quotes are required. In the following example the annotation specifies which network attachment will have the default route, using the default-route parameter. apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "net1" }, { "name": "net2", 1 "default-route": ["192.0.2.1"] 2 }]' spec: containers: - name: example-pod command: ["/bin/bash", "-c", "sleep 2000000000000"] image: centos/tools 1 The name key is the name of the additional network to associate with the pod. 
2 The default-route key specifies a value of a gateway for traffic to be routed over if no other routing entry is present in the routing table. If more than one default-route key is specified, this will cause the pod to fail to become active. The default route will cause any traffic that is not specified in other routes to be routed to the gateway. Important Setting the default route to an interface other than the default network interface for OpenShift Container Platform may cause traffic that is anticipated for pod-to-pod traffic to be routed over another interface. To verify the routing properties of a pod, the oc command may be used to execute the ip command within a pod. USD oc exec -it <pod_name> -- ip route Note You may also reference the pod's k8s.v1.cni.cncf.io/network-status to see which additional network has been assigned the default route, by the presence of the default-route key in the JSON-formatted list of objects. To set a static IP address or MAC address for a pod you can use the JSON formatted annotations. This requires you create networks that specifically allow for this functionality. This can be specified in a rawCNIConfig for the CNO. Edit the CNO CR by running the following command: USD oc edit networks.operator.openshift.io cluster The following YAML describes the configuration parameters for the CNO: Cluster Network Operator YAML configuration name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 ... }' type: Raw 1 Specify a name for the additional network attachment that you are creating. The name must be unique within the specified namespace . 2 Specify the namespace to create the network attachment in. If you do not specify a value, then the default namespace is used. 3 Specify the CNI plugin configuration in JSON format, which is based on the following template. The following object describes the configuration parameters for utilizing static MAC address and IP address using the macvlan CNI plugin: macvlan CNI plugin JSON configuration object using static IP and MAC address { "cniVersion": "0.3.1", "name": "<name>", 1 "plugins": [{ 2 "type": "macvlan", "capabilities": { "ips": true }, 3 "master": "eth0", 4 "mode": "bridge", "ipam": { "type": "static" } }, { "capabilities": { "mac": true }, 5 "type": "tuning" }] } 1 Specifies the name for the additional network attachment to create. The name must be unique within the specified namespace . 2 Specifies an array of CNI plugin configurations. The first object specifies a macvlan plugin configuration and the second object specifies a tuning plugin configuration. 3 Specifies that a request is made to enable the static IP address functionality of the CNI plugin runtime configuration capabilities. 4 Specifies the interface that the macvlan plugin uses. 5 Specifies that a request is made to enable the static MAC address functionality of a CNI plugin. The above network attachment can be referenced in a JSON formatted annotation, along with keys to specify which static IP and MAC address will be assigned to a given pod. Edit the pod with: USD oc edit pod <name> macvlan CNI plugin JSON configuration object using static IP and MAC address apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "<name>", 1 "ips": [ "192.0.2.205/24" ], 2 "mac": "CA:FE:C0:FF:EE:00" 3 } ]' 1 Use the <name> as provided when creating the rawCNIConfig above. 2 Provide an IP address including the subnet mask. 3 Provide the MAC address. 
Note Static IP addresses and MAC addresses do not have to be used at the same time; you may use them individually or together. To verify the IP address and MAC properties of a pod with additional networks, use the oc command to execute the ip command within a pod. USD oc exec -it <pod_name> -- ip a 16.3.4. Configuring multi-network policy Administrators can use the MultiNetworkPolicy API to create multiple network policies that manage traffic for pods attached to secondary networks. For example, you can create policies that allow or deny traffic based on specific ports, IPs/ranges, or labels. Multi-network policies can be used to manage traffic on secondary networks in the cluster. They cannot manage the cluster's default network or the primary network of user-defined networks. As a cluster administrator, you can configure a multi-network policy for any of the following network types: Single-Root I/O Virtualization (SR-IOV) MAC Virtual Local Area Network (MacVLAN) IP Virtual Local Area Network (IPVLAN) Bond Container Network Interface (CNI) over SR-IOV OVN-Kubernetes additional networks Note Configuring multi-network policies for SR-IOV additional networks is only supported with kernel network interface controllers (NICs). SR-IOV is not supported for Data Plane Development Kit (DPDK) applications. 16.3.4.1. Differences between multi-network policy and network policy Although the MultiNetworkPolicy API implements the NetworkPolicy API, there are several important differences: You must use the MultiNetworkPolicy API: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy You must use the multi-networkpolicy resource name when using the CLI to interact with multi-network policies. For example, you can view a multi-network policy object with the oc get multi-networkpolicy <name> command where <name> is the name of a multi-network policy. You must specify an annotation with the name of the network attachment definition that defines the additional network: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> where: <network_name> Specifies the name of a network attachment definition. 16.3.4.2. Enabling multi-network policy for the cluster As a cluster administrator, you can enable multi-network policy support on your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure Create the multinetwork-enable-patch.yaml file with the following YAML: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: useMultiNetworkPolicy: true Configure the cluster to enable multi-network policy: USD oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml Example output network.operator.openshift.io/cluster patched 16.3.4.3. Supporting multi-network policies in IPv6 networks The ICMPv6 Neighbor Discovery Protocol (NDP) is a set of messages and processes that enable devices to discover and maintain information about neighboring nodes. NDP plays a crucial role in IPv6 networks, facilitating the interaction between devices on the same link. The Cluster Network Operator (CNO) deploys the iptables implementation of multi-network policy when the useMultiNetworkPolicy parameter is set to true .
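If you need to confirm that multi-network policy support is enabled, you can query the parameter directly. This is a minimal sketch; the jsonpath expression assumes only the spec.useMultiNetworkPolicy field shown in the patch file above:

USD oc get network.operator.openshift.io cluster -o jsonpath='{.spec.useMultiNetworkPolicy}'

Example output

true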
To support multi-network policies in IPv6 networks the Cluster Network Operator deploys the following set of rules in every pod affected by a multi-network policy: Multi-network policy custom rules kind: ConfigMap apiVersion: v1 metadata: name: multi-networkpolicy-custom-rules namespace: openshift-multus data: custom-v6-rules.txt: | # accept NDP -p icmpv6 --icmpv6-type neighbor-solicitation -j ACCEPT 1 -p icmpv6 --icmpv6-type neighbor-advertisement -j ACCEPT 2 # accept RA/RS -p icmpv6 --icmpv6-type router-solicitation -j ACCEPT 3 -p icmpv6 --icmpv6-type router-advertisement -j ACCEPT 4 1 This rule allows incoming ICMPv6 neighbor solicitation messages, which are part of the neighbor discovery protocol (NDP). These messages help determine the link-layer addresses of neighboring nodes. 2 This rule allows incoming ICMPv6 neighbor advertisement messages, which are part of NDP and provide information about the link-layer address of the sender. 3 This rule permits incoming ICMPv6 router solicitation messages. Hosts use these messages to request router configuration information. 4 This rule allows incoming ICMPv6 router advertisement messages, which give configuration information to hosts. Note You cannot edit these predefined rules. These rules collectively enable essential ICMPv6 traffic for correct network functioning, including address resolution and router communication in an IPv6 environment. With these rules in place and a multi-network policy denying traffic, applications are not expected to experience connectivity issues. 16.3.4.4. Working with multi-network policy As a cluster administrator, you can create, edit, view, and delete multi-network policies. 16.3.4.4.1. Prerequisites You have enabled multi-network policy support for your cluster. 16.3.4.4.2. Creating a multi-network policy using the CLI To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a multi-network policy. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the multi-network policy file name. Define a multi-network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies. apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default annotations: k8s.v1.cni.cncf.io/policy-for:<namespace_name>/<network_name> spec: podSelector: {} policyTypes: - Ingress ingress: [] where: <network_name> Specifies the name of a network attachment definition. Allow ingress from all pods in the same namespace apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: - from: - podSelector: {} where: <network_name> Specifies the name of a network attachment definition. 
Allow ingress traffic to one pod from a particular namespace This policy allows traffic to pods labelled pod-a from pods running in namespace-y . apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-traffic-pod annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y where: <network_name> Specifies the name of a network attachment definition. Restrict traffic to a service This policy when applied ensures every pod with both labels app=bookstore and role=api can only be accessed by pods with label app=bookstore . In this example the application could be a REST API server, marked with labels app=bookstore and role=api . This example addresses the following use cases: Restricting the traffic to a service to only the other microservices that need to use it. Restricting the connections to a database to only permit the application using it. apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: api-allow annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: bookstore role: api ingress: - from: - podSelector: matchLabels: app: bookstore where: <network_name> Specifies the name of a network attachment definition. To create the multi-network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the multi-network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created Note If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 16.3.4.4.3. Editing a multi-network policy You can edit a multi-network policy in a namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. Procedure Optional: To list the multi-network policy objects in a namespace, enter the following command: USD oc get multi-networkpolicy where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Edit the multi-network policy object. If you saved the multi-network policy definition in a file, edit the file and make any necessary changes, and then enter the following command. USD oc apply -n <namespace> -f <policy_file>.yaml where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. <policy_file> Specifies the name of the file containing the network policy. If you need to update the multi-network policy object directly, enter the following command: USD oc edit multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Confirm that the multi-network policy object is updated. 
USD oc describe multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Note If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 16.3.4.4.4. Viewing multi-network policies using the CLI You can examine the multi-network policies in a namespace. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. Procedure List multi-network policies in a namespace: To view multi-network policy objects defined in a namespace, enter the following command: USD oc get multi-networkpolicy Optional: To examine a specific multi-network policy, enter the following command: USD oc describe multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy to inspect. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Note If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 16.3.4.4.5. Deleting a multi-network policy using the CLI You can delete a multi-network policy in a namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. Procedure To delete a multi-network policy object, enter the following command: USD oc delete multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted Note If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 16.3.4.4.6. Creating a default deny all multi-network policy This is a fundamental policy, blocking all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies. This procedure enforces a default deny-by-default policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. 
Save the YAML in the deny-by-default.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default namespace: default 1 annotations: k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name> 2 spec: podSelector: {} 3 policyTypes: 4 - Ingress 5 ingress: [] 6 1 namespace: default deploys this policy to the default namespace. 2 network_name : specifies the name of a network attachment definition. 3 podSelector: is empty, this means it matches all the pods. Therefore, the policy applies to all pods in the default namespace. 4 policyTypes: a list of rule types that the NetworkPolicy relates to. 5 Specifies as Ingress only policyType . 6 There are no ingress rules specified. This causes incoming traffic to be dropped to all pods. Apply the policy by entering the following command: USD oc apply -f deny-by-default.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created 16.3.4.4.7. Creating a multi-network policy to allow traffic from external clients With the deny-by-default policy in place you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web . Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows external service from the public Internet directly or by using a Load Balancer to access the pod. Traffic is only allowed to a pod with the label app=web . Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the web-allow-external.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-external namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {} Apply the policy by entering the following command: USD oc apply -f web-allow-external.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/web-allow-external created This policy allows traffic from all resources, including external traffic as illustrated in the following diagram: 16.3.4.4.8. Creating a multi-network policy allowing traffic to an application from all namespaces Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy that allows traffic from all pods in all namespaces to a particular application. 
Save the YAML in the web-allow-all-namespaces.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-all-namespaces namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2 1 Applies the policy only to app:web pods in default namespace. 2 Selects all pods in all namespaces. Note By default, if you omit specifying a namespaceSelector it does not select any namespaces, which means the policy allows traffic only from the namespace the network policy is deployed to. Apply the policy by entering the following command: USD oc apply -f web-allow-all-namespaces.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/web-allow-all-namespaces created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to deploy an alpine image in the secondary namespace and to start a shell: USD oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 16.3.4.4.9. Creating a multi-network policy allowing traffic to an application from a namespace Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to: Restrict traffic to a production database only to namespaces where production workloads are deployed. Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin, with mode: NetworkPolicy set. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy that allows traffic from all pods in a particular namespaces with a label purpose=production . Save the YAML in the web-allow-prod.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-prod namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2 1 Applies the policy only to app:web pods in the default namespace. 2 Restricts traffic to only pods in namespaces that have the label purpose=production . 
Apply the policy by entering the following command: USD oc apply -f web-allow-prod.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/web-allow-prod created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to create the prod namespace: USD oc create namespace prod Run the following command to label the prod namespace: USD oc label namespace/prod purpose=production Run the following command to create the dev namespace: USD oc create namespace dev Run the following command to label the dev namespace: USD oc label namespace/dev purpose=testing Run the following command to deploy an alpine image in the dev namespace and to start a shell: USD oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is blocked: # wget -qO- --timeout=2 http://web.default Expected output wget: download timed out Run the following command to deploy an alpine image in the prod namespace and start a shell: USD oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 16.3.4.5. Additional resources About network policy Understanding multiple networks Configuring a macvlan network Configuring an SR-IOV network device 16.3.5. Removing a pod from an additional network As a cluster user you can remove a pod from an additional network. 16.3.5.1. Removing a pod from an additional network You can remove a pod from an additional network only by deleting the pod. Prerequisites An additional network is attached to the pod. Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure To delete the pod, enter the following command: USD oc delete pod <name> -n <namespace> <name> is the name of the pod. <namespace> is the namespace that contains the pod. 16.3.6. Editing an additional network As a cluster administrator you can modify the configuration for an existing additional network. 16.3.6.1. Modifying an additional network attachment definition As a cluster administrator, you can make changes to an existing additional network. Any existing pods attached to the additional network will not be updated. Prerequisites You have configured an additional network for your cluster. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure To edit an additional network for your cluster, complete the following steps: Run the following command to edit the Cluster Network Operator (CNO) CR in your default text editor: USD oc edit networks.operator.openshift.io cluster In the additionalNetworks collection, update the additional network with your changes. 
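For example, an edited entry might look like the following sketch, where the addresses value inside rawCNIConfig is the setting being changed; the network name tertiary-net , the namespace namespace2 , and the address shown are illustrative assumptions only:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  additionalNetworks:
  - name: tertiary-net              # existing additional network being modified
    namespace: namespace2
    type: Raw
    rawCNIConfig: |-
      {
        "cniVersion": "0.3.1",
        "name": "tertiary-net",
        "type": "macvlan",
        "master": "eth1",
        "mode": "bridge",
        "ipam": {
          "type": "static",
          "addresses": [
            { "address": "192.168.1.23/24" }
          ]
        }
      }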
Save your changes and quit the text editor to commit your changes. Optional: Confirm that the CNO updated the NetworkAttachmentDefinition object by running the following command. Replace <network-name> with the name of the additional network to display. There might be a delay before the CNO updates the NetworkAttachmentDefinition object to reflect your changes. USD oc get network-attachment-definitions <network-name> -o yaml For example, the following console output displays a NetworkAttachmentDefinition object that is named net1 : USD oc get network-attachment-definitions net1 -o go-template='{{printf "%s\n" .spec.config}}' { "cniVersion": "0.3.1", "type": "macvlan", "master": "ens5", "mode": "bridge", "ipam": {"type":"static","routes":[{"dst":"0.0.0.0/0","gw":"10.128.2.1"}],"addresses":[{"address":"10.128.2.100/23","gateway":"10.128.2.1"}],"dns":{"nameservers":["172.30.0.10"],"domain":"us-west-2.compute.internal","search":["us-west-2.compute.internal"]}} } 16.3.7. Configuring IP address assignment on secondary networks The following sections give instructions and information for how to configure IP address assignments for secondary networks. 16.3.7.1. Configuration of IP address assignment for a network attachment For additional networks, IP addresses can be assigned using an IP Address Management (IPAM) CNI plugin, which supports various assignment methods, including Dynamic Host Configuration Protocol (DHCP) and static assignment. The DHCP IPAM CNI plugin responsible for dynamic assignment of IP addresses operates with two distinct components: CNI Plugin : Responsible for integrating with the Kubernetes networking stack to request and release IP addresses. DHCP IPAM CNI Daemon : A listener for DHCP events that coordinates with existing DHCP servers in the environment to handle IP address assignment requests. This daemon is not a DHCP server itself. For networks requiring type: dhcp in their IPAM configuration, ensure the following: A DHCP server is available and running in the environment. The DHCP server is external to the cluster and is expected to be part of the customer's existing network infrastructure. The DHCP server is appropriately configured to serve IP addresses to the nodes. In cases where a DHCP server is unavailable in the environment, it is recommended to use the Whereabouts IPAM CNI plugin instead. The Whereabouts CNI provides similar IP address management capabilities without the need for an external DHCP server. Note Use the Whereabouts CNI plugin when there is no external DHCP server or where static IP address management is preferred. The Whereabouts plugin includes a reconciler daemon to manage stale IP address allocations. A DHCP lease must be periodically renewed throughout the container's lifetime, so a separate daemon, the DHCP IPAM CNI Daemon, is required. To deploy the DHCP IPAM CNI daemon, modify the Cluster Network Operator (CNO) configuration to trigger the deployment of this daemon as part of the additional network setup. 16.3.7.1.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 16.16. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. 
dns array Optional: An array of objects specifying the DNS configuration. The addresses array requires objects with the following fields: Table 16.17. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 16.18. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 16.19. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses to send DNS queries to. domain string The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } } 16.3.7.1.2. Dynamic IP address (DHCP) assignment configuration A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. Important For an Ethernet network attachment, the SR-IOV Network Operator does not create a DHCP server deployment; the Cluster Network Operator is responsible for creating the minimal DHCP server deployment. To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... The following table describes the configuration parameters for dynamic IP address assignment with DHCP. Table 16.20. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. The following JSON example describes the configuration for dynamic IP address assignment with DHCP. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 16.3.7.1.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The Whereabouts CNI plugin also supports overlapping IP address ranges and configuration of the same CIDR range multiple times within separate NetworkAttachmentDefinition CRDs. This provides greater flexibility and management capabilities in multi-tenant environments. 16.3.7.1.3.1. Dynamic IP address configuration objects The following table describes the configuration objects for dynamic IP address assignment with Whereabouts: Table 16.21. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses.
exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. network_name string Optional: Helps ensure that each group or domain of pods gets its own set of IP addresses, even if they share the same range of IP addresses. Setting this field is important for keeping networks separate and organized, notably in multi-tenant environments. 16.3.7.1.3.2. Dynamic IP address assignment configuration that uses Whereabouts The following example shows a dynamic address assignment configuration that uses Whereabouts: Whereabouts dynamic IP address assignment { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 16.3.7.1.3.3. Dynamic IP address assignment that uses Whereabouts with overlapping IP address ranges The following example shows a dynamic IP address assignment that uses overlapping IP address ranges for multi-tenant networks. NetworkAttachmentDefinition 1 { "ipam": { "type": "whereabouts", "range": "192.0.2.192/29", "network_name": "example_net_common", 1 } } 1 Optional. If set, must match the network_name of NetworkAttachmentDefinition 2 . NetworkAttachmentDefinition 2 { "ipam": { "type": "whereabouts", "range": "192.0.2.192/24", "network_name": "example_net_common", 1 } } 1 Optional. If set, must match the network_name of NetworkAttachmentDefinition 1 . 16.3.7.1.4. Creating a whereabouts-reconciler daemon set The Whereabouts reconciler is responsible for managing dynamic IP address assignments for the pods within a cluster by using the Whereabouts IP Address Management (IPAM) solution. It ensures that each pod gets a unique IP address from the specified IP address range. It also handles IP address releases when pods are deleted or scaled down. Note You can also use a NetworkAttachmentDefinition custom resource definition (CRD) for dynamic IP address assignment. The whereabouts-reconciler daemon set is automatically created when you configure an additional network through the Cluster Network Operator. It is not automatically created when you configure an additional network from a YAML manifest. To trigger the deployment of the whereabouts-reconciler daemon set, you must manually create a whereabouts-shim network attachment by editing the Cluster Network Operator custom resource (CR) file. Use the following procedure to deploy the whereabouts-reconciler daemon set. Procedure Edit the Network.operator.openshift.io custom resource (CR) by running the following command: USD oc edit network.operator.openshift.io cluster Include the additionalNetworks section shown in this example YAML extract within the spec definition of the custom resource (CR): apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster # ... spec: additionalNetworks: - name: whereabouts-shim namespace: default rawCNIConfig: |- { "name": "whereabouts-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "whereabouts" } } type: Raw # ... Save the file and exit the text editor. 
Verify that the whereabouts-reconciler daemon set deployed successfully by running the following command: USD oc get all -n openshift-multus | grep whereabouts-reconciler Example output pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s 16.3.7.1.5. Configuring the Whereabouts IP reconciler schedule The Whereabouts IPAM CNI plugin runs the IP reconciler daily. This process cleans up any stranded IP allocations that might result in exhausting IPs and therefore prevent new pods from getting an IP allocated to them. Use this procedure to change the frequency at which the IP reconciler runs. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have deployed the whereabouts-reconciler daemon set, and the whereabouts-reconciler pods are up and running. Procedure Run the following command to create a ConfigMap object named whereabouts-config in the openshift-multus namespace with a specific cron expression for the IP reconciler: USD oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression="*/15 * * * *" This cron expression indicates the IP reconciler runs every 15 minutes. Adjust the expression based on your specific requirements. Note The whereabouts-reconciler daemon set can only consume a cron expression pattern that includes five asterisks. The sixth, which is used to denote seconds, is currently not supported. Retrieve information about resources related to the whereabouts-reconciler daemon set and pods within the openshift-multus namespace by running the following command: USD oc get all -n openshift-multus | grep whereabouts-reconciler Example output pod/whereabouts-reconciler-2p7hw 1/1 Running 0 4m14s pod/whereabouts-reconciler-76jk7 1/1 Running 0 4m14s pod/whereabouts-reconciler-94zw6 1/1 Running 0 4m14s pod/whereabouts-reconciler-mfh68 1/1 Running 0 4m14s pod/whereabouts-reconciler-pgshz 1/1 Running 0 4m14s pod/whereabouts-reconciler-xn5xz 1/1 Running 0 4m14s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 4m16s Run the following command to verify that the whereabouts-reconciler pod runs the IP reconciler with the configured interval: USD oc -n openshift-multus logs whereabouts-reconciler-2p7hw Example output 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CHMOD 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..data_tmp": RENAME 2024-02-02T16:33:54Z [verbose] using expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] configuration updated to file "/cron-schedule/..data". 
New cron expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] successfully updated CRON configuration id "00c2d1c9-631d-403f-bb86-73ad104a6817" - new cron expression: */15 * * * * 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/config": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_26_17.3874177937": REMOVE 2024-02-02T16:45:00Z [verbose] starting reconciler run 2024-02-02T16:45:00Z [debug] NewReconcileLooper - inferred connection data 2024-02-02T16:45:00Z [debug] listing IP pools 2024-02-02T16:45:00Z [debug] no IP addresses to cleanup 2024-02-02T16:45:00Z [verbose] reconciler success 16.3.7.1.6. Creating a configuration for assignment of dual-stack IP addresses dynamically Dual-stack IP address assignment can be configured with the ipRanges parameter for: IPv4 addresses IPv6 addresses multiple IP address assignment Procedure Set type to whereabouts . Use ipRanges to allocate IP addresses as shown in the following example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { "name": "whereabouts-dual-stack", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "whereabouts", "ipRanges": [ {"range": "192.168.10.0/24"}, {"range": "2001:db8::/64"} ] } } Attach the network to a pod. For more information, see "Adding a pod to an additional network". Verify that all IP addresses are assigned. Run the following command to confirm that the IP addresses are assigned to the pod interfaces. USD oc exec -it mypod -- ip a 16.3.8. Configuring the master interface in the container network namespace The following section provides instructions and information for how to create and manage a MAC-VLAN, IP-VLAN, and VLAN subinterface based on a master interface. 16.3.8.1. About configuring the master interface in the container network namespace You can create a MAC-VLAN, an IP-VLAN, or a VLAN subinterface that is based on a master interface that exists in a container namespace. You can also create a master interface as part of the pod network configuration in a separate network attachment definition CRD. To use a container namespace master interface, you must specify true for the linkInContainer parameter that exists in the subinterface configuration of the NetworkAttachmentDefinition CRD. 16.3.8.1.1. Creating multiple VLANs on SR-IOV VFs An example use case for utilizing this feature is to create multiple VLANs based on SR-IOV VFs. To do so, begin by creating an SR-IOV network and then define the network attachments for the VLAN interfaces. The following example shows how to configure the setup illustrated in this diagram. Figure 16.3. Creating VLANs Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator.
Procedure Create a dedicated container namespace where you want to deploy your pod by using the following command: USD oc new-project test-namespace Create an SR-IOV node policy: Create an SriovNetworkNodePolicy object, and then save the YAML in the sriov-node-network-policy.yaml file: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnic namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: false needVhostNet: true nicSelector: vendor: "15b3" 1 deviceID: "101b" 2 rootDevices: ["00:05.0"] numVfs: 10 priority: 99 resourceName: sriovnic nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" Note The SR-IOV network node policy configuration example, with the setting deviceType: netdevice , is tailored specifically for Mellanox Network Interface Cards (NICs). 1 The vendor hexadecimal code of the SR-IOV network device. The value 15b3 is associated with a Mellanox NIC. 2 The device hexadecimal code of the SR-IOV network device. Apply the YAML by running the following command: USD oc apply -f sriov-node-network-policy.yaml Note Applying this might take some time due to the node requiring a reboot. Create an SR-IOV network: Create the SriovNetwork custom resource (CR) for the additional SR-IOV network attachment as in the following example CR. Save the YAML as the file sriov-network-attachment.yaml : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-network namespace: openshift-sriov-network-operator spec: networkNamespace: test-namespace resourceName: sriovnic spoofChk: "off" trust: "on" Apply the YAML by running the following command: USD oc apply -f sriov-network-attachment.yaml Create the VLAN additional network: Using the following YAML example, create a file named vlan100-additional-network-configuration.yaml : apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: vlan-100 namespace: test-namespace spec: config: | { "cniVersion": "0.4.0", "name": "vlan-100", "plugins": [ { "type": "vlan", "master": "ext0", 1 "mtu": 1500, "vlanId": 100, "linkInContainer": true, 2 "ipam": {"type": "whereabouts", "ipRanges": [{"range": "1.1.1.0/24"}]} } ] } 1 The VLAN configuration needs to specify the master name. This can be configured in the pod networks annotation. 2 The linkInContainer parameter must be specified. 
Apply the YAML file by running the following command: USD oc apply -f vlan100-additional-network-configuration.yaml Create a pod definition by using the earlier specified networks: Using the following YAML example, create a file named pod-a.yaml file: Note The manifest below includes 2 resources: Namespace with security labels Pod definition with appropriate network annotation apiVersion: v1 kind: Namespace metadata: name: test-namespace labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: "false" --- apiVersion: v1 kind: Pod metadata: name: nginx-pod namespace: test-namespace annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "sriov-network", "namespace": "test-namespace", "interface": "ext0" 1 }, { "name": "vlan-100", "namespace": "test-namespace", "interface": "ext0.100" } ]' spec: securityContext: runAsNonRoot: true containers: - name: nginx-container image: nginxinc/nginx-unprivileged:latest securityContext: allowPrivilegeEscalation: false capabilities: drop: ["ALL"] ports: - containerPort: 80 seccompProfile: type: "RuntimeDefault" 1 The name to be used as the master for the VLAN interface. Apply the YAML file by running the following command: USD oc apply -f pod-a.yaml Get detailed information about the nginx-pod within the test-namespace by running the following command: USD oc describe pods nginx-pod -n test-namespace Example output Name: nginx-pod Namespace: test-namespace Priority: 0 Node: worker-1/10.46.186.105 Start Time: Mon, 14 Aug 2023 16:23:13 -0400 Labels: <none> Annotations: k8s.ovn.org/pod-networks: {"default":{"ip_addresses":["10.131.0.26/23"],"mac_address":"0a:58:0a:83:00:1a","gateway_ips":["10.131.0.1"],"routes":[{"dest":"10.128.0.0... k8s.v1.cni.cncf.io/network-status: [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.131.0.26" ], "mac": "0a:58:0a:83:00:1a", "default": true, "dns": {} },{ "name": "test-namespace/sriov-network", "interface": "ext0", "mac": "6e:a7:5e:3f:49:1b", "dns": {}, "device-info": { "type": "pci", "version": "1.0.0", "pci": { "pci-address": "0000:d8:00.2" } } },{ "name": "test-namespace/vlan-100", "interface": "ext0.100", "ips": [ "1.1.1.1" ], "mac": "6e:a7:5e:3f:49:1b", "dns": {} }] k8s.v1.cni.cncf.io/networks: [ { "name": "sriov-network", "namespace": "test-namespace", "interface": "ext0" }, { "name": "vlan-100", "namespace": "test-namespace", "i... openshift.io/scc: privileged Status: Running IP: 10.131.0.26 IPs: IP: 10.131.0.26 16.3.8.1.2. Creating a subinterface based on a bridge master interface in a container namespace You can create a subinterface based on a bridge master interface that exists in a container namespace. Creating a subinterface can be applied to other types of interfaces. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. 
Procedure Create a dedicated container namespace where you want to deploy your pod by entering the following command: USD oc new-project test-namespace Using the following YAML example, create a bridge NetworkAttachmentDefinition custom resource definition (CRD) file named bridge-nad.yaml : apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: bridge-network spec: config: '{ "cniVersion": "0.4.0", "name": "bridge-network", "type": "bridge", "bridge": "br-001", "isGateway": true, "ipMasq": true, "hairpinMode": true, "ipam": { "type": "host-local", "subnet": "10.0.0.0/24", "routes": [{"dst": "0.0.0.0/0"}] } }' Run the following command to apply the NetworkAttachmentDefinition CRD to your OpenShift Container Platform cluster: USD oc apply -f bridge-nad.yaml Verify that you successfully created a NetworkAttachmentDefinition CRD by entering the following command: USD oc get network-attachment-definitions Example output NAME AGE bridge-network 15s Using the following YAML example, create a file named ipvlan-additional-network-configuration.yaml for the IPVLAN additional network configuration: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: ipvlan-net namespace: test-namespace spec: config: '{ "cniVersion": "0.3.1", "name": "ipvlan-net", "type": "ipvlan", "master": "ext0", 1 "mode": "l3", "linkInContainer": true, 2 "ipam": {"type": "whereabouts", "ipRanges": [{"range": "10.0.0.0/24"}]} }' 1 Specifies the ethernet interface to associate with the network attachment. This is subsequently configured in the pod networks annotation. 2 Specifies that the master interface is in the container network namespace. Apply the YAML file by running the following command: USD oc apply -f ipvlan-additional-network-configuration.yaml Verify that the NetworkAttachmentDefinition CRD has been created successfully by running the following command: USD oc get network-attachment-definitions Example output NAME AGE bridge-network 87s ipvlan-net 9s Using the following YAML example, create a file named pod-a.yaml for the pod definition: apiVersion: v1 kind: Pod metadata: name: pod-a namespace: test-namespace annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "bridge-network", "interface": "ext0" 1 }, { "name": "ipvlan-net", "interface": "ext1" } ]' spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-pod image: quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 Specifies the name to be used as the master for the IPVLAN interface. 
Apply the YAML file by running the following command: USD oc apply -f pod-a.yaml Verify that the pod is running by using the following command: USD oc get pod -n test-namespace Example output NAME READY STATUS RESTARTS AGE pod-a 1/1 Running 0 2m36s Show network interface information about the pod-a resource within the test-namespace by running the following command: USD oc exec -n test-namespace pod-a -- ip a Example output 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 3: eth0@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default link/ether 0a:58:0a:d9:00:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.217.0.93/23 brd 10.217.1.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::488b:91ff:fe84:a94b/64 scope link valid_lft forever preferred_lft forever 4: ext0@if107: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.0.0.2/24 brd 10.0.0.255 scope global ext0 valid_lft forever preferred_lft forever inet6 fe80::bcda:bdff:fe7e:f437/64 scope link valid_lft forever preferred_lft forever 5: ext1@ext0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff inet 10.0.0.1/24 brd 10.0.0.255 scope global ext1 valid_lft forever preferred_lft forever inet6 fe80::beda:bd00:17e:f437/64 scope link valid_lft forever preferred_lft forever This output shows that the network interface ext1 is associated with the physical interface ext0 . 16.3.9. Removing an additional network As a cluster administrator you can remove an additional network attachment. 16.3.9.1. Removing an additional network attachment definition As a cluster administrator, you can remove an additional network from your OpenShift Container Platform cluster. The additional network is not removed from any pods it is attached to. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure To remove an additional network from your cluster, complete the following steps: Edit the Cluster Network Operator (CNO) in your default text editor by running the following command: USD oc edit networks.operator.openshift.io cluster Modify the CR by removing the configuration from the additionalNetworks collection for the network attachment definition you are removing. apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1 1 If you are removing the configuration mapping for the only additional network attachment definition in the additionalNetworks collection, you must specify an empty collection. Save your changes and quit the text editor to commit your changes. Optional: Confirm that the additional network CR was deleted by running the following command: USD oc get network-attachment-definition --all-namespaces 16.4. Virtual routing and forwarding 16.4.1. About virtual routing and forwarding Virtual routing and forwarding (VRF) devices combined with IP rules provide the ability to create virtual routing and forwarding domains. VRF reduces the number of permissions needed by CNF, and provides increased visibility of the network topology of secondary networks. 
VRF is used to provide multi-tenancy functionality, for example, where each tenant has its own unique routing tables and requires different default gateways. Processes can bind a socket to the VRF device. Packets sent through the bound socket use the routing table associated with the VRF device. An important feature of VRF is that it impacts only OSI model layer 3 traffic and above, so L2 tools, such as LLDP, are not affected. This allows higher-priority IP rules, such as policy-based routing rules, to take precedence over the VRF device rules when directing specific traffic. 16.4.1.1. Benefits of secondary networks for pods for telecommunications operators In telecommunications use cases, each CNF can potentially be connected to multiple different networks sharing the same address space. These secondary networks can potentially conflict with the cluster's main network CIDR. Using the CNI VRF plugin, network functions can be connected to different customers' infrastructure using the same IP address, keeping different customers isolated, even when the IP addresses overlap with the OpenShift Container Platform IP space. The CNI VRF plugin also reduces the number of permissions needed by CNF and increases the visibility of network topologies of secondary networks. 16.5. Assigning a secondary network to a VRF As a cluster administrator, you can configure an additional network for a virtual routing and forwarding (VRF) domain by using the CNI VRF plugin. The virtual network that this plugin creates is associated with the physical interface that you specify. Using a secondary network with a VRF instance has the following advantages: Workload isolation Isolate workload traffic by configuring a VRF instance for the additional network. Improved security Enable improved security through isolated network paths in the VRF domain. Multi-tenancy support Support multi-tenancy through network segmentation with a unique routing table in the VRF domain for each tenant. Note Applications that use VRFs must bind to a specific device. The common usage is to use the SO_BINDTODEVICE option for a socket. The SO_BINDTODEVICE option binds the socket to the device that is specified in the passed interface name, for example, eth1 . To use the SO_BINDTODEVICE option, the application must have CAP_NET_RAW capabilities. Using a VRF through the ip vrf exec command is not supported in OpenShift Container Platform pods. To use VRF, bind applications directly to the VRF interface. Additional resources About virtual routing and forwarding 16.5.1. Creating an additional network attachment with the CNI VRF plugin The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network. To create an additional network attachment with the CNI VRF plugin, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift cluster as a user with cluster-admin privileges. Procedure Create the Network custom resource (CR) for the additional network attachment and insert the rawCNIConfig configuration for the additional network, as in the following example CR. Save the YAML as the file additional-network-attachment.yaml .
apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "macvlan-vrf", "plugins": [ 1 { "type": "macvlan", "master": "eth1", "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.23/24" } ] } }, { "type": "vrf", 2 "vrfname": "vrf-1", 3 "table": 1001 4 }] }' 1 plugins must be a list. The first item in the list must be the secondary network underpinning the VRF network. The second item in the list is the VRF plugin configuration. 2 type must be set to vrf . 3 vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created. 4 Optional. table is the routing table ID. By default, the tableid parameter is used. If it is not specified, the CNI assigns a free routing table ID to the VRF. Note VRF functions correctly only when the resource is of type netdevice . Create the Network resource: USD oc create -f additional-network-attachment.yaml Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-network-1 . USD oc get network-attachment-definitions -n <namespace> Example output NAME AGE additional-network-1 14m Note There might be a delay before the CNO creates the CR. Verification Create a pod and assign it to the additional network with the VRF instance: Create a YAML file that defines the Pod resource: Example pod-additional-net.yaml file apiVersion: v1 kind: Pod metadata: name: pod-additional-net annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "test-network-1" 1 } ]' spec: containers: - name: example-pod-1 command: ["/bin/bash", "-c", "sleep 9000000"] image: centos:8 1 Specify the name of the additional network with the VRF instance. Create the Pod resource by running the following command: USD oc create -f pod-additional-net.yaml Example output pod/test-pod created Verify that the pod network attachment is connected to the VRF additional network. Start a remote session with the pod and run the following command: USD ip vrf show Example output Name Table ----------------------- vrf-1 1001 Confirm that the VRF interface is the controller for the additional interface: USD ip link Example output 5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode
[ "cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: <cudn_namespace_name> labels: k8s.ovn.org/primary-user-defined-network: \"\" EOF", "apiVersion: k8s.ovn.org/v1 kind: ClusterUserDefinedNetwork metadata: name: <cudn_name> 1 spec: namespaceSelector: 2 matchLabels: 3 - \"<example_namespace_one>\":\"\" 4 - \"<example_namespace_two>\":\"\" 5 network: 6 topology: Layer2 7 layer2: 8 role: Primary 9 subnets: - \"2001:db8::/64\" - \"10.100.0.0/16\" 10", "apiVersion: k8s.ovn.org/v1 kind: ClusterUserDefinedNetwork metadata: name: <cudn_name> 1 spec: namespaceSelector: 2 matchExpressions: 3 - key: kubernetes.io/metadata.name 4 operator: In 5 values: [\"<example_namespace_one>\", \"<example_namespace_two>\"] 6 network: 7 topology: Layer3 8 layer3: 9 role: Primary 10 subnets: 11 - cidr: 10.100.0.0/16 hostSubnet: 24", "oc create --validate=true -f <example_cluster_udn>.yaml", "oc get clusteruserdefinednetwork <cudn_name> -o yaml", "apiVersion: k8s.ovn.org/v1 kind: ClusterUserDefinedNetwork metadata: creationTimestamp: \"2024-12-05T15:53:00Z\" finalizers: - k8s.ovn.org/user-defined-network-protection generation: 1 name: my-cudn resourceVersion: \"47985\" uid: 16ee0fcf-74d1-4826-a6b7-25c737c1a634 spec: namespaceSelector: matchExpressions: - key: custom.network.selector operator: In values: - example-namespace-1 - example-namespace-2 - example-namespace-3 network: layer3: role: Primary subnets: - cidr: 10.100.0.0/16 topology: Layer3 status: conditions: - lastTransitionTime: \"2024-11-19T16:46:34Z\" message: 'NetworkAttachmentDefinition has been created in following namespaces: [example-namespace-1, example-namespace-2, example-namespace-3]' reason: NetworkAttachmentDefinitionReady status: \"True\" type: NetworkCreated", "cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: <udn_namespace_name> labels: k8s.ovn.org/primary-user-defined-network: \"\" EOF", "apiVersion: k8s.ovn.org/v1 kind: UserDefinedNetwork metadata: name: udn-1 1 namespace: <some_custom_namespace> spec: topology: Layer2 2 layer2: 3 role: Primary 4 subnets: - \"10.0.0.0/24\" - \"2001:db8::/60\" 5", "apiVersion: k8s.ovn.org/v1 kind: UserDefinedNetwork metadata: name: udn-2-primary 1 namespace: <some_custom_namespace> spec: topology: Layer3 2 layer3: 3 role: Primary 4 subnets: 5 - cidr: 10.150.0.0/16 hostSubnet: 24 - cidr: 2001:db8::/60 hostSubnet: 64", "oc apply -f <my_layer_two_udn>.yaml", "oc get userdefinednetworks udn-1 -n <some_custom_namespace> -o yaml", "apiVersion: k8s.ovn.org/v1 kind: UserDefinedNetwork metadata: creationTimestamp: \"2024-08-28T17:18:47Z\" finalizers: - k8s.ovn.org/user-defined-network-protection generation: 1 name: udn-1 namespace: some-custom-namespace resourceVersion: \"53313\" uid: f483626d-6846-48a1-b88e-6bbeb8bcde8c spec: layer2: role: Primary subnets: - 10.0.0.0/24 - 2001:db8::/60 topology: Layer2 status: conditions: - lastTransitionTime: \"2024-08-28T17:18:47Z\" message: NetworkAttachmentDefinition has been created reason: NetworkAttachmentDefinitionReady status: \"True\" type: NetworkCreated", "apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/open-default-ports: | - protocol: tcp port: 80 - protocol: udp port: 53", "openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id>", "oc create namespace <namespace_name>", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: - name: tertiary-net namespace: namespace2 type: Raw rawCNIConfig: |- { \"cniVersion\": 
\"0.3.1\", \"name\": \"tertiary-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l2\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.1.23/24\" } ] } }", "oc get network-attachment-definitions -n <namespace>", "NAME AGE test-network-1 14m", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: next-net spec: config: |- { \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"namespace\": \"namespace2\", 1 \"type\": \"host-device\", \"device\": \"eth1\", \"ipam\": { \"type\": \"dhcp\" } }", "oc apply -f <file>.yaml", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: blue2 spec: podSelector: ingress: - from: - podSelector: {}", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: ingress-ipblock annotations: k8s.v1.cni.cncf.io/policy-for: default/flatl2net spec: podSelector: matchLabels: name: access-control policyTypes: - Ingress ingress: - from: - ipBlock: cidr: 10.200.0.0/30", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: mapping 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: ovn: bridge-mappings: - localnet: localnet1 3 bridge: br-ex 4 state: present 5", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ovs-br1-multiple-networks 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: interfaces: - name: ovs-br1 3 description: |- A dedicated OVS bridge with eth1 as a port allowing all VLANs and untagged traffic type: ovs-bridge state: up bridge: allow-extra-patch-ports: true options: stp: false port: - name: eth1 4 ovn: bridge-mappings: - localnet: localnet2 5 bridge: ovs-br1 6 state: present 7", "{ \"cniVersion\": \"0.3.1\", \"name\": \"ns1-localnet-network\", \"type\": \"ovn-k8s-cni-overlay\", \"topology\":\"localnet\", \"subnets\": \"202.10.130.112/28\", \"vlanID\": 33, \"mtu\": 1500, \"netAttachDefName\": \"ns1/localnet-network\" \"excludeSubnets\": \"10.100.200.0/29\" }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"l2-network\", \"type\": \"ovn-k8s-cni-overlay\", \"topology\":\"layer2\", \"subnets\": \"10.100.200.0/24\", \"mtu\": 1300, \"netAttachDefName\": \"ns1/l2-network\", \"excludeSubnets\": \"10.100.200.0/29\" }", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: l2-network name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"l2-network\", 1 \"mac\": \"02:03:04:05:06:07\", 2 \"interface\": \"myiface1\", 3 \"ips\": [ \"192.0.2.20/24\" ] 4 } ]' name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container", "bridge vlan add vid VLAN_ID dev DEV", "{ \"cniVersion\": \"0.3.1\", \"name\": \"bridge-net\", \"type\": \"bridge\", \"isGateway\": true, \"vlan\": 2, \"ipam\": { \"type\": \"dhcp\" } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"hostdev-net\", \"type\": \"host-device\", \"device\": \"eth1\" }", "{ \"name\": \"vlan-net\", \"cniVersion\": \"0.3.1\", \"type\": \"vlan\", \"master\": \"eth0\", \"mtu\": 1500, \"vlanId\": 5, \"linkInContainer\": false, \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.1.0/24\" }, \"dns\": { \"nameservers\": [ 
\"10.1.1.1\", \"8.8.8.8\" ] } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"ipvlan-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"linkInContainer\": false, \"mode\": \"l3\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.10.10/24\" } ] } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-net\", \"type\": \"macvlan\", \"master\": \"eth1\", \"linkInContainer\": false, \"mode\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } }", "{ \"name\": \"mynet\", \"cniVersion\": \"0.3.1\", \"type\": \"tap\", \"mac\": \"00:11:22:33:44:55\", \"mtu\": 1500, \"selinuxcontext\": \"system_u:system_r:container_t:s0\", \"multiQueue\": true, \"owner\": 0, \"group\": 0 \"bridge\": \"br1\" }", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: setsebool.service contents: | [Unit] Description=Set SELinux boolean for the TAP CNI plugin Before=kubelet.service [Service] Type=oneshot ExecStart=/usr/sbin/setsebool container_use_devices=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target", "oc apply -f setsebool-container-use-devices.yaml", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-e5e0c8e8be9194e7c5a882e047379cfa True False False 3 3 3 0 7d2h worker rendered-worker-d6c9ca107fba6cd76cdcbfcedcafa0f2 True False False 3 3 3 0 7d", "metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1", "metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]", "oc create -f <name>.yaml", "oc get pod <name> -o yaml", "oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:", "oc edit pod <name>", "metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1", "apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"net1\" }, { \"name\": \"net2\", 1 \"default-route\": [\"192.0.2.1\"] 2 }]' spec: containers: - name: example-pod command: [\"/bin/bash\", \"-c\", \"sleep 2000000000000\"] image: centos/tools", "oc exec -it <pod_name> -- ip route", "oc edit networks.operator.openshift.io cluster", "name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 }' type: Raw", "{ \"cniVersion\": \"0.3.1\", \"name\": \"<name>\", 1 \"plugins\": [{ 2 \"type\": \"macvlan\", \"capabilities\": { \"ips\": true }, 3 \"master\": \"eth0\", 4 \"mode\": \"bridge\", \"ipam\": { \"type\": \"static\" } }, { \"capabilities\": { \"mac\": true }, 5 \"type\": \"tuning\" }] }", "oc edit pod <name>", "apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"<name>\", 1 \"ips\": [ \"192.0.2.205/24\" ], 2 \"mac\": \"CA:FE:C0:FF:EE:00\" 3 } ]'", "oc exec -it <pod_name> -- ip a", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: 
MultiNetworkPolicy", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: annotations: k8s.v1.cni.cncf.io/policy-for: <network_name>", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: useMultiNetworkPolicy: true", "oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml", "network.operator.openshift.io/cluster patched", "kind: ConfigMap apiVersion: v1 metadata: name: multi-networkpolicy-custom-rules namespace: openshift-multus data: custom-v6-rules.txt: | # accept NDP -p icmpv6 --icmpv6-type neighbor-solicitation -j ACCEPT 1 -p icmpv6 --icmpv6-type neighbor-advertisement -j ACCEPT 2 # accept RA/RS -p icmpv6 --icmpv6-type router-solicitation -j ACCEPT 3 -p icmpv6 --icmpv6-type router-advertisement -j ACCEPT 4", "touch <policy_name>.yaml", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default annotations: k8s.v1.cni.cncf.io/policy-for:<namespace_name>/<network_name> spec: podSelector: {} policyTypes: - Ingress ingress: []", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: - from: - podSelector: {}", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-traffic-pod annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: api-allow annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: bookstore role: api ingress: - from: - podSelector: matchLabels: app: bookstore", "oc apply -f <policy_name>.yaml -n <namespace>", "multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created", "oc get multi-networkpolicy", "oc apply -n <namespace> -f <policy_file>.yaml", "oc edit multi-networkpolicy <policy_name> -n <namespace>", "oc describe multi-networkpolicy <policy_name> -n <namespace>", "oc get multi-networkpolicy", "oc describe multi-networkpolicy <policy_name> -n <namespace>", "oc delete multi-networkpolicy <policy_name> -n <namespace>", "multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default namespace: default 1 annotations: k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name> 2 spec: podSelector: {} 3 policyTypes: 4 - Ingress 5 ingress: [] 6", "oc apply -f deny-by-default.yaml", "multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-external namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {}", "oc apply -f web-allow-external.yaml", "multinetworkpolicy.k8s.cni.cncf.io/web-allow-external created", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-all-namespaces namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2", "oc apply -f web-allow-all-namespaces.yaml", "multinetworkpolicy.k8s.cni.cncf.io/web-allow-all-namespaces created", "oc run web 
--namespace=default --image=nginx --labels=\"app=web\" --expose --port=80", "oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-prod namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2", "oc apply -f web-allow-prod.yaml", "multinetworkpolicy.k8s.cni.cncf.io/web-allow-prod created", "oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80", "oc create namespace prod", "oc label namespace/prod purpose=production", "oc create namespace dev", "oc label namespace/dev purpose=testing", "oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "wget: download timed out", "oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh", "wget -qO- --timeout=2 http://web.default", "<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. 
Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>", "oc delete pod <name> -n <namespace>", "oc edit networks.operator.openshift.io cluster", "oc get network-attachment-definitions <network-name> -o yaml", "oc get network-attachment-definitions net1 -o go-template='{{printf \"%s\\n\" .spec.config}}' { \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens5\", \"mode\": \"bridge\", \"ipam\": {\"type\":\"static\",\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.128.2.1\"}],\"addresses\":[{\"address\":\"10.128.2.100/23\",\"gateway\":\"10.128.2.1\"}],\"dns\":{\"nameservers\":[\"172.30.0.10\"],\"domain\":\"us-west-2.compute.internal\",\"search\":[\"us-west-2.compute.internal\"]}} }", "{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #", "{ \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/29\", \"network_name\": \"example_net_common\", 1 } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/24\", \"network_name\": \"example_net_common\", 1 } }", "oc edit network.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default rawCNIConfig: |- { \"name\": \"whereabouts-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\" } } type: Raw", "oc get all -n openshift-multus | grep whereabouts-reconciler", "pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s", "oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression=\"*/15 * * * *\"", "oc get all -n openshift-multus | grep whereabouts-reconciler", "pod/whereabouts-reconciler-2p7hw 1/1 Running 0 4m14s pod/whereabouts-reconciler-76jk7 1/1 Running 0 4m14s pod/whereabouts-reconciler-94zw6 1/1 Running 0 4m14s pod/whereabouts-reconciler-mfh68 1/1 Running 0 4m14s pod/whereabouts-reconciler-pgshz 1/1 Running 0 4m14s pod/whereabouts-reconciler-xn5xz 1/1 Running 0 4m14s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 4m16s", "oc -n openshift-multus logs whereabouts-reconciler-2p7hw", "2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..2024_02_02_16_33_54.1375928161\": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..2024_02_02_16_33_54.1375928161\": CHMOD 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..data_tmp\": RENAME 2024-02-02T16:33:54Z [verbose] using expression: */15 * * * * 
2024-02-02T16:33:54Z [verbose] configuration updated to file \"/cron-schedule/..data\". New cron expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] successfully updated CRON configuration id \"00c2d1c9-631d-403f-bb86-73ad104a6817\" - new cron expression: */15 * * * * 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/config\": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..2024_02_02_16_26_17.3874177937\": REMOVE 2024-02-02T16:45:00Z [verbose] starting reconciler run 2024-02-02T16:45:00Z [debug] NewReconcileLooper - inferred connection data 2024-02-02T16:45:00Z [debug] listing IP pools 2024-02-02T16:45:00Z [debug] no IP addresses to cleanup 2024-02-02T16:45:00Z [verbose] reconciler success", "cniVersion: operator.openshift.io/v1 kind: Network =metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"whereabouts-dual-stack\", \"cniVersion\": \"0.3.1, \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"ipRanges\": [ {\"range\": \"192.168.10.0/24\"}, {\"range\": \"2001:db8::/64\"} ] } }", "oc exec -it mypod -- ip a", "oc new-project test-namespace", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnic namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: false needVhostNet: true nicSelector: vendor: \"15b3\" 1 deviceID: \"101b\" 2 rootDevices: [\"00:05.0\"] numVfs: 10 priority: 99 resourceName: sriovnic nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\"", "oc apply -f sriov-node-network-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-network namespace: openshift-sriov-network-operator spec: networkNamespace: test-namespace resourceName: sriovnic spoofChk: \"off\" trust: \"on\"", "oc apply -f sriov-network-attachment.yaml", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: vlan-100 namespace: test-namespace spec: config: | { \"cniVersion\": \"0.4.0\", \"name\": \"vlan-100\", \"plugins\": [ { \"type\": \"vlan\", \"master\": \"ext0\", 1 \"mtu\": 1500, \"vlanId\": 100, \"linkInContainer\": true, 2 \"ipam\": {\"type\": \"whereabouts\", \"ipRanges\": [{\"range\": \"1.1.1.0/24\"}]} } ] }", "oc apply -f vlan100-additional-network-configuration.yaml", "apiVersion: v1 kind: Namespace metadata: name: test-namespace labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: \"false\" --- apiVersion: v1 kind: Pod metadata: name: nginx-pod namespace: test-namespace annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"sriov-network\", \"namespace\": \"test-namespace\", \"interface\": \"ext0\" 1 }, { \"name\": \"vlan-100\", \"namespace\": \"test-namespace\", \"interface\": \"ext0.100\" } ]' spec: securityContext: runAsNonRoot: true containers: - name: nginx-container image: nginxinc/nginx-unprivileged:latest securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] ports: - containerPort: 80 seccompProfile: type: \"RuntimeDefault\"", "oc apply -f pod-a.yaml", "oc describe pods nginx-pod -n test-namespace", "Name: nginx-pod Namespace: test-namespace Priority: 0 Node: worker-1/10.46.186.105 Start Time: Mon, 14 Aug 2023 16:23:13 -0400 Labels: <none> Annotations: k8s.ovn.org/pod-networks: 
{\"default\":{\"ip_addresses\":[\"10.131.0.26/23\"],\"mac_address\":\"0a:58:0a:83:00:1a\",\"gateway_ips\":[\"10.131.0.1\"],\"routes\":[{\"dest\":\"10.128.0.0 k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.26\" ], \"mac\": \"0a:58:0a:83:00:1a\", \"default\": true, \"dns\": {} },{ \"name\": \"test-namespace/sriov-network\", \"interface\": \"ext0\", \"mac\": \"6e:a7:5e:3f:49:1b\", \"dns\": {}, \"device-info\": { \"type\": \"pci\", \"version\": \"1.0.0\", \"pci\": { \"pci-address\": \"0000:d8:00.2\" } } },{ \"name\": \"test-namespace/vlan-100\", \"interface\": \"ext0.100\", \"ips\": [ \"1.1.1.1\" ], \"mac\": \"6e:a7:5e:3f:49:1b\", \"dns\": {} }] k8s.v1.cni.cncf.io/networks: [ { \"name\": \"sriov-network\", \"namespace\": \"test-namespace\", \"interface\": \"ext0\" }, { \"name\": \"vlan-100\", \"namespace\": \"test-namespace\", \"i openshift.io/scc: privileged Status: Running IP: 10.131.0.26 IPs: IP: 10.131.0.26", "oc new-project test-namespace", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bridge-network spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"bridge-network\", \"type\": \"bridge\", \"bridge\": \"br-001\", \"isGateway\": true, \"ipMasq\": true, \"hairpinMode\": true, \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.0.0.0/24\", \"routes\": [{\"dst\": \"0.0.0.0/0\"}] } }'", "oc apply -f bridge-nad.yaml", "oc get network-attachment-definitions", "NAME AGE bridge-network 15s", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: ipvlan-net namespace: test-namespace spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"ipvlan-net\", \"type\": \"ipvlan\", \"master\": \"ext0\", 1 \"mode\": \"l3\", \"linkInContainer\": true, 2 \"ipam\": {\"type\": \"whereabouts\", \"ipRanges\": [{\"range\": \"10.0.0.0/24\"}]} }'", "oc apply -f ipvlan-additional-network-configuration.yaml", "oc get network-attachment-definitions", "NAME AGE bridge-network 87s ipvlan-net 9s", "apiVersion: v1 kind: Pod metadata: name: pod-a namespace: test-namespace annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"bridge-network\", \"interface\": \"ext0\" 1 }, { \"name\": \"ipvlan-net\", \"interface\": \"ext1\" } ]' spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-pod image: quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc apply -f pod-a.yaml", "oc get pod -n test-namespace", "NAME READY STATUS RESTARTS AGE pod-a 1/1 Running 0 2m36s", "oc exec -n test-namespace pod-a -- ip a", "1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 3: eth0@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default link/ether 0a:58:0a:d9:00:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.217.0.93/23 brd 10.217.1.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::488b:91ff:fe84:a94b/64 scope link valid_lft forever preferred_lft forever 4: ext0@if107: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.0.0.2/24 brd 10.0.0.255 
scope global ext0 valid_lft forever preferred_lft forever inet6 fe80::bcda:bdff:fe7e:f437/64 scope link valid_lft forever preferred_lft forever 5: ext1@ext0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff inet 10.0.0.1/24 brd 10.0.0.255 scope global ext1 valid_lft forever preferred_lft forever inet6 fe80::beda:bd00:17e:f437/64 scope link valid_lft forever preferred_lft forever", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1", "oc get network-attachment-definition --all-namespaces", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-vrf\", \"plugins\": [ 1 { \"type\": \"macvlan\", \"master\": \"eth1\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.23/24\" } ] } }, { \"type\": \"vrf\", 2 \"vrfname\": \"vrf-1\", 3 \"table\": 1001 4 }] }'", "oc create -f additional-network-attachment.yaml", "oc get network-attachment-definitions -n <namespace>", "NAME AGE additional-network-1 14m", "apiVersion: v1 kind: Pod metadata: name: pod-additional-net annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"test-network-1\" 1 } ]' spec: containers: - name: example-pod-1 command: [\"/bin/bash\", \"-c\", \"sleep 9000000\"] image: centos:8", "oc create -f pod-additional-net.yaml", "pod/test-pod created", "ip vrf show", "Name Table ----------------------- vrf-1 1001", "ip link", "5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/networking/multiple-networks
Eclipse Temurin 8.0.442 release notes
Eclipse Temurin 8.0.442 release notes Red Hat build of OpenJDK 8 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.442_release_notes/index
3.5. Execution Properties
3.5. Execution Properties The following table provides a list of execution properties as defined in org.teiid.jdbc.ExecutionProperties . These can be modified using the SET statement. Table 3.2. Execution Properties Constant Identifier String Value Description ANSI_QUOTED_IDENTIFIERS ansiQuotedIdentifiers See Section 1.9, "Connection Properties for the Driver and Data Source Classes" . DISABLE_LOCAL_TRANSACTIONS disableLocalTxn See Section 1.9, "Connection Properties for the Driver and Data Source Classes" . JDBC4COLUMNNAMEANDLABELSEMANTICS useJDBC4ColumnNameAndLabelSemantics See Section 1.9, "Connection Properties for the Driver and Data Source Classes" . NOEXEC See Section 1.9, "Connection Properties for the Driver and Data Source Classes" . PROP_FETCH_SIZE fetchSize See Section 1.9, "Connection Properties for the Driver and Data Source Classes" . PROP_PARTIAL_RESULTS_MODE partialResultsMode See Section 1.9, "Connection Properties for the Driver and Data Source Classes" and Section 3.12, "Partial Results Mode" . PROP_TXN_AUTO_WRAP autoCommitTxn See Section 1.9, "Connection Properties for the Driver and Data Source Classes" . PROP_XML_FORMAT XMLFormat Determines the formatting of XML documents returned by XML document models. Can be one of XML_COMPACT_FORMAT or XML_TREE_FORMAT . See Section 3.7, "XML Document Formatting" for more information. PROP_XML_VALIDATION XMLValidation Determines whether XML documents returned by XML document models will be validated against their schema after processing. See Section 3.8, "XML Schema Validation" and topics on "XML SELECT" in the JBoss Data Virtualization Development Guide: Reference Material for more information. QUERYTIMEOUT QUERYTIMEOUT See Section 1.9, "Connection Properties for the Driver and Data Source Classes" . RESULT_SET_CACHE_MODE resultSetCacheMode See Section 1.9, "Connection Properties for the Driver and Data Source Classes" . SQL_OPTION_SHOWPLAN SHOWPLAN See Section 1.9, "Connection Properties for the Driver and Data Source Classes" . See Also: Section 3.9, "The SET Statement"
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/Execution_Properties1
Chapter 5. Secure Installation
Chapter 5. Secure Installation Security begins with the first time you put that CD or DVD into your disk drive to install Red Hat Enterprise Linux. Configuring your system securely from the beginning makes it easier to implement additional security settings later. 5.1. Disk Partitions Red Hat recommends creating separate partitions for /boot , / , /home , /tmp/ , and /var/tmp/ . If the root partition ( / ) becomes corrupt, your data could be lost forever. By using separate partitions, the data is slightly more protected. You can also target these partitions for frequent backups. The purpose of each partition is different, and we will address each one. /boot - This partition is the first partition that is read by the system during boot. The boot loader and kernel images that are used to boot your system into Red Hat Enterprise Linux are stored in this partition. This partition should not be encrypted. If this partition is included in / and that partition is encrypted or otherwise becomes unavailable, your system will not be able to boot. /home - When user data ( /home ) is stored in / instead of a separate partition, the partition can fill up, causing the operating system to become unstable. Also, when upgrading your system to a newer version of Red Hat Enterprise Linux, it is a lot easier if you can keep your data in the /home partition as it will not be overwritten during installation. /tmp and /var/tmp/ - Both the /tmp and the /var/tmp/ directories are used to store data that does not need to be stored for a long period of time. However, if a lot of data floods one of these directories, it can consume all of your storage space. If this happens and these directories are stored within / , your system could become unstable and crash. For this reason, moving these directories into their own partitions is a good idea.
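Before repartitioning, it can help to confirm which of these directories are already separate mount points. The following commands are a minimal verification sketch that assumes a running system with the standard df and mount utilities; device names will differ per installation.

# Show which file system each directory resides on; entries that share the
# "Mounted on" value of / are not on their own partition.
$ df -h /boot / /home /tmp /var/tmp

# List only the mounts of interest; a directory missing from this output is
# not a separate mount point.
$ mount | grep -E 'on /(boot|home|tmp|var/tmp) '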
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/chap-security_guide-secure_installation
Chapter 14. Uninstalling Logging
Chapter 14. Uninstalling Logging You can remove logging from your OpenShift Container Platform cluster by removing installed Operators and related custom resources (CRs). 14.1. Uninstalling the logging You can stop aggregating logs by deleting the Red Hat OpenShift Logging Operator and the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. Procedure Go to the Administration Custom Resource Definitions page, and click ClusterLogging . On the Custom Resource Definition Details page, click Instances . Click the options menu next to the instance, and click Delete ClusterLogging . Go to the Administration Custom Resource Definitions page. Click the options menu next to ClusterLogging , and select Delete Custom Resource Definition . Warning Deleting the ClusterLogging CR does not remove the persistent volume claims (PVCs). To delete the remaining PVCs, persistent volumes (PVs), and associated data, you must take further action. Releasing or deleting PVCs can delete PVs and cause data loss. If you have created a ClusterLogForwarder CR, click the options menu next to ClusterLogForwarder , and then click Delete Custom Resource Definition . Go to the Operators Installed Operators page. Click the options menu next to the Red Hat OpenShift Logging Operator, and then click Uninstall Operator . Optional: Delete the openshift-logging project. Warning Deleting the openshift-logging project deletes everything in that namespace, including any persistent volume claims (PVCs). If you want to preserve logging data, do not delete the openshift-logging project. Go to the Home Projects page. Click the options menu next to the openshift-logging project, and then click Delete Project . Confirm the deletion by typing openshift-logging in the dialog box, and then click Delete . 14.2. Deleting logging PVCs To keep persistent volume claims (PVCs) for reuse with other pods, keep the labels or PVC names that you need to reclaim the PVCs. If you do not want to keep the PVCs, you can delete them. If you want to recover storage space, you can also delete the persistent volumes (PVs). Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. Procedure Go to the Storage Persistent Volume Claims page. Click the options menu next to each PVC, and select Delete Persistent Volume Claim . 14.3. Uninstalling Loki Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. If you have not already removed the Red Hat OpenShift Logging Operator and related resources, you have removed references to LokiStack from the ClusterLogging custom resource. Procedure Go to the Administration Custom Resource Definitions page, and click LokiStack . On the Custom Resource Definition Details page, click Instances . Click the options menu next to the instance, and then click Delete LokiStack . Go to the Administration Custom Resource Definitions page. Click the options menu next to LokiStack , and select Delete Custom Resource Definition . Delete the object storage secret. Go to the Operators Installed Operators page. Click the options menu next to the Loki Operator, and then click Uninstall Operator . Optional: Delete the openshift-operators-redhat project. Important Do not delete the openshift-operators-redhat project if other global Operators are installed in this namespace. 
Go to the Home Projects page. Click the options menu next to the openshift-operators-redhat project, and then click Delete Project . Confirm the deletion by typing openshift-operators-redhat in the dialog box, and then click Delete . 14.4. Uninstalling Elasticsearch Prerequisites You have administrator permissions. You have access to the Administrator perspective of the OpenShift Container Platform web console. If you have not already removed the Red Hat OpenShift Logging Operator and related resources, you must remove references to Elasticsearch from the ClusterLogging custom resource. Procedure Go to the Administration Custom Resource Definitions page, and click Elasticsearch . On the Custom Resource Definition Details page, click Instances . Click the options menu next to the instance, and then click Delete Elasticsearch . Go to the Administration Custom Resource Definitions page. Click the options menu next to Elasticsearch , and select Delete Custom Resource Definition . Delete the object storage secret. Go to the Operators Installed Operators page. Click the options menu next to the OpenShift Elasticsearch Operator, and then click Uninstall Operator . Optional: Delete the openshift-operators-redhat project. Important Do not delete the openshift-operators-redhat project if other global Operators are installed in this namespace. Go to the Home Projects page. Click the options menu next to the openshift-operators-redhat project, and then click Delete Project . Confirm the deletion by typing openshift-operators-redhat in the dialog box, and then click Delete . 14.5. Deleting Operators from a cluster using the CLI Cluster administrators can delete installed Operators from a selected namespace by using the CLI. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. oc command installed on your workstation. Procedure Ensure the latest version of the subscribed operator (for example, serverless-operator ) is identified in the currentCSV field. USD oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV Example output currentCSV: serverless-operator.v1.28.0 Delete the subscription (for example, serverless-operator ): USD oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless Example output subscription.operators.coreos.com "serverless-operator" deleted Delete the CSV for the Operator in the target namespace using the currentCSV value from the previous step: USD oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless Example output clusterserviceversion.operators.coreos.com "serverless-operator.v1.28.0" deleted Additional resources Reclaiming a persistent volume manually
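The same cleanup can be scripted with the oc CLI instead of the web console. The following commands are a minimal sketch; they assume the ClusterLogging CR uses the default name instance in the openshift-logging namespace, and <pvc_name> is a placeholder for a claim listed by the previous command.

# Delete the ClusterLogging custom resource.
$ oc delete clusterlogging instance -n openshift-logging

# List any persistent volume claims left behind in the logging namespace.
$ oc get pvc -n openshift-logging

# Delete a leftover claim; repeat for each PVC you do not want to keep.
$ oc delete pvc <pvc_name> -n openshift-logging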
[ "oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV", "currentCSV: serverless-operator.v1.28.0", "oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless", "subscription.operators.coreos.com \"serverless-operator\" deleted", "oc delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless", "clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/logging/cluster-logging-uninstall
Installing Red Hat Developer Hub on OpenShift Container Platform
Installing Red Hat Developer Hub on OpenShift Container Platform Red Hat Developer Hub 1.4 Red Hat Customer Content Services
[ "global: auth: backend: enabled: true clusterRouterBase: apps.<clusterName>.com # other Red Hat Developer Hub Helm Chart configurations", "Loaded config from app-config-from-configmap.yaml, env 2023-07-24T19:44:46.223Z auth info Configuring \"database\" as KeyStore provider type=plugin Backend failed to start up Error: Missing required config value at 'backend.database.client'", "NAMESPACE=<emphasis><rhdh></emphasis> new-project USD{NAMESPACE} || oc project USD{NAMESPACE}", "helm upgrade redhat-developer-hub -i https://github.com/openshift-helm-charts/charts/releases/download/redhat-redhat-developer-hub-1.4.2/redhat-developer-hub-1.4.2.tgz", "PASSWORD=USD(oc get secret redhat-developer-hub-postgresql -o jsonpath=\"{.data.password}\" | base64 -d) CLUSTER_ROUTER_BASE=USD(oc get route console -n openshift-console -o=jsonpath='{.spec.host}' | sed 's/^[^.]*\\.//') helm upgrade redhat-developer-hub -i \"https://github.com/openshift-helm-charts/charts/releases/download/redhat-redhat-developer-hub-1.4.2/redhat-developer-hub-1.4.2.tgz\" --set global.clusterRouterBase=\"USDCLUSTER_ROUTER_BASE\" --set global.postgresql.auth.password=\"USDPASSWORD\"", "echo \"https://redhat-developer-hub-USDNAMESPACE.USDCLUSTER_ROUTER_BASE\"" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html-single/installing_red_hat_developer_hub_on_openshift_container_platform/index
Chapter 68. role
Chapter 68. role This chapter describes the commands under the role command. 68.1. role add Adds a role assignment to a user or group on the system, a domain, or a project Usage: Table 68.1. Positional arguments Value Summary <role> Role to add to <user> (name or id) Table 68.2. Command arguments Value Summary -h, --help Show this help message and exit --system <system> Include <system> (all) --domain <domain> Include <domain> (name or id) --project <project> Include <project> (name or id) --user <user> Include <user> (name or id) --group <group> Include <group> (name or id) --group-domain <group-domain> Domain the group belongs to (name or id). this can be used in case collisions between group names exist. --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --inherited Specifies if the role grant is inheritable to the sub projects --role-domain <role-domain> Domain the role belongs to (name or id). this must be specified when the name of a domain specific role is used. 68.2. role assignment list List role assignments Usage: Table 68.3. Command arguments Value Summary -h, --help Show this help message and exit --effective Returns only effective role assignments --role <role> Role to filter (name or id) --role-domain <role-domain> Domain the role belongs to (name or id). this must be specified when the name of a domain specific role is used. --names Display names instead of ids --user <user> User to filter (name or id) --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --group <group> Group to filter (name or id) --group-domain <group-domain> Domain the group belongs to (name or id). this can be used in case collisions between group names exist. --domain <domain> Domain to filter (name or id) --project <project> Project to filter (name or id) --system <system> Filter based on system role assignments --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --inherited Specifies if the role grant is inheritable to the sub projects --auth-user Only list assignments for the authenticated user --auth-project Only list assignments for the project to which the authenticated user's token is scoped Table 68.4. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 68.5. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 68.6. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 68.7. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. 
implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 68.3. role create Create new role Usage: Table 68.8. Positional arguments Value Summary <role-name> New role name Table 68.9. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Add description about the role --domain <domain> Domain the role belongs to (name or id) --or-show Return existing role --immutable Make resource immutable. an immutable project may not be deleted or modified except to remove the immutable flag --no-immutable Make resource mutable (default) Table 68.10. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 68.11. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 68.12. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 68.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 68.4. role delete Delete role(s) Usage: Table 68.14. Positional arguments Value Summary <role> Role(s) to delete (name or id) Table 68.15. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain the role belongs to (name or id) 68.5. role list List roles Usage: Table 68.16. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Include <domain> (name or id) Table 68.17. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 68.18. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 68.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 68.20. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 68.6. role remove Removes a role assignment from system/domain/project : user/group Usage: Table 68.21. Positional arguments Value Summary <role> Role to remove (name or id) Table 68.22. 
Command arguments Value Summary -h, --help Show this help message and exit --system <system> Include <system> (all) --domain <domain> Include <domain> (name or id) --project <project> Include <project> (name or id) --user <user> Include <user> (name or id) --group <group> Include <group> (name or id) --group-domain <group-domain> Domain the group belongs to (name or id). this can be used in case collisions between group names exist. --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --inherited Specifies if the role grant is inheritable to the sub projects --role-domain <role-domain> Domain the role belongs to (name or id). this must be specified when the name of a domain specific role is used. 68.7. role set Set role properties Usage: Table 68.23. Positional arguments Value Summary <role> Role to modify (name or id) Table 68.24. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Add description about the role --domain <domain> Domain the role belongs to (name or id) --name <name> Set role name --immutable Make resource immutable. an immutable project may not be deleted or modified except to remove the immutable flag --no-immutable Make resource mutable (default) 68.8. role show Display role details Usage: Table 68.25. Positional arguments Value Summary <role> Role to display (name or id) Table 68.26. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain the role belongs to (name or id) Table 68.27. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 68.28. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 68.29. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 68.30. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
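The synopses above can be combined into a typical workflow. The following commands are a minimal sketch; the project demo, user alice, and role auditor are placeholder names, not defaults shipped with the service.

# Create a custom role with a description.
$ openstack role create --description "Read-only audit access" auditor

# Grant the role to a user on a project.
$ openstack role add --project demo --user alice auditor

# Confirm the assignment, displaying names instead of IDs.
$ openstack role assignment list --project demo --user alice --names

# Remove the assignment when it is no longer needed.
$ openstack role remove --project demo --user alice auditor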
[ "openstack role add [-h] [--system <system> | --domain <domain> | --project <project>] [--user <user> | --group <group>] [--group-domain <group-domain>] [--project-domain <project-domain>] [--user-domain <user-domain>] [--inherited] [--role-domain <role-domain>] <role>", "openstack role assignment list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--effective] [--role <role>] [--role-domain <role-domain>] [--names] [--user <user>] [--user-domain <user-domain>] [--group <group>] [--group-domain <group-domain>] [--domain <domain> | --project <project> | --system <system>] [--project-domain <project-domain>] [--inherited] [--auth-user] [--auth-project]", "openstack role create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--domain <domain>] [--or-show] [--immutable | --no-immutable] <role-name>", "openstack role delete [-h] [--domain <domain>] <role> [<role> ...]", "openstack role list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--domain <domain>]", "openstack role remove [-h] [--system <system> | --domain <domain> | --project <project>] [--user <user> | --group <group>] [--group-domain <group-domain>] [--project-domain <project-domain>] [--user-domain <user-domain>] [--inherited] [--role-domain <role-domain>] <role>", "openstack role set [-h] [--description <description>] [--domain <domain>] [--name <name>] [--immutable | --no-immutable] <role>", "openstack role show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] <role>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/role
Chapter 10. Directory Design Examples
Chapter 10. Directory Design Examples The design of the directory service depends on the size and nature of the enterprise. This chapter provides a couple of examples of how a directory can be applied within a variety of different settings. These examples are a starting point for developing a real-life directory service deployment plan. 10.1. Design Example: A Local Enterprise Example Corp., an automobile parts manufacturer, is a small company that consists of 500 employees. Example Corp. decides to deploy Red Hat Directory Server to support the directory-enabled applications it uses. 10.1.1. Local Enterprise Data Design Example Corp. first decides the type of data it will store in the directory. To do this, Example Corp. creates a deployment team that performs a site survey to determine how the directory will be used. The deployment team determines the following: Example Corp.'s directory will be used by a messaging server, a web server, a calendar server, a human resources application, and a white pages application. The messaging server performs exact searches on attributes such as uid , mailServerName , and mailAddress . To improve database performance, Example Corp. will maintain indexes for these attributes to support searches by the messaging server. For more information on using indexes, see Section 6.4, "Using Indexes to Improve Database Performance" . The white pages application frequently searches for user names and phone numbers. The directory therefore needs to be capable of frequent substring, wildcard, and fuzzy searches, which return large sets of results. Example Corp. decides to maintain presence, equality, approximate, and substring indexes for the cn , sn , and givenName attributes and presence, equality, and substring indexes for the telephoneNumber attribute. Example Corp.'s directory maintains user and group information to support an LDAP server-based intranet deployed throughout the organization. Most of Example Corp.'s user and group information will be centrally managed by a group of directory administrators. However, Example Corp. also wants email information to be managed by a separate group of mail administrators. Example Corp. plans to support public key infrastructure (PKI) applications in the future, such as S/MIME email, so it needs to be prepared to store users' public key certificates in the directory. 10.1.2. Local Enterprise Schema Design Example Corp.'s deployment team decides to use the inetOrgPerson object class to represent the entries in the directory. This object class is appealing because it allows the userCertificate and uid (userID) attributes, both of which are needed by the applications supported by Example Corp.'s directory. Example Corp. also wants to customize the default directory schema. Example Corp. creates the examplePerson object class to represent employees of Example Corp. It derives this object class from the inetOrgPerson object class. The examplePerson object class allows one attribute, the exampleID attribute. This attribute contains the special employee number assigned to each Example Corp. employee. In the future, Example Corp. can add new attributes to the examplePerson object class as needed. 10.1.3. Local Enterprise Directory Tree Design Based on the data and schema design described in the preceding sections, Example Corp. creates the following directory tree: The root of the directory tree is Example Corp.'s Internet domain name: dc=example,dc=com . 
The directory tree has four branch points: ou=people , ou=groups , ou=roles , and ou=resources . All of Example Corp.'s people entries are created under the ou=people branch. The people entries are all members of the person , organizationalPerson , inetOrgPerson , and examplePerson object classes. The uid attribute uniquely identifies each entry's DN. For example, Example Corp. contains entries for Babs Jensen ( uid=bjensen) and Emily Stanton ( uid=estanton ). They create three roles, one for each department in Example Corp.: sales, marketing, and accounting. Each person entry contains a role attribute which identifies the department to which the person belongs. Example Corp. can now create ACIs based on these roles. For more information about roles, see Section 4.3.2, "About Roles" . They create two group branches under the ou=groups branch. The first group, cn=administrators , contains entries for the directory administrators, who manage the directory contents. The second group, cn=messaging admin , contains entries for the mail administrators, who manage mail accounts. This group corresponds to the administrators group used by the messaging server. Example Corp. ensures that the group it configures for the messaging server is different from the group it creates for Directory Server. They create two branches under the ou=resources branch, one for conference rooms ( ou=conference rooms ) and one for offices ( ou=offices ). They create a class of service (CoS) that provides values for the mailquota attribute depending on whether an entry belongs to the administrative group. This CoS gives administrators a mail quota of 100GB while ordinary Example Corp. employees have a mail quota of 5GB. See Section 5.3, "About Classes of Service" for more information about class of service. The following diagram illustrates the directory tree resulting from the design steps listed above: Figure 10.1. Directory Tree for Example Corp. 10.1.4. Local Enterprise Topology Design At this point, Example Corp. needs to design its database and server topologies. The following sections describe each topology in detail. 10.1.4.1. Database Topology The company designs a database topology in which the people branch is stored in one database (DB1), the groups branch is stored in another database (DB2), and the resources branch, roles branch, and the root suffix information are stored in a third database (DB3). This is illustrated in Figure 10.2, "Database Topology for Example Corp." . Figure 10.2. Database Topology for Example Corp. Each of the two supplier servers updates all three consumer servers in Example Corp.'s deployment of Directory Server. These consumers supply data to one messaging server and to the other unified user management products. Figure 10.3. Server Topology for Example Corp. Modify requests from compatible servers are routed to the appropriate consumer server. The consumer server uses smart referrals to route the request to the supplier server responsible for the main copy of the data being modified. 10.1.5. Local Enterprise Replication Design Example Corp. decides to use a multi-supplier replication design to ensure the high availability of its directory data. For more information about multi-supplier replication, see Section 7.2.2, "Multi-Supplier Replication" . The following sections provide more details about the supplier server architecture and the supplier-consumer server topology. 10.1.5.1. Supplier Architecture Example Corp. 
uses two supplier servers in a multi-supplier replication architecture. The suppliers update one another so that the directory data remains consistent. The supplier architecture for Example Corp. is described below: Figure 10.4. Supplier Architecture for Example Corp. 10.1.5.2. Supplier Consumer Architecture The following diagram describes how the supplier servers replicate to each consumer in the Example Corp. deployment of the directory. Each of the three consumer servers is updated by the two supplier servers. This ensures that the consumers will not be affected if there is a failure in one of the supplier servers. Figure 10.5. Supplier and Consumer Architecture for Example Corp. 10.1.6. Local Enterprise Security Design Example Corp. decides on the following security design to protect its directory data: They create an ACI that allows employees to modify their own entries. Users can modify all attributes except the uid , manager and department attributes. To protect the privacy of employee data, they create an ACI that allows only the employee and their manager to see the employee's home address and phone number. They create an ACI at the root of the directory tree that allows the two administrator groups the appropriate directory permissions. The directory administrators group needs full access to the directory. The messaging administrators group needs write and delete access to the mailRecipient and mailGroup object classes and the attributes contained on those object classes, as well as the mail attribute. Example Corp. also grants the messaging administrators group write , delete , and add permissions to the group subdirectory for creation of mail groups. They create a general ACI at the root of the directory tree that allows anonymous access for read, search, and compare access. This ACI denies anonymous write access to password information. To protect the server from denial of service attacks and inappropriate use, they set resource limits based on the DN used by directory clients to bind. Example Corp. allows anonymous users to receive 100 entries at a time in response to search requests, messaging administrative users to receive 1,000 entries, and directory administrators to receive an unlimited number of entries. For more information about setting resource limits based on the bind DN, see the "User Account Management" chapter in the Red Hat Directory Server Administrator's Guide . They create a password policy which specifies that passwords must be at least eight characters in length and expire after 90 days. For more information about password policies, see Section 9.6, "Designing a Password Policy" . They create an ACI that gives members of the accounting role access to all payroll information. 10.1.7. Local Enterprise Operations Decisions The company makes the following decisions regarding the day-to-day operation of its directory: Back up the databases every night. Use SNMP to monitor the server status. For more information about SNMP, see the Red Hat Directory Server Administrator's Guide . Auto-rotate the access and error logs. Monitor the error log to ensure that the server is performing as expected. Monitor the access log to screen for searches that should be indexed. For more information about the access, error, and audit logs, see the "Monitoring Server and Database Activity" chapter in the Red Hat Directory Server Administrator's Guide .
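The custom schema described in Section 10.1.2 could be added to the directory as a schema modification. The following LDIF is a minimal illustrative sketch, not taken from the Example Corp. deployment: the descriptive OID placeholders ( exampleID-oid , examplePerson-oid ) and the attribute descriptions are assumptions, and a real deployment would substitute OIDs allocated to the organization. It defines the exampleID attribute and derives the examplePerson object class from inetOrgPerson :
# Illustrative schema extension (placeholder OIDs)
dn: cn=schema
changetype: modify
add: attributetypes
attributetypes: ( exampleID-oid NAME 'exampleID' DESC 'Example Corp. employee number' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE X-ORIGIN 'Example Corp.' )
-
add: objectclasses
objectclasses: ( examplePerson-oid NAME 'examplePerson' DESC 'Example Corp. employee entry' SUP inetOrgPerson STRUCTURAL MAY ( exampleID ) X-ORIGIN 'Example Corp.' )
An entry such as uid=bjensen could then list examplePerson among its objectClass values and carry an exampleID value alongside its standard inetOrgPerson attributes.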
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/directory_design_examples
Chapter 5. Inventory File Importing
Chapter 5. Inventory File Importing With automation controller you can select an inventory file from source control, rather than creating one from scratch. The files are non-editable, and as inventories are updated at the source, the inventories within the projects are also updated accordingly, including the group_vars and host_vars files or directory associated with them. SCM types can consume both inventory files and scripts. Both inventory files and custom inventory types use scripts. Imported hosts have a description of imported by default. This can be overridden by setting the _awx_description variable on a given host. For example, if importing from a sourced .ini file, you can add the following host variables: [main] 127.0.0.1 _awx_description="my host 1" 127.0.0.2 _awx_description="my host 2" Similarly, group descriptions also default to imported , but can also be overridden by _awx_description . To use old inventory scripts in source control, see Export old inventory scripts in Using automation execution . 5.1. Source control management Inventory Source Fields The source fields used are: source_project : the project to use. source_path : the relative path inside the project indicating a directory or a file. If left blank, "" is still a relative path indicating the root directory of the project. source_vars : if set on a "file" type inventory source then they are passed to the environment variables when running. Additionally: An update of the project automatically triggers an inventory update where it is used. An update of the project is scheduled immediately after creation of the inventory source. Neither inventory nor project updates are blocked while a related job is running. In cases where you have a large project (around 10 GB), disk space on /tmp can be an issue. You can specify a location manually in the automation controller UI from the Add source page of an inventory. Refer to Adding a source for instructions on creating an inventory source. When you update a project, refresh the listing to use the latest source control management (SCM) information. If no inventory sources use a project as an SCM inventory source, then the inventory listing might not be refreshed on update. For inventories with SCM sources, the job Details page for inventory updates displays a status indicator for the project update and the name of the project. The status indicator links to the project update job. The project name links to the project. You can perform an inventory update while a related job is running. 5.1.1. Supported File Syntax Automation controller uses the ansible-inventory module from Ansible to process inventory files, and supports all valid inventory syntax that automation controller requires. Important You do not need to write inventory scripts in Python. You can enter any executable file in the source field and must run chmod +x for that file and check it into Git. The following is a working example of JSON output that automation controller can read for the import: { "_meta": { "hostvars": { "host1": { "fly_rod": true } } }, "all": { "children": [ "groupA", "ungrouped" ] }, "groupA": { "hosts": [ "host1", "host10", "host11", "host12", "host13", "host14", "host15", "host16", "host17", "host18", "host19", "host2", "host20", "host21", "host22", "host23", "host24", "host25", "host3", "host4", "host5", "host6", "host7", "host8", "host9" ] } } Additional resources For examples of inventory files, see test-playbooks/inventories . 
For an example of an inventory script inside of that, see inventories/changes.py . For information about how to implement the inventory script, see the support article, How to migrate inventory scripts from Red Hat Ansible tower to Red Hat Ansible Automation Platform? .
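As an illustration of the executable-file option described above, the following is a minimal sketch of a dynamic inventory script. It is not taken from the product documentation, and the host and group names are placeholders. It follows the usual dynamic inventory convention of printing JSON for --list and an empty object for --host , because per-host variables are already supplied under _meta :
#!/usr/bin/env python3
# Illustrative dynamic inventory script (hypothetical hosts and groups).
# Make it executable before checking it in: chmod +x inventory_example.py
import json
import sys

INVENTORY = {
    "_meta": {"hostvars": {"host1": {"fly_rod": True}}},
    "all": {"children": ["groupA", "ungrouped"]},
    "groupA": {"hosts": ["host1", "host2"]},
}

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--host":
        # Host variables are already returned under _meta, so this can stay empty.
        print(json.dumps({}))
    else:
        print(json.dumps(INVENTORY, indent=2))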
[ "[main] 127.0.0.1 _awx_description=\"my host 1\" 127.0.0.2 _awx_description=\"my host 2\"", "{ \"_meta\": { \"hostvars\": { \"host1\": { \"fly_rod\": true } } }, \"all\": { \"children\": [ \"groupA\", \"ungrouped\" ] }, \"groupA\": { \"hosts\": [ \"host1\", \"host10\", \"host11\", \"host12\", \"host13\", \"host14\", \"host15\", \"host16\", \"host17\", \"host18\", \"host19\", \"host2\", \"host20\", \"host21\", \"host22\", \"host23\", \"host24\", \"host25\", \"host3\", \"host4\", \"host5\", \"host6\", \"host7\", \"host8\", \"host9\" ] } }" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/configuring_automation_execution/assembly-inventory-file-importing
11.5.2. Software RAID
11.5.2. Software RAID You can use the Red Hat Enterprise Linux installation program to create Linux software RAID arrays, where RAID functions are controlled by the operating system rather than dedicated hardware. These functions are explained in detail in Section 16.17, " Creating a Custom Layout or Modifying the Default Layout " .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-partitioning-raid-sw-ppc
Chapter 24. Downgrading AMQ Streams
Chapter 24. Downgrading AMQ Streams If you are encountering issues with the version of AMQ Streams you upgraded to, you can revert your installation to the previous version. If you used the YAML installation files to install AMQ Streams, you can use the YAML installation files from the previous release to perform the following downgrade procedures: Section 24.1, "Downgrading the Cluster Operator to a previous version" Section 24.2, "Downgrading Kafka" If the previous version of AMQ Streams does not support the version of Kafka you are using, you can also downgrade Kafka as long as the log message format versions appended to messages match. Warning The following downgrade instructions are only suitable if you installed AMQ Streams using the installation files. If you installed AMQ Streams using another method, like OperatorHub, downgrade may not be supported by that method unless specified in its documentation. To ensure a successful downgrade process, it is essential to use a supported approach. 24.1. Downgrading the Cluster Operator to a previous version If you are encountering issues with AMQ Streams, you can revert your installation. This procedure describes how to downgrade a Cluster Operator deployment to a previous version. Prerequisites An existing Cluster Operator deployment is available. You have downloaded the installation files for the previous version . Before you begin Check the downgrade requirements of the AMQ Streams feature gates . If a feature gate is permanently enabled, you may need to downgrade to a version that allows you to disable it before downgrading to your target version. Procedure Take note of any configuration changes made to the existing Cluster Operator resources (in the /install/cluster-operator directory). Any changes will be overwritten by the previous version of the Cluster Operator. Revert your custom resources to reflect the supported configuration options available for the version of AMQ Streams you are downgrading to. Update the Cluster Operator. Modify the installation files for the previous version according to the namespace the Cluster Operator is running in. On Linux, use: On MacOS, use: If you modified one or more environment variables in your existing Cluster Operator Deployment , edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to use those environment variables. When you have an updated configuration, deploy it along with the rest of the installation resources: oc replace -f install/cluster-operator Wait for the rolling updates to complete. Get the image for the Kafka pod to ensure the downgrade was successful: oc get pod my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}' The image tag shows the new AMQ Streams version followed by the Kafka version. For example, NEW-STRIMZI-VERSION -kafka- CURRENT-KAFKA-VERSION . Your Cluster Operator was downgraded to the previous version. 24.2. Downgrading Kafka Kafka version downgrades are performed by the Cluster Operator. 24.2.1. Kafka version compatibility for downgrades Kafka downgrades are dependent on compatible current and target Kafka versions , and the state at which messages have been logged. You cannot revert to the previous Kafka version if that version does not support any of the inter.broker.protocol.version settings which have ever been used in that cluster, or messages have been added to message logs that use a newer log.message.format.version . The inter.broker.protocol.version determines the schemas used for persistent metadata stored by the broker, such as the schema for messages written to __consumer_offsets .
If you downgrade to a version of Kafka that does not understand an inter.broker.protocol.version that has ever been previously used in the cluster, the broker will encounter data it cannot understand. If the target downgrade version of Kafka has: The same log.message.format.version as the current version, the Cluster Operator downgrades by performing a single rolling restart of the brokers. A different log.message.format.version , downgrading is only possible if the running cluster has always had log.message.format.version set to the version used by the downgraded version. This is typically only the case if the upgrade procedure was aborted before the log.message.format.version was changed. In this case, the downgrade requires: Two rolling restarts of the brokers if the interbroker protocol of the two versions is different A single rolling restart if they are the same Downgrading is not possible if the new version has ever used a log.message.format.version that is not supported by the previous version, including when the default value for log.message.format.version is used. For example, this resource can be downgraded to Kafka version 3.4.0 because the log.message.format.version has not been changed: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: version: 3.5.0 config: log.message.format.version: "3.4" # ... The downgrade would not be possible if the log.message.format.version was set at "3.5" or a value was absent, so that the parameter took the default value for a 3.5.0 broker of 3.5. Important From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn't need to be set. 24.2.2. Downgrading Kafka brokers and client applications Downgrade an AMQ Streams Kafka cluster to a lower (previous) version of Kafka, such as downgrading from 3.5.0 to 3.4.0. Prerequisites The Cluster Operator is up and running. Before you downgrade the AMQ Streams Kafka cluster, check the following for the Kafka resource: IMPORTANT: Compatibility of Kafka versions . Kafka.spec.kafka.config does not contain options that are not supported by the Kafka version being downgraded to. Kafka.spec.kafka.config has a log.message.format.version and inter.broker.protocol.version that is supported by the Kafka version being downgraded to. From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn't need to be set. Procedure Update the Kafka cluster configuration. oc edit kafka KAFKA-CONFIGURATION-FILE Change the Kafka.spec.kafka.version to specify the previous version. For example, if downgrading from Kafka 3.5.0 to 3.4.0: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: version: 3.4.0 1 config: log.message.format.version: "3.4" 2 inter.broker.protocol.version: "3.4" 3 # ... 1 Kafka version is changed to the previous version. 2 Message format version is unchanged. 3 Inter-broker protocol version is unchanged. Note The value of log.message.format.version and inter.broker.protocol.version must be strings to prevent them from being interpreted as floating point numbers. If the image for the Kafka version is different from the image defined in STRIMZI_KAFKA_IMAGES for the Cluster Operator, update Kafka.spec.kafka.image . See Section 23.5.3, "Kafka version and image mappings" Save and exit the editor, then wait for rolling updates to complete.
Check the update in the logs or by watching the pod state transitions: oc logs -f CLUSTER-OPERATOR-POD-NAME | grep -E "Kafka version downgrade from [0-9.]+ to [0-9.]+, phase ([0-9]+) of \1 completed" oc get pod -w Check the Cluster Operator logs for an INFO level message: Reconciliation # NUM (watch) Kafka( NAMESPACE / NAME ): Kafka version downgrade from FROM-VERSION to TO-VERSION , phase 1 of 1 completed Downgrade all client applications (consumers) to use the previous version of the client binaries. The Kafka cluster and clients are now using the previous Kafka version. If you are reverting back to a version of AMQ Streams earlier than 1.7, which uses ZooKeeper for the storage of topic metadata, delete the internal topic store topics from the Kafka cluster. oc run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete
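Before editing the Kafka resource as described in the procedure above, it can help to confirm which version settings the cluster is currently using. The following is an illustrative sketch only; my-cluster is a placeholder for the name of your Kafka resource, and the grep filter simply narrows the output to the version-related fields discussed in this chapter:
# List the version-related fields currently set in the Kafka resource (placeholder resource name)
oc get kafka my-cluster -o yaml | grep -E 'version:'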
[ "sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace/' install/cluster-operator/*RoleBinding*.yaml", "replace -f install/cluster-operator", "get pod my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.5.0 config: log.message.format.version: \"3.4\" #", "edit kafka KAFKA-CONFIGURATION-FILE", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 3.4.0 1 config: log.message.format.version: \"3.4\" 2 inter.broker.protocol.version: \"3.4\" 3 #", "logs -f CLUSTER-OPERATOR-POD-NAME | grep -E \"Kafka version downgrade from [0-9.]+ to [0-9.]+, phase ([0-9]+) of \\1 completed\"", "get pod -w", "Reconciliation # NUM (watch) Kafka( NAMESPACE / NAME ): Kafka version downgrade from FROM-VERSION to TO-VERSION , phase 1 of 1 completed", "run kafka-admin -ti --image=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/assembly-downgrade-str
9.4. autofs
9.4. autofs One drawback to using /etc/fstab is that, regardless of how infrequently a user accesses the NFS mounted file system, the system must dedicate resources to keep the mounted file system in place. This is not a problem with one or two mounts, but when the system is maintaining mounts to many systems at one time, overall system performance can be affected. An alternative to /etc/fstab is to use the kernel-based automount utility. An automounter consists of two components: a kernel module that implements a file system, and a user-space daemon that performs all of the other functions. The automount utility can mount and unmount NFS file systems automatically (on-demand mounting), therefore saving system resources. It can be used to mount other file systems including AFS, SMBFS, CIFS, and local file systems. Important The nfs-utils package is now a part of both the 'NFS file server' and the 'Network File System Client' groups. As such, it is no longer installed by default with the Base group. Ensure that nfs-utils is installed on the system first before attempting to automount an NFS share. autofs is also part of the 'Network File System Client' group. autofs uses /etc/auto.master (master map) as its default primary configuration file. This can be changed to use another supported network source and name using the autofs configuration (in /etc/sysconfig/autofs ) in conjunction with the Name Service Switch (NSS) mechanism. An instance of the autofs version 4 daemon was run for each mount point configured in the master map and so it could be run manually from the command line for any given mount point. This is not possible with autofs version 5, because it uses a single daemon to manage all configured mount points; as such, all automounts must be configured in the master map. This is in line with the usual requirements of other industry standard automounters. Mount point, hostname, exported directory, and options can all be specified in a set of files (or other supported network sources) rather than configuring them manually for each host. 9.4.1. Improvements in autofs Version 5 over Version 4 autofs version 5 features the following enhancements over version 4: Direct map support Direct maps in autofs provide a mechanism to automatically mount file systems at arbitrary points in the file system hierarchy. A direct map is denoted by a mount point of /- in the master map. Entries in a direct map contain an absolute path name as a key (instead of the relative path names used in indirect maps). Lazy mount and unmount support Multi-mount map entries describe a hierarchy of mount points under a single key. A good example of this is the -hosts map, commonly used for automounting all exports from a host under /net/ host as a multi-mount map entry. When using the -hosts map, an ls of /net/ host will mount autofs trigger mounts for each export from host . These will then mount and expire them as they are accessed. This can greatly reduce the number of active mounts needed when accessing a server with a large number of exports. Enhanced LDAP support The autofs configuration file ( /etc/sysconfig/autofs ) provides a mechanism to specify the autofs schema that a site implements, thus precluding the need to determine this via trial and error in the application itself. In addition, authenticated binds to the LDAP server are now supported, using most mechanisms supported by the common LDAP server implementations. A new configuration file has been added for this support: /etc/autofs_ldap_auth.conf . 
The default configuration file is self-documenting, and uses an XML format. Proper use of the Name Service Switch ( nsswitch ) configuration. The Name Service Switch configuration file exists to provide a means of determining from where specific configuration data comes. The reason for this configuration is to allow administrators the flexibility of using the back-end database of choice, while maintaining a uniform software interface to access the data. While the version 4 automounter is becoming increasingly better at handling the NSS configuration, it is still not complete. Autofs version 5, on the other hand, is a complete implementation. Refer to man nsswitch.conf for more information on the supported syntax of this file. Not all NSS databases are valid map sources and the parser will reject ones that are invalid. Valid sources are files, yp , nis , nisplus , ldap , and hesiod . Multiple master map entries per autofs mount point One thing that is frequently used but not yet mentioned is the handling of multiple master map entries for the direct mount point /- . The map keys for each entry are merged and behave as one map. Example 9.2. Multiple master map entries per autofs mount point An example is seen in the connectathon test maps for the direct mounts below:
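As an illustration of the map formats described above, the following sketch shows one indirect and one direct mount managed by a single autofs version 5 daemon. The server name and export paths are placeholders, not values taken from this guide:
# /etc/auto.master (master map): mount point followed by the map that serves it
/misc   /etc/auto.misc
/-      /etc/auto.direct

# /etc/auto.misc (indirect map): keys are relative to /misc
data    -rw,soft    fileserver.example.com:/export/data

# /etc/auto.direct (direct map): keys are absolute path names
/srv/projects    -rw,soft    fileserver.example.com:/export/projects
With this configuration, accessing /misc/data or /srv/projects triggers the corresponding NFS mount on demand, and the automounter expires the mount again after a period of inactivity.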
[ "/- /tmp/auto_dcthon /- /tmp/auto_test3_direct /- /tmp/auto_test4_direct" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/nfs-autofs
Chapter 32. Stress testing real-time systems with stress-ng
Chapter 32. Stress testing real-time systems with stress-ng The stress-ng tool measures the system's capability to maintain a good level of efficiency under unfavorable conditions. The stress-ng tool is a stress workload generator to load and stress all kernel interfaces. It includes a wide range of stress mechanisms known as stressors. Stress testing makes a machine work hard and trip hardware issues such as thermal overruns and operating system bugs that occur when a system is being overworked. There are over 270 different tests. These include CPU specific tests that exercise floating point, integer, bit manipulation, control flow, and virtual memory tests. Note Use the stress-ng tool with caution as some of the tests can impact the system's thermal zone trip points on poorly designed hardware. This can impact system performance and cause excessive system thrashing which can be difficult to stop. 32.1. Testing CPU floating point units and processor data cache A floating point unit is the functional part of the processor that performs floating point arithmetic operations. Floating point units handle mathematical operations and make floating numbers or decimal calculations simpler. Using the --matrix-method option, you can stress test the CPU floating point operations and processor data cache. Prerequisites You have root permissions on the system Procedure To test the floating point on one CPU for 60 seconds, use the --matrix option: To run multiple stressors on more than one CPU for 60 seconds, use the --times or -t option: # stress-ng --matrix 0 -t 1m stress-ng --matrix 0 -t 1m --times stress-ng: info: [16783] dispatching hogs: 4 matrix stress-ng: info: [16783] successful run completed in 60.00s (1 min, 0.00 secs) stress-ng: info: [16783] for a 60.00s run time: stress-ng: info: [16783] 240.00s available CPU time stress-ng: info: [16783] 205.21s user time ( 85.50%) stress-ng: info: [16783] 0.32s system time ( 0.13%) stress-ng: info: [16783] 205.53s total time ( 85.64%) stress-ng: info: [16783] load average: 3.20 1.25 1.40 The special mode with 0 stressors queries the available CPUs to run on, removing the need to specify the CPU number. The total CPU time required is 4 x 60 seconds (240 seconds), of which 0.13% is in the kernel, 85.50% is in user time, and stress-ng runs 85.64% of all the CPUs. To test message passing between processes using a POSIX message queue, use the --mq option: # stress-ng --mq 0 -t 30s --times --perf The --mq option configures a specific number of processes to force context switches using the POSIX message queue. This stress test aims for low data cache misses. 32.2. Testing CPU with multiple stress mechanisms The stress-ng tool runs multiple stress tests. In the default mode, it runs the specified stressor mechanisms in parallel. Prerequisites You have root privileges on the system Procedure Run multiple instances of CPU stressors as follows: # stress-ng --cpu 2 --matrix 1 --mq 3 -t 5m In the example, stress-ng runs two instances of the CPU stressors, one instance of the matrix stressor and three instances of the message queue stressor to test for five minutes. To run all stress tests in parallel, use the --all option: In this example, stress-ng runs two instances of all stress tests in parallel. To run each different stressor in a specific sequence, use the --seq option. In this example, stress-ng runs four instances of each stressor in sequence, one stressor at a time, for 20 seconds each.
To exclude specific stressors from a test run, use the -x option: # stress-ng --seq 1 -x numa,matrix,hdd In this example, stress-ng runs all stressors, one instance of each, excluding the numa , matrix , and hdd stressor mechanisms. 32.3. Measuring CPU heat generation To measure the CPU heat generation, the specified stressors generate high temperatures for a short time duration to test the system's cooling reliability and stability under maximum heat generation. Using the --matrix-size option, you can measure CPU temperatures in degrees Celsius over a short time duration. Prerequisites You have root privileges on the system. Procedure To test the CPU behavior at high temperatures for a specified time duration, run the following command: # stress-ng --matrix 0 --matrix-size 64 --tz -t 60 stress-ng: info: [18351] dispatching hogs: 4 matrix stress-ng: info: [18351] successful run completed in 60.00s (1 min, 0.00 secs) stress-ng: info: [18351] matrix: stress-ng: info: [18351] x86_pkg_temp 88.00 degC stress-ng: info: [18351] acpitz 87.00 degC In this example, stress-ng runs the matrix stressor for 60 seconds, and the processor package thermal zone reaches 88 degrees Celsius. Optional: To print a report at the end of a run, use the --tz option: # stress-ng --cpu 0 --tz -t 60 stress-ng: info: [18065] dispatching hogs: 4 cpu stress-ng: info: [18065] successful run completed in 60.07s (1 min, 0.07 secs) stress-ng: info: [18065] cpu: stress-ng: info: [18065] x86_pkg_temp 88.75 degC stress-ng: info: [18065] acpitz 88.38 degC 32.4. Measuring test outcomes with bogo operations The stress-ng tool can measure a stress test throughput by measuring the bogo operations per second. The size of a bogo operation depends on the stressor being run. The test outcomes are not precise, but they provide a rough estimate of the performance. You must not use this measurement as an accurate benchmark metric. These estimates help to understand the system performance changes on different kernel versions or different compiler versions used to build stress-ng . Use the --metrics-brief option to display the total available bogo operations and the matrix stressor performance on your machine. Prerequisites You have root privileges on the system. Procedure To measure test outcomes with bogo operations, use the --metrics-brief option: # stress-ng --matrix 0 -t 60s --metrics-brief stress-ng: info: [17579] dispatching hogs: 4 matrix stress-ng: info: [17579] successful run completed in 60.01s (1 min, 0.01 secs) stress-ng: info: [17579] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s stress-ng: info: [17579] (secs) (secs) (secs) (real time) (usr+sys time) stress-ng: info: [17579] matrix 349322 60.00 203.23 0.19 5822.03 1717.25 The --metrics-brief option displays the test outcomes and the total real-time bogo operations run by the matrix stressor for 60 seconds. 32.5. Generating virtual memory pressure When under memory pressure, the kernel starts writing pages out to swap. You can stress the virtual memory by using the --page-in option to force non-resident pages to swap back into the virtual memory. This causes the virtual memory to be heavily exercised. Using the --page-in option, you can enable this mode for the bigheap , mmap and virtual memory ( vm ) stressors. The --page-in option touches allocated pages that are not in core, forcing them to page in. Prerequisites You have root privileges on the system.
Procedure To stress test virtual memory, use the --page-in option: In this example, stress-ng tests memory pressure on a system with 4GB of memory, which is less than the total allocated buffer size: 2 x 2GB for the vm stressor and 2 x 2GB for the mmap stressor, with --page-in enabled. 32.6. Testing large interrupt loads on a device Running timers at high frequency can generate a large interrupt load. The --timer stressor with an appropriately selected timer frequency can force many interrupts per second. Prerequisites You have root permissions on the system. Procedure To generate an interrupt load, use the --timer option: In this example, stress-ng tests 32 instances at 1MHz. 32.7. Generating major page faults in a program With stress-ng , you can test and analyze the page fault rate by generating major page faults in pages that are not loaded in memory. On new kernel versions, the userfaultfd mechanism notifies the fault finding threads about the page faults in the virtual memory layout of a process. Prerequisites You have root permissions on the system. Procedure To generate major page faults on early kernel versions, use: # stress-ng --fault 0 --perf -t 1m To generate major page faults on new kernel versions, use: # stress-ng --userfaultfd 0 --perf -t 1m 32.8. Viewing CPU stress test mechanisms The CPU stress test contains methods to exercise a CPU. You can print a list of all available methods using the which option. If you do not specify a test method, by default the stressor cycles through all the methods in a round-robin fashion to test the CPU with each of them. Prerequisites You have root permissions on the system. Procedure To print all available stressor mechanisms, use the which option: Specify a specific CPU stress method using the --cpu-method option: 32.9. Using the verify mode The verify mode validates the results when a test is active. It sanity checks the memory contents from a test run and reports any unexpected failures. Not all stressors have a verify mode, and enabling it will reduce the bogo operation statistics because of the extra verification step being run in this mode. Prerequisites You have root permissions on the system. Procedure To validate stress test results, use the --verify option: # stress-ng --vm 1 --vm-bytes 2G --verify -v In this example, stress-ng prints the output for an exhaustive memory check on virtually mapped memory using the vm stressor configured with --verify mode. It sanity checks the read and write results on the memory.
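The options covered in this chapter can also be combined in a single run. The following is an illustrative sketch rather than a recommended workload; adjust the instance counts and buffer sizes to the memory available on your system:
# stress-ng --vm 2 --vm-bytes 1G --page-in --verify --metrics-brief -t 60s
This runs two vm stressors with --page-in forcing non-resident pages back into memory, verifies the memory contents as the test runs, and prints a brief bogo operations summary after 60 seconds.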
[ "stress-ng --matrix 1 -t 1m", "stress-ng --matrix 0 -t 1m stress-ng --matrix 0 -t 1m --times stress-ng: info: [16783] dispatching hogs: 4 matrix stress-ng: info: [16783] successful run completed in 60.00s (1 min, 0.00 secs) stress-ng: info: [16783] for a 60.00s run time: stress-ng: info: [16783] 240.00s available CPU time stress-ng: info: [16783] 205.21s user time ( 85.50%) stress-ng: info: [16783] 0.32s system time ( 0.13%) stress-ng: info: [16783] 205.53s total time ( 85.64%) stress-ng: info: [16783] load average: 3.20 1.25 1.40", "stress-ng --mq 0 -t 30s --times --perf", "stress-ng --cpu 2 --matrix 1 --mq 3 -t 5m", "stress-ng --all 2", "stress-ng --seq 4 -t 20", "stress-ng --seq 1 -x numa,matrix,hdd", "stress-ng --matrix 0 --matrix-size 64 --tz -t 60 stress-ng: info: [18351] dispatching hogs: 4 matrix stress-ng: info: [18351] successful run completed in 60.00s (1 min, 0.00 secs) stress-ng: info: [18351] matrix: stress-ng: info: [18351] x86_pkg_temp 88.00 degC stress-ng: info: [18351] acpitz 87.00 degC", "stress-ng --cpu 0 --tz -t 60 stress-ng: info: [18065] dispatching hogs: 4 cpu stress-ng: info: [18065] successful run completed in 60.07s (1 min, 0.07 secs) stress-ng: info: [18065] cpu: stress-ng: info: [18065] x86_pkg_temp 88.75 degC stress-ng: info: [18065] acpitz 88.38 degC", "stress-ng --matrix 0 -t 60s --metrics-brief stress-ng: info: [17579] dispatching hogs: 4 matrix stress-ng: info: [17579] successful run completed in 60.01s (1 min, 0.01 secs) stress-ng: info: [17579] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s stress-ng: info: [17579] (secs) (secs) (secs) (real time) (usr+sys time) stress-ng: info: [17579] matrix 349322 60.00 203.23 0.19 5822.03 1717.25", "stress-ng --vm 2 --vm-bytes 2G --mmap 2 --mmap-bytes 2G --page-in", "stress-ng --timer 32 --timer-freq 1000000", "stress-ng --fault 0 --perf -t 1m", "stress-ng --userfaultfd 0 --perf -t 1m", "stress-ng --cpu-method which cpu-method must be one of: all ackermann bitops callfunc cdouble cfloat clongdouble correlate crc16 decimal32 decimal64 decimal128 dither djb2a double euler explog fft fibonacci float fnv1a gamma gcd gray hamming hanoi hyperbolic idct int128 int64 int32", "stress-ng --cpu 1 --cpu-method fft -t 1m", "stress-ng --vm 1 --vm-bytes 2G --verify -v" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_stress-testing-real-time-systems-with-stress-ng_optimizing-rhel9-for-real-time-for-low-latency-operation
Chapter 50. Case management
Chapter 50. Case management Case management is an extension of Business Process Management (BPM) that enables you to manage adaptable business processes. BPM is a management practice used to automate tasks that are repeatable and have a common pattern, with a focus on optimization by perfecting a process. Business processes are usually modeled with clearly defined paths leading to a business goal. This requires a lot of predictability, usually based on mass-production principles. However, many real-world applications cannot be described completely from start to finish (including all possible paths, deviations, and exceptions). Using a process-oriented approach in certain cases can lead to complex solutions that are hard to maintain. Case management provides problem resolution for non-repeatable, unpredictable processes as opposed to the efficiency-oriented approach of BPM for routine, predictable tasks. It manages one-off situations when the process cannot be predicted in advance. A case definition usually consists of loosely coupled process fragments that can be connected directly or indirectly to lead to certain milestones and ultimately a business goal, while the process is managed dynamically in response to changes that occur during run time. In Red Hat Process Automation Manager, case management includes the following core process engine features: Case file instance A per case runtime strategy Case comments Milestones Stages Ad hoc fragments Dynamic tasks and processes Case identifier (correlation key) Case lifecycle (close, reopen, cancel, destroy) A case definition is always an ad hoc process definition and does not require an explicit start node. The case definition is the main entry point for the business use case. A process definition is introduced as a supporting construct of the case and can be invoked either as defined in the case definition or dynamically to bring in additional processing when required. A case definition defines the following new objects: Activities (required) Case file (required) Milestones Roles Stages
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/case-management-overview-con_case-management-showcase
3.4. Exclusive Activation of a Volume Group in a Cluster
3.4. Exclusive Activation of a Volume Group in a Cluster The following procedure configures the LVM volume group in a way that will ensure that only the cluster is capable of activating the volume group, and that the volume group will not be activated outside of the cluster on startup. If the volume group is activated by a system outside of the cluster, there is a risk of corrupting the volume group's metadata. This procedure modifies the volume_list entry in the /etc/lvm/lvm.conf configuration file. Volume groups listed in the volume_list entry are allowed to automatically activate on the local node outside of the cluster manager's control. Volume groups related to the node's local root and home directories should be included in this list. All volume groups managed by the cluster manager must be excluded from the volume_list entry. Note that this procedure does not require the use of clvmd . Perform the following procedure on each node in the cluster. Execute the following command to ensure that locking_type is set to 1 and that use_lvmetad is set to 0 in the /etc/lvm/lvm.conf file. This command also disables and stops any lvmetad processes immediately. Determine which volume groups are currently configured on your local storage with the following command. This will output a list of the currently-configured volume groups. If you have space allocated in separate volume groups for root and for your home directory on this node, you will see those volumes in the output, as in this example. Add the volume groups other than my_vg (the volume group you have just defined for the cluster) as entries to volume_list in the /etc/lvm/lvm.conf configuration file. For example, if you have space allocated in separate volume groups for root and for your home directory, you would uncomment the volume_list line of the lvm.conf file and add these volume groups as entries to volume_list as follows: Note If no local volume groups are present on a node to be activated outside of the cluster manager, you must still initialize the volume_list entry as volume_list = [] . Rebuild the initramfs boot image to guarantee that the boot image will not try to activate a volume group controlled by the cluster. Update the initramfs device with the following command. This command may take up to a minute to complete. Reboot the node. Note If you have installed a new Linux kernel since booting the node on which you created the boot image, the new initrd image will be for the kernel that was running when you created it and not for the new kernel that is running when you reboot the node. You can ensure that the correct initrd device is in use by running the uname -r command before and after the reboot to determine the kernel release that is running. If the releases are not the same, update the initrd file after rebooting with the new kernel and then reboot the node. When the node has rebooted, check whether the cluster services have started up again on that node by executing the pcs cluster status command on that node. If this yields the message Error: cluster is not currently running on this node then enter the following command. Alternately, you can wait until you have rebooted each node in the cluster and start cluster services on all of the nodes in the cluster with the following command.
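After the reboot, a quick check can confirm that only the local volume groups listed in volume_list were activated automatically. This is an illustrative verification step, not part of the original procedure:
# lvscan
Logical volumes in the cluster-managed volume group (for example, my_vg ) should be reported as inactive until the cluster manager activates them, while the local root and home volumes should be reported as ACTIVE.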
[ "lvmconf --enable-halvm --services --startstopservices", "vgs --noheadings -o vg_name my_vg rhel_home rhel_root", "volume_list = [ \"rhel_root\", \"rhel_home\" ]", "dracut -H -f /boot/initramfs-USD(uname -r).img USD(uname -r)", "pcs cluster start", "pcs cluster start --all" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-exclusiveactivenfs-haaa
Chapter 6. Installing a cluster on Azure with network customizations
Chapter 6. Installing a cluster on Azure with network customizations In OpenShift Container Platform version 4.15, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Microsoft Azure. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 6.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager .
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: $ ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID.
If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Additional resources Installation configuration parameters for Azure 6.5.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.5.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 6.1. Machine types based on 64-bit x86 architecture standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 6.5.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 6.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 6.5.4. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 6.5.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 6.5.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 13 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 14 region: centralus 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19 1 10 15 17 Required. The installation program prompts you for this value. 2 6 11 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 12 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 13 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. 
If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 14 Specify the name of the resource group that contains the DNS zone for your base domain. 16 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 18 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 19 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.5.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. 
For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.6. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 6.7. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. 
Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 6.8. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 6.8.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 6.2. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. 
Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 6.3. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 6.4. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 6.5. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. 
The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 6.6. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 6.7. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 6.8. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. 
ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 6.9. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 6.10. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 6.11. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 6.12. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 6.9. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations. Note This configuration is necessary to run both Linux and Windows nodes in the same cluster. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. 
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. Note For more information about using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 6.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.11. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 6.11.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 6.11.2. Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 6.11.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions: Example 6.3. 
Required Azure permissions Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.Resources/subscriptions/resourceGroups/delete Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/delete Microsoft.Authorization/roleAssignments/write Microsoft.Authorization/roleDefinitions/read Microsoft.Authorization/roleDefinitions/write Microsoft.Authorization/roleDefinitions/delete Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/containers/delete Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.ManagedIdentity/userAssignedIdentities/delete Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete Microsoft.Storage/register/action Microsoft.ManagedIdentity/register/action Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 6.11.2.2. Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. 
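Before you begin the procedure, it can be useful to confirm that your Azure CLI session is authenticated against the subscription that you plan to pass to the ccoctl utility. The following commands are a minimal sketch and are not part of the documented procedure; the subscription ID value is a placeholder:

# Show the subscription ID that the current Azure CLI session uses
az account show --query id -o tsv

# Optional: switch the session to a different subscription before running ccoctl
az account set --subscription <subscription_id>

The ID that az account show returns should match the value that you later supply with the --subscription-id flag of the ccoctl azure create-all command.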
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 6.11.2.3. 
Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> 1 # ... 1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 6.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 6.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 6.15. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
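As an optional sanity check after logging in and before you customize the cluster, you can confirm that the nodes and cluster Operators are healthy. This is a minimal sketch rather than a documented verification step; it assumes that the KUBECONFIG environment variable is exported as shown in the section about logging in to the cluster by using the CLI:

# All nodes should report the Ready status
oc get nodes

# Every cluster Operator should report AVAILABLE as True and DEGRADED as False
oc get clusteroperators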
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4", "controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 13 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 14 region: centralus 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 
19", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift 
credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "az login", "ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7", "ls <path_to_ccoctl_output_dir>/manifests", "azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_azure/installing-azure-network-customizations
Chapter 5. Installation environment options for Red Hat Decision Manager
Chapter 5. Installation environment options for Red Hat Decision Manager With Red Hat Decision Manager, you can set up a development environment to develop business applications, a runtime environment to run those applications to support decisions, or both. Development environment : Typically consists of one Business Central installation and at least one KIE Server installation. You can use Business Central to design decisions and other artifacts, and you can use KIE Server to execute and test the artifacts that you created. Runtime environment : Consists of one or more KIE Server instances with or without Business Central. Business Central has an embedded Process Automation Manager controller. If you install Business Central, use the Menu → Deploy → Execution servers page to create and maintain containers. If you want to automate KIE Server management without Business Central, you can use the headless Process Automation Manager controller. You can also cluster both development and runtime environments. A clustered development or runtime environment consists of a unified group or cluster of two or more servers. The primary benefit of clustering Red Hat Decision Manager development environments is high availability and enhanced collaboration, while the primary benefit of clustering Red Hat Decision Manager runtime environments is high availability and load balancing. High availability decreases the chance of data loss when a single server fails. When a server fails, another server fills the gap by providing a copy of the data that was on the failed server. When the failed server comes online again, it resumes its place in the cluster. Note Clustering of the runtime environment is currently supported on Red Hat JBoss EAP 7.4 and Red Hat OpenShift Container Platform only.
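The practical difference between the two controller options is which URL a managed KIE Server registers against. The following is a minimal sketch of that wiring on Red Hat JBoss EAP using the standard KIE Server startup properties; the host names, ports, credentials, and server ID are placeholders rather than values from this guide.

# Register a KIE Server with the controller embedded in Business Central (placeholder values).
<EAP_HOME>/bin/standalone.sh -c standalone-full.xml \
  -Dorg.kie.server.id=my-kie-server \
  -Dorg.kie.server.location=http://kieserver-host:8080/kie-server/services/rest/server \
  -Dorg.kie.server.controller=http://business-central-host:8080/business-central/rest/controller \
  -Dorg.kie.server.controller.user=controllerUser \
  -Dorg.kie.server.controller.pwd=controllerUser1234
# For a headless Process Automation Manager controller, only the controller URL changes,
# typically to a form such as http://controller-host:8080/controller/rest/controller.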
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/installation-options-ref_planning
Chapter 20. Integrating with Microsoft Sentinel notifier
Chapter 20. Integrating with Microsoft Sentinel notifier Microsoft Sentinel is a security information and event management (SIEM) solution which acts on Red Hat Advanced Cluster Security for Kubernetes (RHACS) alerts and audit logs. 20.1. Viewing the log analytics to detect threats By creating a Microsoft Sentinel integration, you can view the log analytics to detect threats. Prerequisites You have created a data collection rule, log analytics workspace, and service principal on Microsoft Azure. You have configured a client secret or client certificate at the service principal for authentication. You have created a log analytics schema by using the TimeGenerated and msg fields in JSON format. Important You need to create separate log analytics tables for audit logs and alerts, and both data sources use the same schema. To create a schema, upload the following content to Microsoft Sentinel: Example JSON { "TimeGenerated": "2024-09-03T10:56:58.5010069Z", 1 "msg": { 2 "id": "1abe30d1-fa3a-xxxx-xxxx-781f0a12228a", 3 "policy" : {} } } 1 The timestamp for the alert. 2 Contains the message details. 3 The payload of the message, either alert or audit log. Procedure In the RHACS portal, click Platform Configuration Integrations . Scroll down to the Notifier Integrations section, and then click Microsoft Sentinel . To create a new integration, click New integration . In the Create integration page, provide the following information: Integration name : Specify a name for the integration. Log ingestion endpoint : Enter the data collection endpoint. You can find the endpoint in the Microsoft Azure portal. For more information, see Data collection rules (DCRs) in Azure Monitor (Microsoft Azure documentation). Directory tenant ID : Enter the tenant ID which uniquely identifies your Azure Active Directory (AAD) within the Microsoft cloud infrastructure. You can find the tenant ID in the Microsoft Azure portal. For more information, see Find tenant name and tenant ID in Azure Active Directory B2C (Microsoft Azure documentation). Application client ID : Enter the client ID which uniquely identifies the specific application registered within your AAD that needs access to resources. You can find the client ID in the Microsoft Entra portal for the service principal you have created. For more information, see Register applications (Microsoft Azure documentation). Choose the appropriate authentication method: If you want to use a secret, enter the secret value. You can find the secret in the Microsoft Azure portal. If you want to use a client certificate, enter the client certificate and private key. You can find the certificate ID and private key in the Microsoft Azure portal. For more information, see The new App registrations experience for Azure Active Directory B2C (Microsoft Azure documentation). Optional: Choose the appropriate method to configure the data collection rule configuration: Select the Enable alert DCR checkbox, if you want to enable the alert data collection rule configuration. To create an alert data collection rule, enter the alert data collection rule stream name and ID. You can find the stream name and ID in the Microsoft Azure portal. Select the Enable audit log DCR checkbox, if you want to enable audit data collection rule configuration. To create an audit data collection rule, enter the stream name and ID. You can find the stream name and ID in the Microsoft Azure portal. For more information, see Data collection rules (DCRs) in Azure Monitor (Microsoft Azure documentation). 
Optional: To test the new integration, click Test . To save the new integration, click Save . Verification In the RHACS portal, click Platform Configuration Integrations . Scroll down to the Notifier Integrations section, and then click Microsoft Sentinel . In the Integrations Microsoft Sentinel page, verify that the new integration has been created. Verify that the messages arrive in the correct log tables in your log analytics workspace.
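Before wiring RHACS to the workspace, it can help to confirm that the endpoint, stream name, and data collection rule (DCR) you plan to enter in the integration form accept a record shaped like the schema above. The following is a rough sketch using the Azure Logs Ingestion API; the endpoint, DCR immutable ID, and stream name are placeholders, and it assumes the service principal has been granted ingestion rights (for example, the Monitoring Metrics Publisher role) on the DCR.

# Sign in as the service principal first (az login --service-principal ...), then obtain
# a token scoped to Azure Monitor ingestion.
TOKEN=$(az account get-access-token --resource https://monitor.azure.com --query accessToken -o tsv)

# Post one hand-built record that matches the TimeGenerated/msg schema shown above.
curl -X POST \
  "<log_ingestion_endpoint>/dataCollectionRules/<dcr_immutable_id>/streams/<stream_name>?api-version=2023-01-01" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '[{"TimeGenerated": "2024-09-03T10:56:58.5010069Z", "msg": {"id": "test-alert", "policy": {}}}]'

An HTTP 204 response indicates the record was accepted; it should then appear in the corresponding custom table after the usual ingestion delay.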
[ "{ \"TimeGenerated\": \"2024-09-03T10:56:58.5010069Z\", 1 \"msg\": { 2 \"id\": \"1abe30d1-fa3a-xxxx-xxxx-781f0a12228a\", 3 \"policy\" : {} } }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/integrating/integrating-with-microsoft-sentinel-notifier
Chapter 11. Optimizing storage
Chapter 11. Optimizing storage Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources are working in an efficient manner. 11.1. Available persistent storage options Understand your persistent storage options so that you can optimize your OpenShift Container Platform environment. Table 11.1. Available storage options Storage type Description Examples Block Presented to the operating system (OS) as a block device Suitable for applications that need full control of storage and operate at a low level on files bypassing the file system Also referred to as a Storage Area Network (SAN) Non-shareable, which means that only one client at a time can mount an endpoint of this type AWS EBS and VMware vSphere support dynamic persistent volume (PV) provisioning natively in OpenShift Container Platform. File Presented to the OS as a file system export to be mounted Also referred to as Network Attached Storage (NAS) Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales. RHEL NFS, NetApp NFS [1] , and Vendor NFS Object Accessible through a REST API endpoint Configurable for use in the OpenShift Container Platform Registry Applications must build their drivers into the application and/or container. AWS S3 NetApp NFS supports dynamic PV provisioning when using the Trident plug-in. Important Currently, CNS is not supported in OpenShift Container Platform 4.7. 11.2. Recommended configurable storage technology The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application. Table 11.2. Recommended and configurable storage technology Storage type ROX 1 RWX 2 Registry Scaled registry Metrics 3 Logging Apps 1 ReadOnlyMany 2 ReadWriteMany 3 Prometheus is the underlying technology used for metrics. 4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk. 5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics. 6 For logging, using any shared storage would be an anti-pattern. One volume per elasticsearch is required. 7 Object storage is not consumed through OpenShift Container Platform's PVs or PVCs. Apps must integrate with the object storage REST API. Block Yes 4 No Configurable Not configurable Recommended Recommended Recommended File Yes 4 Yes Configurable Configurable Configurable 5 Configurable 6 Recommended Object Yes Yes Recommended Recommended Not configurable Not configurable Not configurable 7 Note A scaled registry is an OpenShift Container Platform registry where two or more pod replicas are running. 11.2.1. Specific application storage recommendations Important Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. 
Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 11.2.1.1. Registry In a non-scaled/high-availability (HA) OpenShift Container Platform registry cluster deployment: The storage technology does not have to support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage followed by block storage. File storage is not recommended for OpenShift Container Platform registry cluster deployment with production workloads. 11.2.1.2. Scaled registry In a scaled/HA OpenShift Container Platform registry cluster deployment: The storage technology must support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage. Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported. Object storage should be S3 or Swift compliant. For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage. Block storage is not configurable. 11.2.1.3. Metrics In an OpenShift Container Platform hosted metrics cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. Important It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads. 11.2.1.4. Logging In an OpenShift Container Platform hosted logging cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. 11.2.1.5. Applications Application use cases vary from application to application, as described in the following examples: Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster. Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer. 11.2.2. Other specific application storage recommendations Important It is not recommended to use RAID configurations on Write intensive workloads, such as etcd . If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads. Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases. Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage. The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices . 11.3. Data storage management The following table summarizes the main directories that OpenShift Container Platform components write data to. Table 11.3. Main directories for storing OpenShift Container Platform data Directory Notes Sizing Expected growth /var/log Log files for all components. 10 to 30 GB. Log files can grow quickly; size can be managed by growing disks or by using log rotate. /var/lib/etcd Used for etcd storage when storing the database. Less than 20 GB. Database can grow up to 8 GB. Will grow slowly with the environment. Only storing metadata. 
Additional 20-25 GB for every additional 8 GB of memory. /var/lib/containers This is the mount point for the CRI-O runtime. Storage used for active container runtimes, including pods, and storage of local images. Not used for registry storage. 50 GB for a node with 16 GB memory. Note that this sizing should not be used to determine minimum cluster requirements. Additional 20-25 GB for every additional 8 GB of memory. Growth is limited by capacity for running containers. /var/lib/kubelet Ephemeral volume storage for pods. This includes anything external that is mounted into a container at runtime. Includes environment variables, kube secrets, and data volumes not backed by persistent volumes. Varies Minimal if pods requiring storage are using persistent volumes. If using ephemeral storage, this can grow quickly.
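To see how these directories are actually trending on a given node, a one-off debug pod is usually enough. This is a minimal sketch; the node name is a placeholder taken from oc get nodes, and the scan of /var/lib/containers and /var/lib/kubelet can take a while on busy nodes.

# Report current usage of the main data directories on one node.
oc debug node/<node_name> -- chroot /host \
  du -sh /var/log /var/lib/etcd /var/lib/containers /var/lib/kubelet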
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/scalability_and_performance/optimizing-storage
Chapter 2. Installing a cluster quickly on RHV
Chapter 2. Installing a cluster quickly on RHV You can quickly install a default, non-customized, OpenShift Container Platform cluster on a Red Hat Virtualization (RHV) cluster, similar to the one shown in the following diagram. The installation program uses installer-provisioned infrastructure to automate creating and deploying the cluster. To install a default cluster, you prepare the environment, run the installation program and answer its prompts. Then, the installation program creates the OpenShift Container Platform cluster. For an alternative to installing a default cluster, see Installing a cluster with customizations . Note This installation program is available for Linux and macOS only. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You have a supported combination of versions in the Support Matrix for OpenShift Container Platform on Red Hat Virtualization (RHV) . You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3. Requirements for the RHV environment To install and run an OpenShift Container Platform version 4.13 cluster, the RHV environment must meet the following requirements. Not meeting these requirements can cause the installation or process to fail. Additionally, not meeting these requirements can cause the OpenShift Container Platform cluster to fail days or weeks after installation. The following requirements for CPU, memory, and storage resources are based on default values multiplied by the default number of virtual machines the installation program creates. These resources must be available in addition to what the RHV environment uses for non-OpenShift Container Platform operations. By default, the installation program creates seven virtual machines during the installation process. First, it creates a bootstrap virtual machine to provide temporary services and a control plane while it creates the rest of the OpenShift Container Platform cluster. When the installation program finishes creating the cluster, deleting the bootstrap machine frees up its resources. If you increase the number of virtual machines in the RHV environment, you must increase the resources accordingly. Requirements The RHV version is 4.4. The RHV environment has one data center whose state is Up . The RHV data center contains an RHV cluster. 
The RHV cluster has the following resources exclusively for the OpenShift Container Platform cluster: Minimum 28 vCPUs: four for each of the seven virtual machines created during installation. 112 GiB RAM or more, including: 16 GiB or more for the bootstrap machine, which provides the temporary control plane. 16 GiB or more for each of the three control plane machines which provide the control plane. 16 GiB or more for each of the three compute machines, which run the application workloads. The RHV storage domain must meet these etcd backend performance requirements . For affinity group support: Three or more hosts in the RHV cluster. If necessary, you can disable affinity groups. For details, see Example: Removing all affinity groups for a non-production lab setup in Installing a cluster on RHV with customizations In production environments, each virtual machine must have 120 GiB or more. Therefore, the storage domain must provide 840 GiB or more for the default OpenShift Container Platform cluster. In resource-constrained or non-production environments, each virtual machine must have 32 GiB or more, so the storage domain must have 230 GiB or more for the default OpenShift Container Platform cluster. To download images from the Red Hat Ecosystem Catalog during installation and update procedures, the RHV cluster must have access to an internet connection. The Telemetry service also needs an internet connection to simplify the subscription and entitlement process. The RHV cluster must have a virtual network with access to the REST API on the RHV Manager. Ensure that DHCP is enabled on this network, because the VMs that the installer creates obtain their IP address by using DHCP. A user account and group with the following least privileges for installing and managing an OpenShift Container Platform cluster on the target RHV cluster: DiskOperator DiskCreator UserTemplateBasedVm TemplateOwner TemplateCreator ClusterAdmin on the target cluster Warning Apply the principle of least privilege: Avoid using an administrator account with SuperUser privileges on RHV during the installation process. The installation program saves the credentials you provide to a temporary ovirt-config.yaml file that might be compromised. Additional resources Example: Removing all affinity groups for a non-production lab setup . 2.4. Verifying the requirements for the RHV environment Verify that the RHV environment meets the requirements to install and run an OpenShift Container Platform cluster. Not meeting these requirements can cause failures. Important These requirements are based on the default resources the installation program uses to create control plane and compute machines. These resources include vCPUs, memory, and storage. If you change these resources or increase the number of OpenShift Container Platform machines, adjust these requirements accordingly. Procedure Check that the RHV version supports installation of OpenShift Container Platform version 4.13. In the RHV Administration Portal, click the ? help icon in the upper-right corner and select About . In the window that opens, make a note of the RHV Software Version . Confirm that the RHV version is 4.4. For more information about supported version combinations, see Support Matrix for OpenShift Container Platform on RHV . Inspect the data center, cluster, and storage. In the RHV Administration Portal, click Compute Data Centers . Confirm that the data center where you plan to install OpenShift Container Platform is accessible. 
Click the name of that data center. In the data center details, on the Storage tab, confirm the storage domain where you plan to install OpenShift Container Platform is Active . Record the Domain Name for use later on. Confirm Free Space has at least 230 GiB. Confirm that the storage domain meets these etcd backend performance requirements , which you can measure by using the fio performance benchmarking tool . In the data center details, click the Clusters tab. Find the RHV cluster where you plan to install OpenShift Container Platform. Record the cluster name for use later on. Inspect the RHV host resources. In the RHV Administration Portal, click Compute > Clusters . Click the cluster where you plan to install OpenShift Container Platform. In the cluster details, click the Hosts tab. Inspect the hosts and confirm they have a combined total of at least 28 Logical CPU Cores available exclusively for the OpenShift Container Platform cluster. Record the number of available Logical CPU Cores for use later on. Confirm that these CPU cores are distributed so that each of the seven virtual machines created during installation can have four cores. Confirm that, all together, the hosts have 112 GiB of Max free Memory for scheduling new virtual machines distributed to meet the requirements for each of the following OpenShift Container Platform machines: 16 GiB required for the bootstrap machine 16 GiB required for each of the three control plane machines 16 GiB for each of the three compute machines Record the amount of Max free Memory for scheduling new virtual machines for use later on. Verify that the virtual network for installing OpenShift Container Platform has access to the RHV Manager's REST API. From a virtual machine on this network, use curl to reach the RHV Manager's REST API: USD curl -k -u <username>@<profile>:<password> \ 1 https://<engine-fqdn>/ovirt-engine/api 2 1 For <username> , specify the user name of an RHV account with privileges to create and manage an OpenShift Container Platform cluster on RHV. For <profile> , specify the login profile, which you can get by going to the RHV Administration Portal login page and reviewing the Profile dropdown list. For <password> , specify the password for that user name. 2 For <engine-fqdn> , specify the fully qualified domain name of the RHV environment. For example: USD curl -k -u ocpadmin@internal:pw123 \ https://rhv-env.virtlab.example.com/ovirt-engine/api 2.5. Preparing the network environment on RHV Configure two static IP addresses for the OpenShift Container Platform cluster and create DNS entries using these addresses. Procedure Reserve two static IP addresses On the network where you plan to install OpenShift Container Platform, identify two static IP addresses that are outside the DHCP lease pool. Connect to a host on this network and verify that each of the IP addresses is not in use. For example, use Address Resolution Protocol (ARP) to check that none of the IP addresses have entries: USD arp 10.35.1.19 Example output 10.35.1.19 (10.35.1.19) -- no entry Reserve two static IP addresses following the standard practices for your network environment. Record these IP addresses for future reference. 
Create DNS entries for the OpenShift Container Platform REST API and apps domain names using this format: api.<cluster-name>.<base-domain> <ip-address> 1 *.apps.<cluster-name>.<base-domain> <ip-address> 2 1 For <cluster-name> , <base-domain> , and <ip-address> , specify the cluster name, base domain, and static IP address of your OpenShift Container Platform API. 2 Specify the cluster name, base domain, and static IP address of your OpenShift Container Platform apps for Ingress and the load balancer. For example: api.my-cluster.virtlab.example.com 10.35.1.19 *.apps.my-cluster.virtlab.example.com 10.35.1.20 2.6. Installing OpenShift Container Platform on RHV in insecure mode By default, the installer creates a CA certificate, prompts you for confirmation, and stores the certificate to use during installation. You do not need to create or install one manually. Although it is not recommended, you can override this functionality and install OpenShift Container Platform without verifying a certificate by installing OpenShift Container Platform on RHV in insecure mode. Warning Installing in insecure mode is not recommended, because it enables a potential attacker to perform a Man-in-the-Middle attack and capture sensitive credentials on the network. Procedure Create a file named ~/.ovirt/ovirt-config.yaml . Add the following content to ovirt-config.yaml : ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: "" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true 1 Specify the hostname or address of your oVirt engine. 2 Specify the fully qualified domain name of your oVirt engine. 3 Specify the admin password for your oVirt engine. Run the installer. 2.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. 
SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 2.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Open the ovirt-imageio port to the Manager from the machine running the installer. By default, the port is 54322 . Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
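The prerequisites above include opening the ovirt-imageio port to the Manager. Before starting the deployment, it is worth confirming from the machine that will run the installer that this port and the Manager API port are reachable; a failed check here is much cheaper than a failed installation. A rough sketch, assuming nc is available and reusing the example FQDN from earlier in this chapter:

# Check the RHV Manager API port and the default ovirt-imageio port from the installer host.
nc -zv rhv-env.virtlab.example.com 443
nc -zv rhv-env.virtlab.example.com 54322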
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Respond to the installation program prompts. Optional: For SSH Public Key , select a password-less public key, such as ~/.ssh/id_rsa.pub . This key authenticates connections with the new OpenShift Container Platform cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, select an SSH key that your ssh-agent process uses. For Platform , select ovirt . For Engine FQDN[:PORT] , enter the fully qualified domain name (FQDN) of the RHV environment. For example: rhv-env.virtlab.example.com:443 The installation program automatically generates a CA certificate. For Would you like to use the above certificate to connect to the Manager? , answer y or N . If you answer N , you must install OpenShift Container Platform in insecure mode. For Engine username , enter the user name and profile of the RHV administrator using this format: <username>@<profile> 1 1 For <username> , specify the user name of an RHV administrator. For <profile> , specify the login profile, which you can get by going to the RHV Administration Portal login page and reviewing the Profile dropdown list. For example: admin@internal . For Engine password , enter the RHV admin password. For Cluster , select the RHV cluster for installing OpenShift Container Platform. For Storage domain , select the storage domain for installing OpenShift Container Platform. For Network , select a virtual network that has access to the RHV Manager REST API. For Internal API Virtual IP , enter the static IP address you set aside for the cluster's REST API. For Ingress virtual IP , enter the static IP address you reserved for the wildcard apps domain. For Base Domain , enter the base domain of the OpenShift Container Platform cluster. If this cluster is exposed to the outside world, this must be a valid domain recognized by DNS infrastructure. For example, enter: virtlab.example.com For Cluster Name , enter the name of the cluster. For example, my-cluster . Use cluster name from the externally registered/resolvable DNS entries you created for the OpenShift Container Platform REST API and apps domain names. The installation program also gives this name to the cluster in the RHV environment. For Pull Secret , copy the pull secret from the pull-secret.txt file you downloaded earlier and paste it here. You can also get a copy of the same pull secret from the Red Hat OpenShift Cluster Manager . 
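After the prompts are answered, the installer runs unattended and prints the console URL and kubeadmin password only once at the end. If that terminal output is lost, the same details can be recovered from the asset directory passed with --dir; a sketch with a placeholder path:

# Recover the kubeadmin password and point oc at the new cluster.
cat <installation_directory>/auth/kubeadmin-password
export KUBECONFIG=<installation_directory>/auth/kubeconfig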
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You have completed the steps required to install the cluster. The remaining steps show you how to verify the cluster and troubleshoot the installation. 2.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> To learn more, see Getting started with the OpenShift CLI . 2.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 2.12. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A Troubleshooting If the installation fails, the installation program times out and displays an error message. To learn more, see Troubleshooting installation issues . 2.13. Accessing the OpenShift Container Platform web console on RHV After the OpenShift Container Platform cluster initializes, you can log in to the OpenShift Container Platform web console. Procedure Optional: In the Red Hat Virtualization (RHV) Administration Portal, open Compute Cluster . Verify that the installation program creates the virtual machines. Return to the command line where the installation program is running. When the installation program finishes, it displays the user name and temporary password for logging into the OpenShift Container Platform web console. In a browser, open the URL of the OpenShift Container Platform web console. 
The URL uses this format: 1 For <clustername>.<basedomain> , specify the cluster name and base domain. For example: 2.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.15. Troubleshooting common issues with installing on Red Hat Virtualization (RHV) Here are some common issues you might encounter, along with proposed causes and solutions. 2.15.1. CPU load increases and nodes go into a Not Ready state Symptom : CPU load increases significantly and nodes start going into a Not Ready state. Cause : The storage domain latency might be too high, especially for control plane nodes. Solution : Make the nodes ready again by restarting the kubelet service: USD systemctl restart kubelet Inspect the OpenShift Container Platform metrics service, which automatically gathers and reports on some valuable data such as the etcd disk sync duration. If the cluster is operational, use this data to help determine whether storage latency or throughput is the root issue. If so, consider using a storage resource that has lower latency and higher throughput. To get raw metrics, enter the following command as kubeadmin or user with cluster-admin privileges: USD oc get --insecure-skip-tls-verify --server=https://localhost:<port> --raw=/metrics To learn more, see Exploring Application Endpoints for the purposes of Debugging with OpenShift 4.x 2.15.2. Trouble connecting the OpenShift Container Platform cluster API Symptom : The installation program completes but the OpenShift Container Platform cluster API is not available. The bootstrap virtual machine remains up after the bootstrap process is complete. When you enter the following command, the response will time out. USD oc login -u kubeadmin -p *** <apiurl> Cause : The bootstrap VM was not deleted by the installation program and has not released the cluster's API IP address. Solution : Use the wait-for subcommand to be notified when the bootstrap process is complete: USD ./openshift-install wait-for bootstrap-complete When the bootstrap process is complete, delete the bootstrap virtual machine: USD ./openshift-install destroy bootstrap 2.16. Post-installation tasks After the OpenShift Container Platform cluster initializes, you can perform the following tasks. Optional: After deployment, add or replace SSH keys using the Machine Config Operator (MCO) in OpenShift Container Platform. Optional: Remove the kubeadmin user. Instead, use the authentication provider to create a user with cluster-admin privileges.
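For the last post-installation item, the usual approach is to remove the temporary kubeadmin user by deleting its secret, but only after an identity provider is configured and at least one other user holds the cluster-admin role, because the operation cannot be undone. A sketch of that single step:

# Remove the temporary kubeadmin credentials once another cluster-admin exists.
oc delete secrets kubeadmin -n kube-system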
[ "curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "arp 10.35.1.19", "10.35.1.19 (10.35.1.19) -- no entry", "api.<cluster-name>.<base-domain> <ip-address> 1 *.apps.<cluster-name>.<base-domain> <ip-address> 2", "api.my-cluster.virtlab.example.com 10.35.1.19 *.apps.my-cluster.virtlab.example.com 10.35.1.20", "ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: \"\" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "rhv-env.virtlab.example.com:443", "<username>@<profile> 1", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "console-openshift-console.apps.<clustername>.<basedomain> 1", "console-openshift-console.apps.my-cluster.virtlab.example.com", "systemctl restart kubelet", "oc get --insecure-skip-tls-verify --server=https://localhost:<port> --raw=/metrics", "oc login -u kubeadmin -p *** <apiurl>", "./openshift-install wait-for bootstrap-complete", "./openshift-install destroy bootstrap" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_rhv/installing-rhv-default
Administration Guide (Common Criteria Edition)
Administration Guide (Common Criteria Edition) Red Hat Certificate System 10 Red Hat Certificate System 10.4 Common Criteria Edition Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/index
Chapter 3. Scaling storage capacity of AWS OpenShift Data Foundation cluster
Chapter 3. Scaling storage capacity of AWS OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on AWS cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 3.1. Scaling up storage capacity on a cluster To increase the storage capacity in a dynamically created storage cluster on an user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 3.2. Scaling out storage capacity on a AWS cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with required storage and enough hardware resources in terms of CPU and RAM. 
Practically there is no limit on the number of nodes which can be added but from the support perspective 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps Adding new node Scaling up the storage capacity Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 3.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in the multiple of three, each of them in different failure domains. While it is recommended to add nodes in the multiple of three, you still have the flexibility to add one node at a time in the flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk size and types. The new nodes to be added should have the disk of the same type and size which was used during OpenShift Data Foundation deployment. 3.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 3.2.1.2. Adding a node to an user-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. 
Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 3.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster .
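The web-console verification steps above can also be carried out from the command line. The following is a minimal sketch that reuses the commands listed at the end of this chapter; the pod and node names in angle brackets are placeholders you substitute with your own values.

# Confirm that the new OSD pods are running (three new rook-ceph-osd pods are expected).
oc get pods -n openshift-storage | grep rook-ceph-osd

# Confirm that the corresponding PVCs were created and are Bound.
oc get pvc -n openshift-storage | grep ocs-deviceset

# Optional, for cluster-wide encryption: find the node that hosts a new OSD pod,
# open a debug shell on it, and look for the "crypt" keyword beside the ocs-deviceset names.
oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/<OSD-pod-name>
oc debug node/<node-name>
chroot /host
lsblk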
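Similarly, the node-addition steps in Section 3.2.1 can be driven from the CLI. This sketch combines the documented label and CSR commands with an oc scale call; the machine set name and replica count are hypothetical examples, and the zone placement and flexible scaling considerations described above still apply.

# Installer-provisioned infrastructure: grow an existing machine set by raising its replica count.
# (The machine set name is a placeholder; list them with "oc get machinesets -n openshift-machine-api".)
oc scale machineset <machine_set_name> -n openshift-machine-api --replicas=<new_count>

# User-provisioned infrastructure: approve any pending CSRs for the new node.
oc get csr
oc adm certificate approve <Certificate_Name>

# Apply the OpenShift Data Foundation label to the new node and verify that it is listed.
oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1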
[ "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/scaling_storage/scaling_storage_capacity_of_aws_openshift_data_foundation_cluster
Chapter 1. Preparing to install on Nutanix
Chapter 1. Preparing to install on Nutanix Before you install an OpenShift Container Platform cluster, be sure that your Nutanix environment meets the following requirements. 1.1. Nutanix version requirements You must install the OpenShift Container Platform cluster to a Nutanix environment that meets the following requirements. Table 1.1. Version requirements for Nutanix virtual environments Component Required version Nutanix AOS 6.5.2.7 or later Prism Central pc.2022.6 or later 1.2. Environment requirements Before you install an OpenShift Container Platform cluster, review the following Nutanix AOS environment requirements. 1.2.1. Infrastructure requirements You can install OpenShift Container Platform on on-premise Nutanix clusters, Nutanix Cloud Clusters (NC2) on Amazon Web Services (AWS), or NC2 on Microsoft Azure. For more information, see Nutanix Cloud Clusters on AWS and Nutanix Cloud Clusters on Microsoft Azure . 1.2.2. Required account privileges The installation program requires access to a Nutanix account with the necessary permissions to deploy the cluster and to maintain the daily operation of it. The following options are available to you: You can use a local Prism Central user account with administrative privileges. Using a local account is the quickest way to grant access to an account with the required permissions. If your organization's security policies require that you use a more restrictive set of permissions, use the permissions that are listed in the following table to create a custom Cloud Native role in Prism Central. You can then assign the role to a user account that is a member of a Prism Central authentication directory. Consider the following when managing this user account: When assigning entities to the role, ensure that the user can access only the Prism Element and subnet that are required to deploy the virtual machines. Ensure that the user is a member of the project to which it needs to assign virtual machines. For more information, see the Nutanix documentation about creating a Custom Cloud Native role , assigning a role , and adding a user to a project . Example 1.1. Required permissions for creating a Custom Cloud Native role Nutanix Object When required Required permissions in Nutanix API Description Categories Always Create_Category_Mapping Create_Or_Update_Name_Category Create_Or_Update_Value_Category Delete_Category_Mapping Delete_Name_Category Delete_Value_Category View_Category_Mapping View_Name_Category View_Value_Category Create, read, and delete categories that are assigned to the OpenShift Container Platform machines. Images Always Create_Image Delete_Image View_Image Create, read, and delete the operating system images used for the OpenShift Container Platform machines. Virtual Machines Always Create_Virtual_Machine Delete_Virtual_Machine View_Virtual_Machine Create, read, and delete the OpenShift Container Platform machines. Clusters Always View_Cluster View the Prism Element clusters that host the OpenShift Container Platform machines. Subnets Always View_Subnet View the subnets that host the OpenShift Container Platform machines. Projects If you will associate a project with compute machines, control plane machines, or all machines. View_Project View the projects defined in Prism Central and allow a project to be assigned to the OpenShift Container Platform machines. 1.2.3. Cluster limits Available resources vary between clusters. 
The number of possible clusters within a Nutanix environment is limited primarily by available storage space and any limitations associated with the resources that the cluster creates, and resources that you require to deploy the cluster, such as IP addresses and networks. 1.2.4. Cluster resources A minimum of 800 GB of storage is required to use a standard cluster. When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your Nutanix instance. Although these resources use 856 GB of storage, the bootstrap node is destroyed as part of the installation process. A standard OpenShift Container Platform installation creates the following resources: 1 label Virtual machines: 1 disk image 1 temporary bootstrap node 3 control plane nodes 3 compute machines 1.2.5. Networking requirements You must use either AHV IP Address Management (IPAM) or Dynamic Host Configuration Protocol (DHCP) for the network and ensure that it is configured to provide persistent IP addresses to the cluster machines. Additionally, create the following networking resources before you install the OpenShift Container Platform cluster: IP addresses DNS records Note It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, an NTP server prevents errors typically associated with asynchronous server clocks. 1.2.5.1. Required IP Addresses An installer-provisioned installation requires two static virtual IP (VIP) addresses: A VIP address for the API is required. This address is used to access the cluster API. A VIP address for ingress is required. This address is used for cluster ingress traffic. You specify these IP addresses when you install the OpenShift Container Platform cluster. 1.2.5.2. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the Nutanix instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. If you use your own DNS or DHCP server, you must also create records for each node, including the bootstrap, control plane, and compute nodes. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. Table 1.2. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by clients external to the cluster and by all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by clients external to the cluster and by all the nodes within the cluster. 1.3. Configuring the Cloud Credential Operator utility The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). To install a cluster on Nutanix, you must set the CCO to manual mode as part of the installation process. 
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: $ oc image extract $CCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: $ chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: $ ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Preparing to update a cluster with manually maintained credentials
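For repeated installs, the extraction procedure above can be wrapped in a small shell script. This is only a convenience sketch that reuses the exact commands from this section; the pull secret location and RHEL version are assumptions that you should adjust for your environment.

#!/bin/bash
# Sketch: extract the ccoctl binary by following the documented steps.
set -euo pipefail

PULL_SECRET=~/.pull-secret    # assumed path to your pull secret
RHEL_VERSION=rhel9            # rhel8 or rhel9, matching the host

RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' "$RELEASE_IMAGE" -a "$PULL_SECRET")

oc image extract "$CCO_IMAGE" --file="/usr/bin/ccoctl.${RHEL_VERSION}" -a "$PULL_SECRET"
chmod 775 "ccoctl.${RHEL_VERSION}"
"./ccoctl.${RHEL_VERSION}"    # prints the tool help as a smoke test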
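As a closing illustration for this chapter, the two records required by Table 1.2 might look as follows in a BIND-style zone file. Everything shown is hypothetical: the cluster name ocp, the base domain example.com, and the TEST-NET addresses standing in for the API and Ingress VIPs.

; Hypothetical zone file fragment for cluster "ocp" under base domain "example.com".
api.ocp.example.com.      IN  A  192.0.2.10
*.apps.ocp.example.com.   IN  A  192.0.2.11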
[ "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command." ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_nutanix/preparing-to-install-on-nutanix
Chapter 1. Planning for automation mesh in your VM-based Red Hat Ansible Automation Platform environment
Chapter 1. Planning for automation mesh in your VM-based Red Hat Ansible Automation Platform environment The following topics contain information to help plan an automation mesh deployment in your VM-based Ansible Automation Platform environment. The subsequent sections explain the concepts that comprise automation mesh, in addition to providing examples of how you can design automation mesh topologies. Simple to complex topology examples are included to illustrate the various ways you can deploy automation mesh. 1.1. About automation mesh Automation mesh is an overlay network intended to ease the distribution of work across a large and dispersed collection of workers through nodes that establish peer-to-peer connections with each other using existing networks. Red Hat Ansible Automation Platform 2 replaces Ansible Tower and isolated nodes with Ansible Automation Platform and automation hub. Ansible Automation Platform provides the control plane for automation through its UI, RESTful API, RBAC, workflows and CI/CD integration, while automation mesh can be used for setting up, discovering, changing or modifying the nodes that form the control and execution layers. Automation mesh uses TLS encryption for communication, so traffic that traverses external networks (the internet or other) is encrypted in transit. Automation mesh introduces: Dynamic cluster capacity that scales independently, enabling you to create, register, group, ungroup and deregister nodes with minimal downtime. Control and execution plane separation that enables you to scale playbook execution capacity independently from control plane capacity. Deployment choices that are resilient to latency, reconfigurable without outage, and that dynamically re-route to choose a different path when outages occur. Connectivity that includes bi-directional, multi-hopped mesh communication possibilities which are Federal Information Processing Standards (FIPS) compliant. 1.2. Control and execution planes Automation mesh makes use of unique node types to create both the control and execution plane. Learn more about the control and execution plane and their node types before designing your automation mesh topology. 1.2.1. Control plane The control plane consists of hybrid and control nodes. Instances in the control plane run persistent automation controller services such as the web server and task dispatcher, in addition to project updates and management jobs. Hybrid nodes - this is the default node type for control plane nodes, responsible for automation controller runtime functions like project updates, management jobs and ansible-runner task operations. Hybrid nodes are also used for automation execution. Control nodes - control nodes run project and inventory updates and system jobs, but not regular jobs. Execution capabilities are disabled on these nodes. 1.2.2. Execution plane The execution plane consists of execution nodes that execute automation on behalf of the control plane and have no control functions. Hop nodes serve to route traffic between nodes. Nodes in the execution plane only run user-space jobs, and may be geographically separated, with high latency, from the control plane. Execution nodes - Execution nodes run jobs under ansible-runner with podman isolation. This node type is similar to isolated nodes. This is the default node type for execution plane nodes. Hop nodes - similar to a jump host, hop nodes route traffic to other execution nodes. Hop nodes cannot execute automation. 1.2.3. 
Peers Peer relationships define node-to-node connections. You can define peers within the [automationcontroller] and [execution_nodes] groups or using the [automationcontroller:vars] or [execution_nodes:vars] groups 1.2.4. Defining automation mesh node types The examples in this section demonstrate how to set the node type for the hosts in your inventory file. You can set the node_type for single nodes in the control plane or execution plane inventory groups. To define the node type for an entire group of nodes, set the node_type in the vars stanza for the group. The permitted values for node_type in the control plane [automationcontroller] group are hybrid (default) and control . The permitted values for node_type in the [execution_nodes] group are execution (default) and hop . Hybrid node The following inventory consists of a single hybrid node in the control plane: [automationcontroller] control-plane-1.example.com Control node The following inventory consists of a single control node in the control plane: [automationcontroller] control-plane-1.example.com node_type=control If you set node_type to control in the vars stanza for the control plane nodes, then all of the nodes in control plane are control nodes. [automationcontroller] control-plane-1.example.com [automationcontroller:vars] node_type=control Execution node The following stanza defines a single execution node in the execution plane: [execution_nodes] execution-plane-1.example.com Hop node The following stanza defines a single hop node and an execution node in the execution plane. The node_type variable is set for every individual node. [execution_nodes] execution-plane-1.example.com node_type=hop execution-plane-2.example.com If you want to set the node_type at the group level, you must create separate groups for the execution nodes and the hop nodes. [execution_nodes] execution-plane-1.example.com execution-plane-2.example.com [execution_group] execution-plane-2.example.com [execution_group:vars] node_type=execution [hop_group] execution-plane-1.example.com [hop_group:vars] node_type=hop Peer connections Create node-to-node connections using the peers= host variable. The following example connects control-plane-1.example.com to execution-node-1.example.com and execution-node-1.example.com to execution-node-2.example.com : [automationcontroller] control-plane-1.example.com peers=execution-node-1.example.com [automationcontroller:vars] node_type=control [execution_nodes] execution-node-1.example.com peers=execution-node-2.example.com execution-node-2.example.com Additional resources See the example automation mesh topologies in this guide for more examples of how to implement mesh nodes.
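Putting the preceding fragments together, a small end-to-end inventory might look like the following sketch. The hostnames are placeholders, and the topology (one control node reaching two execution nodes through a single hop node) is only one of many valid layouts; adjust the peer relationships to match your network.

# Illustrative inventory: the control node peers to a hop node, and both execution
# nodes peer to the same hop node, so automation traffic between the planes flows through it.
[automationcontroller]
control-plane-1.example.com peers=hop-node-1.example.com

[automationcontroller:vars]
node_type=control

[execution_nodes]
hop-node-1.example.com
execution-node-1.example.com peers=hop-node-1.example.com
execution-node-2.example.com peers=hop-node-1.example.com

[hop_group]
hop-node-1.example.com

[hop_group:vars]
node_type=hop

[execution_group]
execution-node-1.example.com
execution-node-2.example.com

[execution_group:vars]
node_type=execution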
[ "[automationcontroller] control-plane-1.example.com", "[automationcontroller] control-plane-1.example.com node_type=control", "[automationcontroller] control-plane-1.example.com [automationcontroller:vars] node_type=control", "[execution_nodes] execution-plane-1.example.com", "[execution_nodes] execution-plane-1.example.com node_type=hop execution-plane-2.example.com", "[execution_nodes] execution-plane-1.example.com execution-plane-2.example.com [execution_group] execution-plane-2.example.com [execution_group:vars] node_type=execution [hop_group] execution-plane-1.example.com [hop_group:vars] node_type=hop", "[automationcontroller] control-plane-1.example.com peers=execution-node-1.example.com [automationcontroller:vars] node_type=control [execution_nodes] execution-node-1.example.com peers=execution-node-2.example.com execution-node-2.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/automation_mesh_for_vm_environments/assembly-planning-mesh
6.2. Monitoring and Diagnosing Performance Problems
6.2. Monitoring and Diagnosing Performance Problems Red Hat Enterprise Linux 7 provides a number of tools that are useful for monitoring system performance and diagnosing performance problems related to processors and their configuration. This section outlines the available tools and gives examples of how to use them to monitor and diagnose processor-related performance issues. 6.2.1. turbostat Turbostat prints counter results at specified intervals to help administrators identify unexpected behavior in servers, such as excessive power usage, failure to enter deep sleep states, or system management interrupts (SMIs) being created unnecessarily. The turbostat tool is part of the kernel-tools package. It is supported for use on systems with AMD64 and Intel 64 processors. It requires root privileges to run, as well as processor support for invariant time stamp counters and the APERF and MPERF model-specific registers. For usage examples, see the man page: 6.2.2. numastat Important This tool received substantial updates in the Red Hat Enterprise Linux 6 life cycle. While the default output remains compatible with the original tool written by Andi Kleen, supplying any options or parameters to numastat significantly changes the format of its output. The numastat tool displays per-NUMA-node memory statistics for processes and the operating system and shows administrators whether process memory is spread throughout a system or centralized on specific nodes. Cross-reference numastat output with per-processor top output to confirm that process threads are running on the same node from which process memory is allocated. Numastat is provided by the numactl package. For further information about numastat output, see the man page: 6.2.3. /proc/interrupts The /proc/interrupts file lists the number of interrupts sent to each processor from a particular I/O device. It displays the interrupt request (IRQ) number, the number of that type of interrupt request handled by each processor in the system, the type of interrupt sent, and a comma-separated list of devices that respond to the listed interrupt request. If a particular application or device is generating a large number of interrupt requests to be handled by a remote processor, its performance is likely to suffer. In this case, poor performance can be alleviated by having a processor on the same node as the application or device handle the interrupt requests. For details on how to assign interrupt handling to a specific processor, see Section 6.3.7, "Setting Interrupt Affinity on AMD64 and Intel 64" . 6.2.4. Cache and Memory Bandwidth Monitoring with pqos The pqos utility, which is available from the intel-cmt-cat package, enables you to monitor CPU cache and memory bandwidth on recent Intel processors. The pqos utility provides a cache and memory monitoring tool similar to the top utility. It monitors: The instructions per cycle (IPC). The count of last-level cache misses (MISSES). The size in kilobytes that the program executing in a given CPU occupies in the LLC. The bandwidth to local memory (MBL). The bandwidth to remote memory (MBR). Use the following command to start the monitoring tool: Items in the output are sorted by the highest LLC occupancy. Additional Resources For a general overview of the pqos utility and the related processor features, see Section 2.14, "pqos" . 
For an example of how using CAT can minimize the impact of a noisy neighbor virtual machine on the network performance of Data Plane Development Kit (DPDK), see the Increasing Platform Determinism with Platform Quality of Service for the Data Plane Development Kit Intel white paper.
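The tools covered in this section can be combined into a quick triage pass from a root shell. The following sketch shows hedged example invocations; the process ID and network device name are placeholders, and option spellings can vary between versions, so consult each tool's man page before relying on them.

# Sample turbostat counters for the duration of a workload (here, a 10 second sleep).
turbostat sleep 10

# Show per-NUMA-node memory usage for one process (the PID is a placeholder).
numastat -p <pid>

# See how interrupt requests for one device are spread across processors
# ("ens3" is a hypothetical device name).
grep -E 'CPU|ens3' /proc/interrupts

# Start the cache and memory bandwidth monitor described above.
pqos --mon-top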
[ "man turbostat", "man numastat", "pqos --mon-top" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-CPU-Monitoring_and_diagnosing_performance_problems
Administering Red Hat Satellite
Administering Red Hat Satellite Red Hat Satellite 6.16 Administer users and permissions, manage organizations and locations, back up and restore Satellite, maintain Satellite, and more Red Hat Satellite Documentation Team [email protected]
[ "satellite-maintain service list", "satellite-maintain service status", "satellite-maintain service stop", "satellite-maintain service start", "satellite-maintain service restart", "satellite-maintain backup offline --skip-pulp-content --assumeyes /var/backup", "satellite-maintain service stop satellite-maintain service disable", "rsync --archive --partial --progress --compress /var/lib/pulp/ target_server.example.com:/var/lib/pulp/", "du -sh /var/lib/pulp/", "satellite-maintain backup offline --assumeyes /var/backup", "satellite-maintain service stop satellite-maintain service disable", "subscription-manager register subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms --enable=rhel-9-for-x86_64-baseos-rpms --enable=satellite-maintenance-6.16-for-rhel-9-x86_64-rpms", "subscription-manager register subscription-manager repos --disable=* subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=satellite-maintenance-6.16-for-rhel-8-x86_64-rpms dnf module enable satellite-maintenance:el8", "dnf install satellite-clone", "satellite-clone", "cp /etc/foreman-installer/custom-hiera.yaml /etc/foreman-installer/custom-hiera.original", "satellite-installer --tuning medium", "satellite-maintain service status --only postgresql", "subscription-manager repos --disable \"*\"", "subscription-manager repos --enable=satellite-6.16-for-rhel-9-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-9-x86_64-rpms --enable=rhel-9-for-x86_64-baseos-rpms --enable=rhel-9-for-x86_64-appstream-rpms", "dnf repolist enabled", "subscription-manager repos --disable \"*\"", "subscription-manager repos --enable=satellite-6.16-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms", "dnf module enable satellite:el8", "dnf repolist enabled", "dnf install postgresql-server postgresql-evr postgresql-contrib", "postgresql-setup initdb", "vi /var/lib/pgsql/data/postgresql.conf", "listen_addresses = '*'", "password_encryption=scram-sha-256", "vi /var/lib/pgsql/data/pg_hba.conf", "host all all Satellite_ip /32 scram-sha-256", "systemctl enable --now postgresql", "firewall-cmd --add-service=postgresql", "firewall-cmd --runtime-to-permanent", "su - postgres -c psql", "CREATE USER \"foreman\" WITH PASSWORD ' Foreman_Password '; CREATE USER \"candlepin\" WITH PASSWORD ' Candlepin_Password '; CREATE USER \"pulp\" WITH PASSWORD ' Pulpcore_Password '; CREATE DATABASE foreman OWNER foreman; CREATE DATABASE candlepin OWNER candlepin; CREATE DATABASE pulpcore OWNER pulp;", "postgres=# \\c pulpcore You are now connected to database \"pulpcore\" as user \"postgres\".", "pulpcore=# CREATE EXTENSION IF NOT EXISTS \"hstore\"; CREATE EXTENSION", "\\q", "PGPASSWORD=' Foreman_Password ' psql -h postgres.example.com -p 5432 -U foreman -d foreman -c \"SELECT 1 as ping\" PGPASSWORD=' Candlepin_Password ' psql -h postgres.example.com -p 5432 -U candlepin -d candlepin -c \"SELECT 1 as ping\" PGPASSWORD=' Pulpcore_Password ' psql -h postgres.example.com -p 5432 -U pulp -d pulpcore -c \"SELECT 1 as ping\"", "satellite-maintain service stop --exclude postgresql", "satellite-maintain backup online --preserve-directory --skip-pulp-content /var/migration_backup", "PGPASSWORD=' Foreman_Password ' pg_restore -h postgres.example.com -U foreman -d foreman < /var/migration_backup/foreman.dump PGPASSWORD=' Candlepin_Password ' pg_restore -h 
postgres.example.com -U candlepin -d candlepin < /var/migration_backup/candlepin.dump PGPASSWORD=' Pulpcore_Password ' pg_restore -h postgres.example.com -U pulp -d pulpcore < /var/migration_backup/pulpcore.dump", "satellite-installer --katello-candlepin-manage-db false --katello-candlepin-db-host postgres.example.com --katello-candlepin-db-name candlepin --katello-candlepin-db-user candlepin --katello-candlepin-db-password Candlepin_Password --foreman-proxy-content-pulpcore-manage-postgresql false --foreman-proxy-content-pulpcore-postgresql-host postgres.example.com --foreman-proxy-content-pulpcore-postgresql-db-name pulpcore --foreman-proxy-content-pulpcore-postgresql-user pulp --foreman-proxy-content-pulpcore-postgresql-password Pulpcore_Password --foreman-db-manage false --foreman-db-host postgres.example.com --foreman-db-database foreman --foreman-db-username foreman --foreman-db-password Foreman_Password", "satellite-maintain packages remove postgresql-server", "rm -fr /var/lib/pgsql/data", "satellite-maintain packages install ansible-collection-redhat-satellite", "ansible-doc -l redhat.satellite", "ansible-doc redhat.satellite.activation_key", "hammer organization create --name \" My_Organization \" --label \" My_Organization_Label \" --description \" My_Organization_Description \"", "hammer organization update --name \" My_Organization \" --compute-resource-ids 1", "vi 'Default Organization-key-cert.pem'", "openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in cert.pem -inkey key.pem -out My_Organization_Label .pfx -name My_Organization", "https:// satellite.example.com /pulp/content/", "curl -k --cert cert.pem --key key.pem https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/content/dist/rhel/server/7/7Server/x86_64/os/", "hammer organization list", "hammer organization delete --id Organization_ID", "hammer location create --description \" My_Location_Description \" --name \" My_Location \" --parent-id \" My_Location_Parent_ID \"", "ORG=\" Example Organization \" LOCATIONS=\" London Munich Boston \" for LOC in USD{LOCATIONS} do hammer location create --name \"USD{LOC}\" hammer location add-organization --name \"USD{LOC}\" --organization \"USD{ORG}\" done", "hammer host list --location \" My_Location \"", "hammer location list", "hammer location delete --id Location ID", "hammer user create --auth-source-id My_Authentication_Source --login My_User_Name --mail My_User_Mail --organization-ids My_Organization_ID_1 , My_Organization_ID_2 --password My_User_Password", "hammer user add-role --id user_id --role role_name", "openssl rand -hex 32", "hammer user ssh-keys add --user-id user_id --name key_name --key-file ~/.ssh/id_rsa.pub", "hammer user ssh-keys add --user-id user_id --name key_name --key ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNtYAAABBBHHS2KmNyIYa27Qaa7EHp+2l99ucGStx4P77e03ZvE3yVRJEFikpoP3MJtYYfIe8k 1/46MTIZo9CPTX4CYUHeN8= host@user", "hammer user ssh-keys delete --id key_id --user-id user_id", "hammer user ssh-keys info --id key_id --user-id user_id", "hammer user ssh-keys list --user-id user_id", "curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token", "{\"satellite_version\":\"6.16.0\",\"result\":\"ok\",\"status\":200,\"version\":\"3.5.1.10\",\"api_version\":2}", "curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token", "{ \"error\": {\"message\":\"Unable to authenticate user My_Username \"} }", "hammer user-group create --name 
My_User_Group_Name --role-ids My_Role_ID_1 , My_Role_ID_2 --user-ids My_User_ID_1 , My_User_ID_2", "hammer role create --name My_Role_Name", "hammer filter available-permissions", "hammer filter create --permission-ids My_Permission_ID_1 , My_Permission_ID_2 --role My_Role_Name", "foreman-rake console", "f = File.open('/tmp/table.html', 'w') result = Foreman::AccessControl.permissions {|a,b| a.security_block <=> b.security_block}.collect do |p| actions = p.actions.collect { |a| \"<li>#{a}</li>\" } \"<tr><td>#{p.name}</td><td><ul>#{actions.join('')}</ul></td><td>#{p.resource_type}</td></tr>\" end.join(\"\\n\") f.write(result)", "<table border=\"1\"><tr><td>Permission name</td><td>Actions</td><td>Resource type</td></tr>", "</table>", "field_name operator value", "hammer filter create --permission-ids 91 --search \"name ~ ccv*\" --role qa-user", "hostgroup = host-editors", "name ^ (XXXX, Yyyy, zzzz)", "Dev", "postqueue: warning: Mail system is down -- accessing queue directly -Queue ID- --Size-- ----Arrival Time---- -Sender/Recipient------- BE68482A783 1922 Thu Oct 3 05:13:36 [email protected]", "systemctl start postfix", "foreman-rake reports:_My_Frequency_", "satellite-maintain service stop", "satellite-maintain service start", "du -sh /var/lib/pgsql/data /var/lib/pulp 100G /var/lib/pgsql/data 100G /var/lib/pulp du -csh /var/lib/tftpboot /etc /root/ssl-build /var/www/html/pub /opt/puppetlabs 16M /var/lib/tftpboot 37M /etc 900K /root/ssl-build 100K /var/www/html/pub 2M /opt/puppetlabs 942M total", "satellite-maintain backup offline --help", "satellite-maintain backup online --help", "satellite-maintain backup offline /var/satellite-backup", "satellite-maintain backup offline /var/foreman-proxy-backup", "satellite-maintain backup offline --skip-pulp-content /var/backup_directory", "satellite-maintain backup offline /var/backup_directory", "satellite-maintain backup offline --incremental /var/backup_directory/full_backup /var/backup_directory", "satellite-maintain backup offline --incremental /var/backup_directory/first_incremental_backup /var/backup_directory", "satellite-maintain backup offline --incremental /var/backup_directory/full_backup /var/backup_directory", "#!/bin/bash -e PATH=/sbin:/bin:/usr/sbin:/usr/bin DESTINATION=/var/backup_directory if [[ USD(date +%w) == 0 ]]; then satellite-maintain backup offline --assumeyes USDDESTINATION else LAST=USD(ls -td -- USDDESTINATION/*/ | head -n 1) satellite-maintain backup offline --assumeyes --incremental \"USDLAST\" USDDESTINATION fi exit 0", "satellite-maintain backup online /var/backup_directory", "satellite-maintain advanced procedure run -h", "satellite-maintain backup online --whitelist backup-metadata /var/backup_directory", "du -sh /var/backup_directory", "df -h /var/backup_directory", "restorecon -Rv /", "satellite-maintain restore /var/backup_directory", "satellite-maintain restore /var/backup_directory /FIRST_INCREMENTAL satellite-maintain restore /var/backup_directory /SECOND_INCREMENTAL", "satellite-change-hostname new-satellite --username My_Username --password My_Password", "satellite-change-hostname new-satellite --username My_Username --password My_Password --custom-cert \"/root/ownca/test.com/test.com.crt\" --custom-key \"/root/ownca/test.com/test.com.key\"", "satellite-installer --foreman-proxy-foreman-base-url https:// new-satellite.example.com --foreman-proxy-trusted-hosts new-satellite.example.com", "hammer capsule list", "hammer capsule content synchronize --id My_Capsule_ID", "capsule-certs-generate --certs-tar /root/ 
new-capsule.example.com-certs.tar --foreman-proxy-fqdn new-capsule.example.com", "scp /root/ new-capsule.example.com-certs.tar root@ capsule.example.com :", "satellite-change-hostname new-capsule.example.com --certs-tar /root/ new-capsule.example.com-certs.tar --password My_Password --username My_Username", "foreman-rake audits:expire days= Number_Of_Days", "foreman-rake audits:anonymize days=7", "foreman-rake reports:expire days=7", "satellite-installer --foreman-plugin-tasks-cron-line \"00 15 * * *\"", "satellite-installer --foreman-plugin-tasks-automatic-cleanup false", "satellite-installer --foreman-plugin-tasks-automatic-cleanup true", "foreman-rake foreman_tasks:cleanup TASK_SEARCH='label = Actions::Katello::Repository::Sync' STATES='stopped'", "ssh [email protected]", "hammer task info --id My_Task_ID", "foreman-rake foreman_tasks:cleanup TASK_SEARCH=\"id= My_Task_ID \"", "hammer task info --id My_Task_ID", "foreman-rake katello:delete_orphaned_content RAILS_ENV=production", "satellite-maintain service stop", "satellite-maintain service start", "satellite-maintain service stop --exclude postgresql", "su - postgres -c 'vacuumdb --full --all'", "satellite-maintain service start", "foreman-rake katello:delete_orphaned_content", "katello-certs-check -t satellite -b /root/ satellite_cert/ca_cert_bundle.pem -c /root/ satellite_cert/satellite_cert.pem -k /root/ satellite_cert/satellite_cert_key.pem", "satellite-installer --scenario satellite --certs-server-cert \"/root/ satellite_cert/satellite_cert.pem \" --certs-server-key \"/root/ satellite_cert/satellite_cert_key.pem \" --certs-server-ca-cert \"/root/ satellite_cert/ca_cert_bundle.pem \" --certs-update-server --certs-update-server-ca", "katello-certs-check -t capsule -b /root/ capsule_cert/ca_cert_bundle.pem -c /root/ capsule_cert/capsule_cert.pem -k /root/ capsule_cert/capsule_cert_key.pem", "capsule-certs-generate --certs-tar \" /root/My_Certificates/capsule.example.com-certs.tar \" --certs-update-server --foreman-proxy-fqdn \" capsule.example.com \" --server-ca-cert \" /root/My_Certificates/ca_cert_bundle.pem \" --server-cert \" /root/My_Certificates/capsule_cert.pem \" --server-key \" /root/My_Certificates/capsule_cert_key.pem \"", "scp /root/My_Certificates/capsule.example.com-certs.tar [email protected] :", "satellite-installer --scenario capsule --certs-tar-file \" /root/My_Certificates/capsule.example.com-certs.tar \" --certs-update-server --foreman-proxy-foreman-base-url \"https:// satellite.example.com \" --foreman-proxy-register-in-foreman \"true\"", "mkdir --parents /usr/share/foreman/.config/git", "touch /usr/share/foreman/.config/git/config", "chown --recursive foreman /usr/share/foreman/.config", "sudo --user foreman git config --global http.sslCAPath Path_To_CA_Certificate", "sudo --user foreman ssh-keygen", "sudo --user foreman ssh git.example.com", "<%# kind: provision name: My_Provisioning_Template oses: - My_first_Operating_System - My_second_Operating_System locations: - My_first_Location - My_second_Location organizations: - My_first_Organization - My_second_Organization %>", "mkdir /var/lib/foreman/ My_Templates_Dir", "chown foreman /var/lib/foreman/ My_Templates_Dir", "<%# kind: provision name: My_Provisioning_Template oses: - My_first_Operating_System - My_second_Operating_System locations: - My_first_Location - My_second_Location organizations: - My_first_Organization - My_second_Organization %>", "hammer import-templates --branch \" My_Branch \" --filter '.* Template NameUSD ' --organization \" 
My_Organization \" --prefix \"[ Custom Index ] \" --repo \" https://git.example.com/path/to/repository \"", "curl -H \"Accept:application/json\" -H \"Content-Type:application/json\" -u login : password -k https:// satellite.example.com /api/v2/templates/import -X POST", "hammer export-templates --organization \" My_Organization \" --repo \" https://git.example.com/path/to/repository \"", "curl -H \"Accept:application/json\" -H \"Content-Type:application/json\" -u login : password -k https:// satellite.example.com /api/v2/templates/export -X POST", "curl -H \"Accept:application/json\" -H \"Content-Type:application/json\" -u login:password -k https:// satellite.example.com /api/v2/templates/export -X POST -d \"{\\\"repo\\\":\\\"git.example.com/templates\\\"}\"", "satellite-installer --foreman-logging-level debug", "satellite-installer --reset-foreman-logging-level", "satellite-installer --full-help | grep logging", ":log_level: 'debug'", "satellite-installer --foreman-proxy-log-level DEBUG", "satellite-installer --reset-foreman-proxy-log-level", "satellite-installer --katello-candlepin-loggers log4j.logger.org.candlepin:DEBUG", "satellite-installer --katello-candlepin-loggers log4j.logger.org.candlepin:DEBUG --katello-candlepin-loggers log4j.logger.org.candlepin.resource.ConsumerResource:WARN --katello-candlepin-loggers log4j.logger.org.candlepin.resource.HypervisorResource:WARN", "satellite-installer --reset-katello-candlepin-loggers", "loglevel debug", "systemctl restart redis", "satellite-installer --verbose-log-level debug", "LOGGING = {\"dynaconf_merge\": True, \"loggers\": {'': {'handlers': ['console'], 'level': 'DEBUG'}}}", "systemctl restart pulpcore-api pulpcore-content pulpcore-resource-manager pulpcore-worker@1 pulpcore-worker@2 redis", "satellite-installer --puppet-agent-additional-settings log_level:debug", "satellite-installer --puppet-server-additional-settings log_level:debug", "satellite-maintain service restart --only puppetserver", "hammer admin logging --list", "hammer admin logging --all --level-debug satellite-maintain service restart", "hammer admin logging --all --level-production satellite-maintain service restart", "hammer admin logging --components My_Component --level-debug satellite-maintain service restart", "hammer admin logging --help", "satellite-installer --foreman-logging-type journald --foreman-proxy-log JOURNAL", "satellite-installer --reset-foreman-logging-type --reset-foreman-proxy-log", "satellite-installer --foreman-logging-layout json --foreman-logging-type file", "cat /var/log/foreman/production.log | jq", "satellite-installer --foreman-loggers ldap:true --foreman-loggers sql:true", "satellite-installer --reset-foreman-loggers", "hammer ping", "satellite-maintain service status", "satellite-maintain health check", "satellite-maintain service restart", "awk '/add_loggers/,/^USD/' /usr/share/foreman/config/application.rb", "There was an issue with the backend service candlepin: Connection refused - connect(2).", "foreman-rake audits:list_attributes", "{ \"text\": \"job invocation <%= @object.job_invocation_id %> finished with result <%= @object.task.result %>\" }", "{ \"text\": \"user with login <%= @object.login %> and email <%= @object.mail %> created\" }", "satellite-installer --enable-foreman-proxy-plugin-shellhooks", "{ \"X-Shellhook-Arg-1\": \" VALUE \", \"X-Shellhook-Arg-2\": \" VALUE \" }", "{ \"X-Shellhook-Arg-1\": \"<%= @object.content_view_version_id %>\", \"X-Shellhook-Arg-2\": \"<%= @object.content_view_name %>\" }", "\"X-Shellhook-Arg-1: 
VALUE \" \"X-Shellhook-Arg-2: VALUE \"", "curl --data \"\" --header \"Content-Type: text/plain\" --header \"X-Shellhook-Arg-1: Version 1.0\" --header \"X-Shellhook-Arg-2: My content view\" --request POST --show-error --silent https://capsule.example.com:9090/shellhook/My_Script", "#!/bin/sh # Prints all arguments to stderr # echo \"USD@\" >&2", "https:// capsule.example.com :9090/shellhook/print_args", "{ \"X-Shellhook-Arg-1\": \"Hello\", \"X-Shellhook-Arg-2\": \"World!\" }", "tail /var/log/foreman-proxy/proxy.log", "[I] Started POST /shellhook/print_args [I] Finished POST /shellhook/print_args with 200 (0.33 ms) [I] [3520] Started task /var/lib/foreman-proxy/shellhooks/print_args\\ Hello\\ World\\! [W] [3520] Hello World!", "parameter operator value", "satellite-maintain packages install My_Package", "satellite-maintain packages check-update", "satellite-maintain packages update", "satellite-maintain packages update My_Package", "satellite-maintain packages lock" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html-single/administering_red_hat_satellite/index
Chapter 9. Installation configuration parameters for vSphere
Chapter 9. Installation configuration parameters for vSphere Before you deploy an OpenShift Container Platform cluster on vSphere, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 9.1. Available installation configuration parameters for vSphere The following tables specify the required, optional, and vSphere-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 9.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 9.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 9.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. Note On VMware vSphere, dual-stack networking can specify either IPv4 or IPv6 as the primary address family. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. 
For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 9.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 9.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 9.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. 
While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. 
Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 9.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 9.4. Additional VMware vSphere cluster parameters Parameter Description Values Describes your account on the cloud platform that hosts your cluster. You can use the parameter to customize the platform. If you provide additional configuration settings for compute and control plane machines in the machine pool, the parameter is not required. You can only specify one vCenter server for your OpenShift Container Platform cluster. A dictionary of vSphere configuration objects Virtual IP (VIP) addresses that you configured for control plane API access. Note This parameter applies only to installer-provisioned infrastructure without an external load balancer configured. You must not specify this parameter in user-provisioned infrastructure. Multiple IP addresses Optional: The disk provisioning method. This value defaults to the vSphere default storage policy if not set. Valid values are thin , thick , or eagerZeroedThick . Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. An array of failure domain configuration objects. The name of the failure domain. String If you define multiple failure domains for your cluster, you must attach the tag to each vCenter datacenter. To define a region, use a tag from the openshift-region tag category. For a single vSphere datacenter environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as datacenter , for the parameter. String Specifies the fully-qualified hostname or IP address of the VMware vCenter server, so that a client can access failure domain resources. You must apply the server role to the vSphere vCenter server location. String If you define multiple failure domains for your cluster, you must attach a tag to each vCenter cluster. To define a zone, use a tag from the openshift-zone tag category. For a single vSphere datacenter environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as cluster , for the parameter. String The path to the vSphere compute cluster. 
String Lists and defines the datacenters where OpenShift Container Platform virtual machines (VMs) operate. The list of datacenters must match the list of datacenters specified in the vcenters field. String Specifies the path to a vSphere datastore that stores virtual machines files for a failure domain. You must apply the datastore role to the vSphere vCenter datastore location. String Optional: The absolute path of an existing folder where the user creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file. String Lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String Specifies the absolute path to a pre-existing Red Hat Enterprise Linux CoreOS (RHCOS) image template or virtual machine. The installation program can use the image template or virtual machine to quickly install RHCOS on vSphere hosts. Consider using this parameter as an alternative to uploading an RHCOS image on vSphere hosts. This parameter is available for use only on installer-provisioned infrastructure. String Virtual IP (VIP) addresses that you configured for cluster Ingress. Note This parameter applies only to installer-provisioned infrastructure without an external load balancer configured. You must not specify this parameter in user-provisioned infrastructure. Multiple IP addresses Configures the connection details so that services can communicate with a vCenter server. Currently, only a single vCenter server is supported. An array of vCenter configuration objects. Lists and defines the datacenters where OpenShift Container Platform virtual machines (VMs) operate. The list of datacenters must match the list of datacenters specified in the failureDomains field. String The password associated with the vSphere user. String The port number used to communicate with the vCenter server. Integer The fully qualified host name (FQHN) or IP address of the vCenter server. String The username associated with the vSphere user. String 9.1.5. Deprecated VMware vSphere configuration parameters In OpenShift Container Platform 4.13, the following vSphere configuration parameters are deprecated. You can continue to use these parameters, but the installation program does not automatically specify these parameters in the install-config.yaml file. The following table lists each deprecated vSphere configuration parameter: Table 9.5. Deprecated VMware vSphere cluster parameters Parameter Description Values The virtual IP (VIP) address that you configured for control plane API access. Note In OpenShift Container Platform 4.12 and later, the apiVIP configuration setting is deprecated. Instead, use a List format to enter a value in the apiVIPs configuration setting. 
An IP address, for example 128.0.0.1 . The vCenter cluster to install the OpenShift Container Platform cluster in. String Defines the datacenter where OpenShift Container Platform virtual machines (VMs) operate. String The name of the default datastore to use for provisioning volumes. String Optional: The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the data center virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . Virtual IP (VIP) addresses that you configured for cluster Ingress. Note In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a List format to enter a value in the ingressVIPs configuration setting. An IP address, for example 128.0.0.1 . The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String The password for the vCenter user name. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines. If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String The fully-qualified hostname or IP address of a vCenter server. String 9.1.6. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table: Table 9.6. Optional VMware vSphere machine pool parameters Parameter Description Values The location from which the installation program downloads the Red Hat Enterprise Linux CoreOS (RHCOS) image. Before setting a path value for this parameter, ensure that the default RHCOS boot image in the OpenShift Container Platform release matches the RHCOS image template or virtual machine version; otherwise, cluster installation might fail. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . The size of the disk in gigabytes. Integer The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of platform.vsphere.coresPerSocket value. Integer The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer The size of a virtual machine's memory in megabytes. Integer
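Because the tables above describe each key in isolation, it can help to see how the vSphere keys nest inside install-config.yaml . The following shell heredoc writes an illustrative platform.vsphere stanza to a scratch file for reference; every server name, path, address, and sizing value in it is a placeholder assumption rather than a value taken from this documentation, and the stanza is meant to be adapted and merged into your own install-config.yaml rather than used as-is.
# Write an illustrative platform.vsphere stanza to a scratch file.
# All names, paths, and addresses below are placeholders for this sketch.
cat <<'EOF' > /tmp/vsphere-platform-example.yaml
platform:
  vsphere:
    apiVIPs:
      - 192.168.110.10        # installer-provisioned infrastructure only
    ingressVIPs:
      - 192.168.110.11        # installer-provisioned infrastructure only
    diskType: thin            # thin, thick, or eagerZeroedThick
    failureDomains:
    - name: us-east-1
      region: us-east         # tag from the openshift-region tag category
      zone: us-east-1a        # tag from the openshift-zone tag category
      server: vcenter.example.com
      topology:
        datacenter: dc1
        computeCluster: /dc1/host/cluster1
        datastore: /dc1/datastore/datastore1
        networks:
        - VM_Network
        resourcePool: /dc1/host/cluster1/Resources/pool1
        folder: /dc1/vm/ocp-cluster
    vcenters:
    - server: vcenter.example.com
      user: [email protected]
      password: <password>
      port: 443
      datacenters:
      - dc1
    # Optional machine pool defaults from the table above
    cpus: 8
    coresPerSocket: 4
    memoryMB: 16384
    osDisk:
      diskSizeGB: 120
EOF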
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "platform: vsphere:", "platform: vsphere: apiVIPs:", "platform: vsphere: diskType:", "platform: vsphere: failureDomains:", "platform: vsphere: failureDomains: name:", "platform: vsphere: failureDomains: region:", "platform: vsphere: failureDomains: server:", "platform: vsphere: failureDomains: zone:", "platform: vsphere: failureDomains: topology: computeCluster:", "platform: vsphere: failureDomains: topology: datacenter:", "platform: vsphere: failureDomains: topology: datastore:", "platform: vsphere: failureDomains: topology: folder:", "platform: vsphere: failureDomains: topology: networks:", "platform: vsphere: failureDomains: topology: resourcePool:", "platform: vsphere: failureDomains: topology template:", "platform: vsphere: ingressVIPs:", "platform: vsphere: vcenters:", "platform: vsphere: vcenters: datacenters:", "platform: vsphere: vcenters: password:", "platform: vsphere: vcenters: port:", "platform: vsphere: vcenters: server:", "platform: vsphere: vcenters: user:", "platform: vsphere: apiVIP:", "platform: vsphere: cluster:", "platform: vsphere: datacenter:", "platform: vsphere: defaultDatastore:", "platform: vsphere: folder:", "platform: vsphere: ingressVIP:", "platform: vsphere: network:", "platform: vsphere: password:", "platform: vsphere: resourcePool:", "platform: vsphere: username:", "platform: vsphere: vCenter:", "platform: vsphere: clusterOSImage:", "platform: vsphere: osDisk: diskSizeGB:", "platform: vsphere: cpus:", "platform: vsphere: coresPerSocket:", "platform: vsphere: memoryMB:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_vsphere/installation-config-parameters-vsphere
Preface
Preface Standalone Manager installation is manual and customizable. You must install a Red Hat Enterprise Linux machine, then run the configuration script ( engine-setup ) and provide information about how you want to configure the Red Hat Virtualization Manager. Add hosts and storage after the Manager is running. At least two hosts are required for virtual machine high availability. In a local database environment, the Manager database and Data Warehouse database can be created automatically by the Manager configuration script. Alternatively, you can create these databases manually on the Manager machine before running engine-setup . See the Planning and Prerequisites Guide for information on environment options and recommended configuration. Table 1. Red Hat Virtualization Key Components Component Name Description Red Hat Virtualization Manager A service that provides a graphical user interface and a REST API to manage the resources in the environment. The Manager is installed on a physical or virtual machine running Red Hat Enterprise Linux. Hosts Red Hat Enterprise Linux hosts (RHEL hosts) and Red Hat Virtualization Hosts (image-based hypervisors) are the two supported types of host. Hosts use Kernel-based Virtual Machine (KVM) technology and provide resources used to run virtual machines. Shared Storage A storage service is used to store the data associated with virtual machines. Data Warehouse A service that collects configuration information and statistical data from the Manager. Standalone Manager Architecture The Red Hat Virtualization Manager runs on a physical server, or a virtual machine hosted in a separate virtualization environment. A standalone Manager is easier to deploy and manage, but requires an additional physical server. The Manager is only highly available when managed externally with a product such as Red Hat's High Availability Add-On. The minimum setup for a standalone Manager environment includes: One Red Hat Virtualization Manager machine. The Manager is typically deployed on a physical server. However, it can also be deployed on a virtual machine, as long as that virtual machine is hosted in a separate environment. The Manager must run on Red Hat Enterprise Linux 7. A minimum of two hosts for virtual machine high availability. You can use Red Hat Enterprise Linux hosts or Red Hat Virtualization Hosts (RHVH). VDSM (the host agent) runs on all hosts to facilitate communication with the Red Hat Virtualization Manager. One storage service, which can be hosted locally or on a remote server, depending on the storage type used. The storage service must be accessible to all hosts. Figure 1. Standalone Manager Red Hat Virtualization Architecture
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/pr01
Chapter 4. Introduction to devfile in Dev Spaces
Chapter 4. Introduction to devfile in Dev Spaces Devfiles are YAML text files used for development environment customization. Use them to configure a devfile to suit your specific needs and share the customized devfile across multiple workspaces to ensure an identical user experience and identical build, run, and deploy behavior across your team. Red Hat OpenShift Dev Spaces-specific devfile features Red Hat OpenShift Dev Spaces is expected to work with most of the popular images defined in the components section of a devfile. For production purposes, it is recommended to use one of the Universal Base Images as a base image for defining the Cloud Development Environment. Warning Some images cannot be used as-is for defining a Cloud Development Environment because Visual Studio Code - Open Source ("Code - OSS") cannot be started in containers that are missing openssl and libbrotli . Missing libraries should be explicitly installed at the Dockerfile level, for example: RUN yum install compat-openssl11 libbrotli Devfile and Universal Developer Image You do not need a devfile to start a workspace. If you do not include a devfile in your project repository, Red Hat OpenShift Dev Spaces automatically loads a default devfile with a Universal Developer Image (UDI). Devfile Registry The Devfile Registry contains ready-to-use community-supported devfiles for different languages and technologies. Devfiles included in the registry should be treated as samples rather than templates. Additional resources What is a devfile Benefits of devfile Devfile customization overview Devfile.io Customizing Cloud Development Environments
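A minimal devfile sketch can make the concepts above concrete. The following shell heredoc writes an illustrative devfile.yaml into a project root; the UDI image tag, memory limit, and Maven build command are assumptions chosen for this example rather than values required by Dev Spaces. When such a file is present in the repository root, the workspace starts from it instead of the default devfile.
# Write a minimal, illustrative devfile to the project root.
# The image tag and the build command below are assumptions for this sketch.
cat <<'EOF' > devfile.yaml
schemaVersion: 2.2.0
metadata:
  name: my-sample-workspace
components:
  - name: tools
    container:
      image: quay.io/devfile/universal-developer-image:ubi8-latest
      memoryLimit: 2Gi
commands:
  - id: build
    exec:
      component: tools
      commandLine: mvn -B package
      workingDir: ${PROJECT_SOURCE}
EOF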
null
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.19/html/user_guide/devfile-introduction
3.9. Devices
3.9. Devices ipmitool component Not specifying the -N option when setting retransmission intervals of IPMI messages over the LAN or LANplus interface may cause various error messages to be returned. For example: ipmitool component The ipmitool may crash in certain cases. For example, when an incorrect password is used, a segmentation fault occurs: kernel component Unloading the be2net driver with a Virtual Function (VF) attached to a virtual guest results in a kernel panic. kernel component The Brocade BFA Fibre Channel and FCoE driver does not currently support dynamic recognition of Logical Unit addition or removal using the sg3_utils utilities (for example, the sg_scan command) or similar functionality. Please consult Brocade directly for a Brocade equivalent of this functionality. kernel component iSCSI and FCoE boot support on Broadcom devices is not included in Red Hat Enterprise Linux 6.3. These two features, which are provided by the bnx2i and bnx2fc Broadcom drivers, remain a Technology Preview until further notice. kexec-tools component In Red Hat Enterprise Linux 6.0 and later, kexec kdump supports dumping core to the Btrfs file system. However, note that because the findfs utility in busybox does not support Btrfs yet, UUID/LABEL resolving is not functional. Avoid using the UUID/LABEL syntax when dumping core to Btrfs file systems. busybox component When running kdump in a busybox environment and dumping to a Btrfs file system, you may receive the following error message: However, Btrfs is supported as a kdump target. To work around this issue, install the btrfs-progs package, verify that the /sbin/btrfsck file exists, and retry. trace-cmd component The trace-cmd service does not start on 64-bit PowerPC and IBM System z systems because the sys_enter and sys_exit events do not get enabled on the aforementioned systems. trace-cmd component trace-cmd 's subcommand, report , does not work on IBM System z systems. This is because the CONFIG_FTRACE_SYSCALLS parameter is not set on IBM System z systems. tuned component Red Hat Enterprise Linux 6.1 and later enter processor power-saving states more aggressively. This may result in a small performance penalty on certain workloads. This functionality may be disabled at boot time by passing the intel_idle.max_cstate=0 parameter (see the example after this section), or at run time by using the cpu_dma_latency pm_qos interface. libfprint component Red Hat Enterprise Linux 6 only has support for the first revision of the UPEK Touchstrip fingerprint reader (USB ID 147e:2016). Attempting to use a second revision device may cause the fingerprint reader daemon to crash. The following command returns the version of the device being used in an individual machine: kernel component The Emulex Fibre Channel/Fibre Channel-over-Ethernet (FCoE) driver in Red Hat Enterprise Linux 6 does not support DH-CHAP authentication. DH-CHAP authentication provides secure access between hosts and mass storage in Fibre-Channel and FCoE SANs in compliance with the FC-SP specification. Note, however, that the Emulex driver ( lpfc ) does support DH-CHAP authentication on Red Hat Enterprise Linux 5, from version 5.4. Future Red Hat Enterprise Linux 6 releases may include DH-CHAP authentication. kernel component The recommended minimum HBA firmware revision for use with the mpt2sas driver is "Phase 5 firmware" (that is, with version number in the form 05.xx.xx.xx ). Note that following this recommendation is especially important on complex SAS configurations involving multiple SAS expanders.
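For the tuned entry above, the following is a minimal sketch of how the boot-time workaround could be applied. It assumes the grubby utility is available and that the default boot loader configuration is in use; confirm the resulting kernel command line against your own configuration before relying on it.
# Append the power-saving workaround to every installed kernel entry (assumes grubby is available).
grubby --update-kernel=ALL --args="intel_idle.max_cstate=0"
# After the next reboot, verify that the parameter is active.
grep intel_idle.max_cstate /proc/cmdline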
[ "~]# ipmitool -I lanplus -H USDHOST -U root -P USDPASS sensor list Unable to renew SDR reservation Close Session command failed: Reservation cancelled or invalid ~]# ipmitool -I lanplus -H USDHOST -U root -P USDPASS delloem powermonitor Error getting power management information, return code c1 Close Session command failed: Invalid command", "~]# ipmitool -I lanplus -H USDHOST -U root -P wrongpass delloem powermonitor Error: Unable to establish IPMI v2 / RMCP+ session Segmentation fault (core dumped)", "/etc/kdump.conf: Unsupported type btrfs", "~]USD lsusb -v -d 147e:2016 | grep bcdDevice" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/devices_issues
3.2. Upgrading a Remote Database Environment from Red Hat Virtualization 4.2 to 4.3
3.2. Upgrading a Remote Database Environment from Red Hat Virtualization 4.2 to 4.3 Upgrading your environment from 4.2 to 4.3 involves the following steps: Make sure you meet the prerequisites, including enabling the correct repositories Use the Log Collection Analysis tool and Image Discrepancies tool to check for issues that might prevent a successful upgrade Update the 4.2 Manager to the latest version of 4.2 Upgrade the database from PostgreSQL 9.5 to 10 Upgrade the Manager from 4.2 to 4.3 Update the hosts Update the compatibility version of the clusters Reboot any running or suspended virtual machines to update their configuration Update the compatibility version of the data centers If you previously upgraded to 4.2 without replacing SHA-1 certificates with SHA-256 certificates, you must replace the certificates now . 3.2.1. Prerequisites Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes. Ensure your environment meets the requirements for Red Hat Virtualization 4.3. For a complete list of prerequisites, see the Planning and Prerequisites Guide . When upgrading Red Hat Virtualization Manager, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure. 3.2.2. Analyzing the Environment It is recommended to run the Log Collection Analysis tool and the Image Discrepancies tool prior to performing updates and for troubleshooting. These tools analyze your environment for known issues that might prevent you from performing an update, and provide recommendations to resolve them. 3.2.3. Log Collection Analysis tool Run the Log Collection Analysis tool prior to performing updates and for troubleshooting. The tool analyzes your environment for known issues that might prevent you from performing an update, and provides recommendations to resolve them. The tool gathers detailed information about your system and presents it as an HTML file. Prerequisites Ensure the Manager has the correct repositories enabled. For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.2. Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network. Procedure Install the Log Collection Analysis tool on the Manager machine: Run the tool: A detailed report is displayed. By default, the report is saved to a file called analyzer_report.html . To save the file to a specific location, use the --html flag and specify the location: # rhv-log-collector-analyzer --live --html=/ directory / filename .html You can use the ELinks text mode web browser to read the analyzer reports within the terminal. To install the ELinks browser: Launch ELinks and open analyzer_report.html . To navigate the report, use the following commands in ELinks: Insert to scroll up Delete to scroll down PageUp to page up PageDown to page down Left Bracket to scroll left Right Bracket to scroll right 3.2.3.1. Monitoring snapshot health with the image discrepancies tool The RHV Image Discrepancies tool analyzes image data in the Storage Domain and RHV Database.
It alerts you if it finds discrepancies in volumes and volume attributes, but does not fix those discrepancies. Use this tool in a variety of scenarios, such as: Before upgrading versions, to avoid carrying over broken volumes or chains to the new version. Following a failed storage operation, to detect volumes or attributes in a bad state. After restoring the RHV database or storage from backup. Periodically, to detect potential problems before they worsen. To analyze a snapshot- or live storage migration-related issues, and to verify system health after fixing these types of problems. Prerequisites Required Versions: this tool was introduced in RHV version 4.3.8 with rhv-log-collector-analyzer-0.2.15-0.el7ev . Because data collection runs simultaneously at different places and is not atomic, stop all activity in the environment that can modify the storage domains. That is, do not create or remove snapshots, edit, move, create, or remove disks. Otherwise, false detection of inconsistencies may occur. Virtual Machines can remain running normally during the process. Procedure To run the tool, enter the following command on the RHV Manager: If the tool finds discrepancies, rerun it to confirm the results, especially if there is a chance some operations were performed while the tool was running. Note This tool includes any Export and ISO storage domains and may report discrepancies for them. If so, these can be ignored, as these storage domains do not have entries for images in the RHV database. Understanding the results The tool reports the following: If there are volumes that appear on the storage but are not in the database, or appear in the database but are not on the storage. If some volume attributes differ between the storage and the database. Sample output: You can now update the Manager to the latest version of 4.2. 3.2.4. Updating the Red Hat Virtualization Manager Prerequisites Ensure the Manager has the correct repositories enabled. For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.2. Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network. Procedure On the Manager machine, check if updated packages are available: Update the setup packages: # yum update ovirt\*setup\* rh\*vm-setup-plugins Update the Red Hat Virtualization Manager with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service. When the script completes successfully, the following message appears: Note The engine-setup script is also used during the Red Hat Virtualization Manager installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup . Important The update process might take some time. Do not stop the process before it completes. 
Update the base operating system and any optional packages installed on the Manager: Important If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict) . Important If any kernel packages were updated, reboot the machine to complete the update. 3.2.5. Upgrading remote databases from PostgreSQL 9.5 to 10 Red Hat Virtualization 4.3 uses PostgreSQL 10 instead of PostgreSQL 9.5. If your databases are installed locally, the upgrade script automatically upgrades them from version 9.5 to 10. However, if either of your databases (Manager or Data Warehouse) is installed on a separate machine, you must perform the following procedure on each remote database before upgrading the Manager. Stop the service running on the machine: When upgrading the Manager database, stop the ovirt-engine service on the Manager machine: # systemctl stop ovirt-engine When upgrading the Data Warehouse database, stop the ovirt-engine-dwhd service on the Data Warehouse machine: # systemctl stop ovirt-engine-dwhd Enable the required repository to receive the PostgreSQL 10 package: Enable either the Red Hat Virtualization Manager repository: # subscription-manager repos --enable=rhel-7-server-rhv-4.3-manager-rpms or the SCL repository: # subscription-manager repos --enable rhel-server-rhscl-7-rpms Install the PostgreSQL 10 packages: Stop and disable the PostgreSQL 9.5 service: Upgrade the PostgreSQL 9.5 database to PostgreSQL 10: Start and enable the rh-postgresql10-postgresql.service and check that it is running: Ensure that you see output similar to the following: Copy the pg_hba.conf client configuration file from the PostgreSQL 9.5 environment to the PostgreSQL 10 environment: # cp -p /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf Update the following parameters in /var/opt/rh/rh-postgresql10/lib/pgsql/data/postgresql.conf : listen_addresses='*' autovacuum_vacuum_scale_factor=0.01 autovacuum_analyze_scale_factor=0.075 autovacuum_max_workers=6 maintenance_work_mem=65536 max_connections=150 work_mem = 8192 Restart the PostgreSQL 10 service to apply the configuration changes: You can now upgrade the Manager to 4.3. 3.2.6. Upgrading the Red Hat Virtualization Manager from 4.2 to 4.3 Follow these same steps when upgrading any of the following: the Red Hat Virtualization Manager a remote machine with the Data Warehouse service You need to be logged into the machine that you are upgrading. Important If the upgrade fails, the engine-setup command attempts to restore your Red Hat Virtualization Manager installation to its previous state. For this reason, do not remove the previous version's repositories until after the upgrade is complete. If the upgrade fails, the engine-setup script explains how to restore your installation. Procedure Enable the Red Hat Virtualization 4.3 repositories: # subscription-manager repos \ --enable=rhel-7-server-rhv-4.3-manager-rpms \ --enable=jb-eap-7.2-for-rhel-7-server-rpms All other repositories remain the same across Red Hat Virtualization releases. Update the setup packages: # yum update ovirt\*setup\* rh\*vm-setup-plugins Run engine-setup and follow the prompts to upgrade the Red Hat Virtualization Manager, the remote database or remote service: # engine-setup Note During the upgrade process for the Manager, the engine-setup script might prompt you to disconnect the remote Data Warehouse database. You must disconnect it to continue the setup.
When the script completes successfully, the following message appears: Execution of setup completed successfully Disable the Red Hat Virtualization 4.2 repositories to ensure the system does not use any 4.2 packages: # subscription-manager repos \ --disable=rhel-7-server-rhv-4.2-manager-rpms \ --disable=jb-eap-7-for-rhel-7-server-rpms Update the base operating system: # yum update Important If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict) . Important If any kernel packages were updated, reboot the machine to complete the upgrade. The Manager is now upgraded to version 4.3. 3.2.6.1. Completing the remote Data Warehouse database upgrade Complete these additional steps when upgrading a remote Data Warehouse database from PostgreSQL 9.5 to 10. Procedure The ovirt-engine-dwhd service is now running on the Manager machine. If the ovirt-engine-dwhd service is on a remote machine, stop and disable the ovirt-engine-dwhd service on the Manager machine, and remove the configuration files that engine-setup created: # systemctl stop ovirt-engine-dwhd # systemctl disable ovirt-engine-dwhd # rm -f /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/* Repeat the steps in Upgrading the Manager to 4.3 on the machine hosting the ovirt-engine-dwhd service. You can now update the hosts. 3.2.7. Updating All Hosts in a Cluster You can update all hosts in a cluster instead of updating hosts individually. This is particularly useful during upgrades to new versions of Red Hat Virtualization. See oVirt Cluster Upgrade for more information about the Ansible role used to automate the updates. Update one cluster at a time. Limitations On RHVH, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update. If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster. In a self-hosted engine environment, the Manager virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts. The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts. You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines are shut down during the update, unless you choose to skip that host instead. Procedure In the Administration Portal, click Compute Clusters and select the cluster. The Upgrade status column shows if an upgrade is available for any hosts in the cluster. Click Upgrade . Select the hosts to update, then click . Configure the options: Stop Pinned VMs shuts down any virtual machines that are pinned to hosts in the cluster, and is selected by default. You can clear this check box to skip updating those hosts so that the pinned virtual machines stay running, such as when a pinned virtual machine is running important services or processes and you do not want it to shut down at an unknown time during the update. Upgrade Timeout (Minutes) sets the time to wait for an individual host to be updated before the cluster upgrade fails with a timeout. The default is 60 . You can increase it for large clusters where 60 minutes might not be enough, or reduce it for small clusters where the hosts update quickly. 
Check Upgrade checks each host for available updates before running the upgrade process. It is not selected by default, but you can select it if you need to ensure that recent updates are included, such as when you have configured the Manager to check for host updates less frequently than the default. Reboot After Upgrade reboots each host after it is updated, and is selected by default. You can clear this check box to speed up the process if you are sure that there are no pending updates that require a host reboot. Use Maintenance Policy sets the cluster's scheduling policy to cluster_maintenance during the update. It is selected by default, so activity is limited and virtual machines cannot start unless they are highly available. You can clear this check box if you have a custom scheduling policy that you want to keep using during the update, but this could have unknown consequences. Ensure your custom policy is compatible with cluster upgrade activity before disabling this option. Click Next . Review the summary of the hosts and virtual machines that will be affected. Click Upgrade . You can track the progress of host updates: in the Compute Clusters view, the Upgrade Status column shows Upgrade in progress . in the Compute Hosts view in the Events section of the Notification Drawer. You can track the progress of individual virtual machine migrations in the Status column of the Compute Virtual Machines view. In large environments, you may need to filter the results to show a particular group of virtual machines. 3.2.8. Changing the Cluster Compatibility Version Red Hat Virtualization clusters have a compatibility version. The cluster compatibility version indicates the features of Red Hat Virtualization supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster. Prerequisites To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available. Limitations Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. Red Hat recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection. If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for the 4.5 compatibility version, before upgrading the cluster. Procedure In the Administration Portal, click Compute Clusters . Select the cluster to change and click Edit . On the General tab, change the Compatibility Version to the desired value. Click OK . The Change Cluster Compatibility Version confirmation dialog opens. Click OK to confirm. Important An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine's configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version. 3.2.9.
Changing Virtual Machine Cluster Compatibility After updating a cluster's compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon. Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the previous configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes. Procedure In the Administration Portal, click Compute Virtual Machines . Check which virtual machines require a reboot. In the Vms: search bar, enter the following query: next_run_config_exists=True The search results show all virtual machines with pending changes. Select each virtual machine and click Restart . Alternatively, if necessary, you can reboot a virtual machine from within the virtual machine itself. When the virtual machine starts, the new compatibility version is automatically applied. Note You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview. 3.2.10. Changing the Data Center Compatibility Version Red Hat Virtualization data centers have a compatibility version. The compatibility version indicates the version of Red Hat Virtualization with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level. Prerequisites To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center. Procedure In the Administration Portal, click Compute Data Centers . Select the data center to change and click Edit . Change the Compatibility Version to the desired value. Click OK . The Change Data Center Compatibility Version confirmation dialog opens. Click OK to confirm. If you previously upgraded to 4.2 without replacing SHA-1 certificates with SHA-256 certificates, you must do so now. 3.2.11. Replacing SHA-1 Certificates with SHA-256 Certificates Red Hat Virtualization 4.3 uses SHA-256 signatures, which provide a more secure way to sign SSL certificates than SHA-1. Newly installed systems do not require any special steps to enable Red Hat Virtualization's public key infrastructure (PKI) to use SHA-256 signatures. Warning Do NOT let certificates expire. If they expire, the environment becomes non-responsive and recovery is an error-prone and time-consuming process. For information on renewing certificates, see Renewing certificates before they expire in the Administration Guide . Preventing Warning Messages from Appearing in the Browser Log in to the Manager machine as the root user.
Check whether /etc/pki/ovirt-engine/openssl.conf includes the line default_md = sha256 : # cat /etc/pki/ovirt-engine/openssl.conf If it still includes default_md = sha1 , back up the existing configuration and change the default to sha256 : # cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf."USD(date +"%Y%m%d%H%M%S")" # sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf Define the certificate that should be re-signed: # names="apache" On the Manager, save a backup of the /etc/ovirt-engine/engine.conf.d and /etc/pki/ovirt-engine directories, and re-sign the certificates: # . /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf # for name in USDnames; do subject="USD( openssl \ x509 \ -in /etc/pki/ovirt-engine/certs/"USD{name}".cer \ -noout \ -subject \ -nameopt compat \ | sed \ 's;subject=\(.*\);\1;' \ )" /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh \ --name="USD{name}" \ --password=mypass \ <1> --subject="USD{subject}" \ --san=DNS:"USD{ENGINE_FQDN}" \ --keep-key done Do not change this the password value. Restart the httpd service: # systemctl restart httpd Connect to the Administration Portal to confirm that the warning no longer appears. If you previously imported a CA or https certificate into the browser, find the certificate(s), remove them from the browser, and reimport the new CA certificate. Install the certificate authority according to the instructions provided by your browser. To get the certificate authority's certificate, navigate to http:// your-manager-fqdn /ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA , replacing your-manager-fqdn with the fully qualified domain name (FQDN). Replacing All Signed Certificates with SHA-256 Log in to the Manager machine as the root user. Check whether /etc/pki/ovirt-engine/openssl.conf includes the line default_md = sha256 : # cat /etc/pki/ovirt-engine/openssl.conf If it still includes default_md = sha1 , back up the existing configuration and change the default to sha256 : # cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf."USD(date +"%Y%m%d%H%M%S")" # sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf Re-sign the CA certificate by backing it up and creating a new certificate in ca.pem.new : # cp -p /etc/pki/ovirt-engine/private/ca.pem /etc/pki/ovirt-engine/private/ca.pem."USD(date +"%Y%m%d%H%M%S")" # openssl x509 -signkey /etc/pki/ovirt-engine/private/ca.pem -in /etc/pki/ovirt-engine/ca.pem -out /etc/pki/ovirt-engine/ca.pem.new -days 3650 -sha256 Replace the existing certificate with the new certificate: # mv /etc/pki/ovirt-engine/ca.pem.new /etc/pki/ovirt-engine/ca.pem Define the certificates that should be re-signed: # names="engine apache websocket-proxy jboss imageio-proxy" If you replaced the Red Hat Virtualization Manager SSL Certificate after the upgrade, run the following instead: # names="engine websocket-proxy jboss imageio-proxy" For more details see Replacing the Red Hat Virtualization Manager CA Certificate in the Administration Guide . On the Manager, save a backup of the /etc/ovirt-engine/engine.conf.d and /etc/pki/ovirt-engine directories, and re-sign the certificates: # . 
/etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf # for name in USDnames; do subject="USD( openssl \ x509 \ -in /etc/pki/ovirt-engine/certs/"USD{name}".cer \ -noout \ -subject \ -nameopt compat \ | sed \ 's;subject=\(.*\);\1;' \ )" /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh \ --name="USD{name}" \ --password=mypass \ <1> --subject="USD{subject}" \ --san=DNS:"USD{ENGINE_FQDN}" \ --keep-key done Do not change the password value. Restart the following services: # systemctl restart httpd # systemctl restart ovirt-engine # systemctl restart ovirt-websocket-proxy # systemctl restart ovirt-imageio Connect to the Administration Portal to confirm that the warning no longer appears. If you previously imported a CA or https certificate into the browser, find the certificate(s), remove them from the browser, and reimport the new CA certificate. Install the certificate authority according to the instructions provided by your browser. To get the certificate authority's certificate, navigate to http:// your-manager-fqdn /ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA , replacing your-manager-fqdn with the fully qualified domain name (FQDN). Enroll the certificates on the hosts. Repeat the following procedure for each host. In the Administration Portal, click Compute Hosts . Select the host and click Management Maintenance and OK . Once the host is in maintenance mode, click Installation Enroll Certificate . Click Management Activate .
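After re-signing, a quick check of the signature algorithm on one of the regenerated certificates can confirm that SHA-256 is now in use. The following sketch inspects the Apache certificate at the path used in the procedures above; the exact wording of the openssl text output may vary slightly between versions.
# Confirm that the re-signed certificate uses a SHA-256 signature.
openssl x509 -in /etc/pki/ovirt-engine/certs/apache.cer -noout -text | grep "Signature Algorithm"
# Expected output (shown once per section of the certificate):
#   Signature Algorithm: sha256WithRSAEncryption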
[ "yum install rhv-log-collector-analyzer", "rhv-log-collector-analyzer --live", "rhv-log-collector-analyzer --live --html=/ directory / filename .html", "yum install -y elinks", "elinks /home/user1/analyzer_report.html", "rhv-image-discrepancies", "Checking storage domain c277ad93-0973-43d9-a0ca-22199bc8e801 Looking for missing images No missing images found Checking discrepancies between SD/DB attributes image ef325650-4b39-43cf-9e00-62b9f7659020 has a different attribute capacity on storage(2696984576) and on DB(2696986624) image 852613ce-79ee-4adc-a56a-ea650dcb4cfa has a different attribute capacity on storage(5424252928) and on DB(5424254976) Checking storage domain c64637b4-f0e8-408c-b8af-6a52946113e2 Looking for missing images No missing images found Checking discrepancies between SD/DB attributes No discrepancies found", "engine-upgrade-check", "yum update ovirt\\*setup\\* rh\\*vm-setup-plugins", "engine-setup", "Execution of setup completed successfully", "yum update --nobest", "systemctl stop ovirt-engine", "systemctl stop ovirt-engine-dwhd", "subscription-manager repos --enable=rhel-7-server-rhv-4.3-manager-rpms", "subscription-manager repos --enable rhel-server-rhscl-7-rpms", "yum install rh-postgresql10 rh-postgresql10-postgresql-contrib", "systemctl stop rh-postgresql95-postgresql systemctl disable rh-postgresql95-postgresql", "scl enable rh-postgresql10 -- postgresql-setup --upgrade-from=rh-postgresql95-postgresql --upgrade", "systemctl start rh-postgresql10-postgresql.service systemctl enable rh-postgresql10-postgresql.service systemctl status rh-postgresql10-postgresql.service", "rh-postgresql10-postgresql.service - PostgreSQL database server Loaded: loaded (/usr/lib/systemd/system/rh-postgresql10-postgresql.service; enabled; vendor preset: disabled) Active: active (running) since", "cp -p /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf", "listen_addresses='*' autovacuum_vacuum_scale_factor=0.01 autovacuum_analyze_scale_factor=0.075 autovacuum_max_workers=6 maintenance_work_mem=65536 max_connections=150 work_mem = 8192", "systemctl restart rh-postgresql10-postgresql.service", "subscription-manager repos --enable=rhel-7-server-rhv-4.3-manager-rpms --enable=jb-eap-7.2-for-rhel-7-server-rpms", "yum update ovirt\\*setup\\* rh\\*vm-setup-plugins", "engine-setup", "Execution of setup completed successfully", "subscription-manager repos --disable=rhel-7-server-rhv-4.2-manager-rpms --disable=jb-eap-7-for-rhel-7-server-rpms", "yum update", "systemctl stop ovirt-engine-dwhd systemctl disable ovirt-engine-dwhd rm -f /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/*", "next_run_config_exists=True", "cat /etc/pki/ovirt-engine/openssl.conf", "cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf.\"USD(date +\"%Y%m%d%H%M%S\")\" sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf", "names=\"apache\"", ". 
/etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf for name in USDnames; do subject=\"USD( openssl x509 -in /etc/pki/ovirt-engine/certs/\"USD{name}\".cer -noout -subject -nameopt compat | sed 's;subject=\\(.*\\);\\1;' )\" /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh --name=\"USD{name}\" --password=mypass \\ <1> --subject=\"USD{subject}\" --san=DNS:\"USD{ENGINE_FQDN}\" --keep-key done", "systemctl restart httpd", "cat /etc/pki/ovirt-engine/openssl.conf", "cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf.\"USD(date +\"%Y%m%d%H%M%S\")\" sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf", "cp -p /etc/pki/ovirt-engine/private/ca.pem /etc/pki/ovirt-engine/private/ca.pem.\"USD(date +\"%Y%m%d%H%M%S\")\" openssl x509 -signkey /etc/pki/ovirt-engine/private/ca.pem -in /etc/pki/ovirt-engine/ca.pem -out /etc/pki/ovirt-engine/ca.pem.new -days 3650 -sha256", "mv /etc/pki/ovirt-engine/ca.pem.new /etc/pki/ovirt-engine/ca.pem", "names=\"engine apache websocket-proxy jboss imageio-proxy\"", "names=\"engine websocket-proxy jboss imageio-proxy\"", ". /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf for name in USDnames; do subject=\"USD( openssl x509 -in /etc/pki/ovirt-engine/certs/\"USD{name}\".cer -noout -subject -nameopt compat | sed 's;subject=\\(.*\\);\\1;' )\" /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh --name=\"USD{name}\" --password=mypass \\ <1> --subject=\"USD{subject}\" --san=DNS:\"USD{ENGINE_FQDN}\" --keep-key done", "systemctl restart httpd systemctl restart ovirt-engine systemctl restart ovirt-websocket-proxy systemctl restart ovirt-imageio" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/upgrade_guide/Remote_Upgrading_from_4-2
Chapter 2. Configuring an AWS account
Chapter 2. Configuring an AWS account Before you can install OpenShift Container Platform, you must configure an Amazon Web Services (AWS) account. 2.1. Configuring Route 53 To install OpenShift Container Platform, the Amazon Web Services (AWS) account you use must have a dedicated public hosted zone in your Route 53 service. This zone must be authoritative for the domain. The Route 53 service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through AWS or another source. Note If you purchase a new domain through AWS, it takes time for the relevant DNS changes to propagate. For more information about purchasing domains through AWS, see Registering Domain Names Using Amazon Route 53 in the AWS documentation. If you are using an existing domain and registrar, migrate its DNS to AWS. See Making Amazon Route 53 the DNS Service for an Existing Domain in the AWS documentation. Create a public hosted zone for your domain or subdomain. See Creating a Public Hosted Zone in the AWS documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Getting the Name Servers for a Public Hosted Zone in the AWS documentation. Update the registrar records for the AWS Route 53 name servers that your domain uses. For example, if you registered your domain to a Route 53 service in a different accounts, see the following topic in the AWS documentation: Adding or Changing Name Servers or Glue Records . If you are using a subdomain, add its delegation records to the parent domain. This gives Amazon Route 53 responsibility for the subdomain. Follow the delegation procedure outlined by the DNS provider of the parent domain. See Creating a subdomain that uses Amazon Route 53 as the DNS service without migrating the parent domain in the AWS documentation for an example high level procedure. 2.1.1. Ingress Operator endpoint configuration for AWS Route 53 If you install in either Amazon Web Services (AWS) GovCloud (US) US-West or US-East region, the Ingress Operator uses us-gov-west-1 region for Route53 and tagging API clients. The Ingress Operator uses https://tagging.us-gov-west-1.amazonaws.com as the tagging API endpoint if a tagging custom endpoint is configured that includes the string 'us-gov-east-1'. For more information on AWS GovCloud (US) endpoints, see the Service Endpoints in the AWS documentation about GovCloud (US). Important Private, disconnected installations are not supported for AWS GovCloud when you install in the us-gov-east-1 region. Example Route 53 configuration platform: aws: region: us-gov-west-1 serviceEndpoints: - name: ec2 url: https://ec2.us-gov-west-1.amazonaws.com - name: elasticloadbalancing url: https://elasticloadbalancing.us-gov-west-1.amazonaws.com - name: route53 url: https://route53.us-gov.amazonaws.com 1 - name: tagging url: https://tagging.us-gov-west-1.amazonaws.com 2 1 Route 53 defaults to https://route53.us-gov.amazonaws.com for both AWS GovCloud (US) regions. 2 Only the US-West region has endpoints for tagging. Omit this parameter if your cluster is in another region. 2.2. 
AWS account limits The OpenShift Container Platform cluster uses a number of Amazon Web Services (AWS) components, and the default Service Limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain AWS regions, or run multiple clusters from your account, you might need to request additional resources for your AWS account. The following table summarizes the AWS components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of clusters available by default Default AWS limit Description Instance Limits Varies Varies By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane nodes Three worker nodes These instance type counts are within a new account's default limit. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, review your account limits to ensure that your cluster can deploy the machines that you need. In most regions, the worker machines use an m6i.large instance and the bootstrap and control plane machines use m6i.xlarge instances. In some regions, including all regions that do not support these instance types, m5.large and m5.xlarge instances are used instead. Elastic IPs (EIPs) 0 to 1 5 EIPs per account To provision the cluster in a highly available configuration, the installation program creates a public and private subnet for each availability zone within a region . Each private subnet requires a NAT Gateway , and each NAT gateway requires a separate elastic IP . Review the AWS region map to determine how many availability zones are in each region. To take advantage of the default high availability, install the cluster in a region with at least three availability zones. To install a cluster in a region with more than five availability zones, you must increase the EIP limit. Important To use the us-east-1 region, you must increase the EIP limit for your account. Virtual Private Clouds (VPCs) 5 5 VPCs per region Each cluster creates its own VPC. Elastic Load Balancing (ELB/NLB) 3 20 per region By default, each cluster creates internal and external network load balancers for the master API server and a single Classic Load Balancer for the router. Deploying more Kubernetes Service objects with type LoadBalancer will create additional load balancers . NAT Gateways 5 5 per availability zone The cluster deploys one NAT gateway in each availability zone. Elastic Network Interfaces (ENIs) At least 12 350 per region The default installation creates 21 ENIs and an ENI for each availability zone in your region. For example, the us-east-1 region contains six availability zones, so a cluster that is deployed in that zone uses 27 ENIs. Review the AWS region map to determine how many availability zones are in each region. Additional ENIs are created for additional machines and ELB load balancers that are created by cluster usage and deployed workloads. VPC Gateway 20 20 per account Each cluster creates a single VPC Gateway for S3 access. S3 buckets 99 100 buckets per account Because the installation process creates a temporary bucket and the registry component in each cluster creates a bucket, you can create only 99 OpenShift Container Platform clusters per AWS account. Security Groups 250 2,500 per account Each cluster creates 10 distinct security groups. 2.3. 
Required AWS permissions for the IAM user Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions: Example 2.1. Required EC2 permissions for installation ec2:AttachNetworkInterface ec2:AuthorizeSecurityGroupEgress ec2:AuthorizeSecurityGroupIngress ec2:CopyImage ec2:CreateNetworkInterface ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteSnapshot ec2:DeleteTags ec2:DeregisterImage ec2:DescribeAccountAttributes ec2:DescribeAddresses ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstanceAttribute ec2:DescribeInstanceCreditSpecifications ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeKeyPairs ec2:DescribeNatGateways ec2:DescribeNetworkAcls ec2:DescribeNetworkInterfaces ec2:DescribePrefixLists ec2:DescribePublicIpv4Pools (only required if publicIpv4Pool is specified in install-config.yaml ) ec2:DescribeRegions ec2:DescribeRouteTables ec2:DescribeSecurityGroupRules ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVpcAttribute ec2:DescribeVpcClassicLink ec2:DescribeVpcClassicLinkDnsSupport ec2:DescribeVpcEndpoints ec2:DescribeVpcs ec2:DisassociateAddress (only required if publicIpv4Pool is specified in install-config.yaml ) ec2:GetEbsDefaultKmsKeyId ec2:ModifyInstanceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RevokeSecurityGroupEgress ec2:RevokeSecurityGroupIngress ec2:RunInstances ec2:TerminateInstances Example 2.2. Required permissions for creating network resources during installation ec2:AllocateAddress ec2:AssociateAddress ec2:AssociateDhcpOptions ec2:AssociateRouteTable ec2:AttachInternetGateway ec2:CreateDhcpOptions ec2:CreateInternetGateway ec2:CreateNatGateway ec2:CreateRoute ec2:CreateRouteTable ec2:CreateSubnet ec2:CreateVpc ec2:CreateVpcEndpoint ec2:ModifySubnetAttribute ec2:ModifyVpcAttribute Note If you use an existing Virtual Private Cloud (VPC), your account does not require these permissions for creating network resources. Example 2.3. 
Required Elastic Load Balancing permissions (ELB) for installation elasticloadbalancing:AddTags elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:CreateTargetGroup elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:DescribeInstanceHealth elasticloadbalancing:DescribeListeners elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTags elasticloadbalancing:DescribeTargetGroupAttributes elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:SetLoadBalancerPoliciesOfListener elasticloadbalancing:SetSecurityGroups Important OpenShift Container Platform uses both the ELB and ELBv2 API services to provision load balancers. The permission list shows permissions required by both services. A known issue exists in the AWS web console where both services use the same elasticloadbalancing action prefix but do not recognize the same actions. You can ignore the warnings about the service not recognizing certain elasticloadbalancing actions. Example 2.4. Required IAM permissions for installation iam:AddRoleToInstanceProfile iam:CreateInstanceProfile iam:CreateRole iam:DeleteInstanceProfile iam:DeleteRole iam:DeleteRolePolicy iam:GetInstanceProfile iam:GetRole iam:GetRolePolicy iam:GetUser iam:ListInstanceProfilesForRole iam:ListRoles iam:ListUsers iam:PassRole iam:PutRolePolicy iam:RemoveRoleFromInstanceProfile iam:SimulatePrincipalPolicy iam:TagInstanceProfile iam:TagRole Note If you specify an existing IAM role in the install-config.yaml file, the following IAM permissions are not required: iam:CreateRole , iam:DeleteRole , iam:DeleteRolePolicy , and iam:PutRolePolicy . If you have not created a load balancer in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission. Example 2.5. Required Route 53 permissions for installation route53:ChangeResourceRecordSets route53:ChangeTagsForResource route53:CreateHostedZone route53:DeleteHostedZone route53:GetChange route53:GetHostedZone route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ListTagsForResource route53:UpdateHostedZoneComment Example 2.6. Required Amazon Simple Storage Service (S3) permissions for installation s3:CreateBucket s3:DeleteBucket s3:GetAccelerateConfiguration s3:GetBucketAcl s3:GetBucketCors s3:GetBucketLocation s3:GetBucketLogging s3:GetBucketObjectLockConfiguration s3:GetBucketPolicy s3:GetBucketRequestPayment s3:GetBucketTagging s3:GetBucketVersioning s3:GetBucketWebsite s3:GetEncryptionConfiguration s3:GetLifecycleConfiguration s3:GetReplicationConfiguration s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketTagging s3:PutEncryptionConfiguration Example 2.7. S3 permissions that cluster Operators require s3:DeleteObject s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:GetObjectVersion s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Example 2.8. 
Required permissions to delete base cluster resources autoscaling:DescribeAutoScalingGroups ec2:DeleteNetworkInterface ec2:DeletePlacementGroup ec2:DeleteVolume elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DescribeTargetGroups iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:ListAttachedRolePolicies iam:ListInstanceProfiles iam:ListRolePolicies iam:ListUserPolicies s3:DeleteObject s3:ListBucketVersions tag:GetResources Example 2.9. Required permissions to delete network resources ec2:DeleteDhcpOptions ec2:DeleteInternetGateway ec2:DeleteNatGateway ec2:DeleteRoute ec2:DeleteRouteTable ec2:DeleteSubnet ec2:DeleteVpc ec2:DeleteVpcEndpoints ec2:DetachInternetGateway ec2:DisassociateRouteTable ec2:ReleaseAddress ec2:ReplaceRouteTableAssociation Note If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources. Example 2.10. Optional permissions for installing a cluster with a custom Key Management Service (KMS) key kms:CreateGrant kms:Decrypt kms:DescribeKey kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:ListGrants kms:RevokeGrant Example 2.11. Required permissions to delete a cluster with shared instance roles iam:UntagRole Example 2.12. Additional IAM and S3 permissions that are required to create manifests iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser s3:AbortMultipartUpload s3:GetBucketPublicAccessBlock s3:ListBucket s3:ListBucketMultipartUploads s3:PutBucketPublicAccessBlock s3:PutLifecycleConfiguration Note If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions. Example 2.13. Optional permissions for instance and quota checks for installation ec2:DescribeInstanceTypeOfferings servicequotas:ListAWSDefaultServiceQuotas Example 2.14. Optional permissions for the cluster owner account when installing a cluster on a shared VPC sts:AssumeRole Example 2.15. Required permissions for enabling the Bring your own public IPv4 addresses (BYOIP) feature for installation ec2:DescribePublicIpv4Pools ec2:DisassociateAddress 2.4. Creating an IAM user Each Amazon Web Services (AWS) account contains a root user account that is based on the email address you used to create the account. This is a highly privileged account, and it is recommended to use it only for initial account and billing configuration, creating an initial set of users, and securing the account. Before you install OpenShift Container Platform, create a secondary IAM administrative user. As you complete the Creating an IAM User in Your AWS Account procedure in the AWS documentation, set the following options: Procedure Specify the IAM user name and select Programmatic access . Attach the AdministratorAccess policy to ensure that the account has sufficient permission to create the cluster. This policy provides the cluster with the ability to grant credentials to each OpenShift Container Platform component. The cluster grants the components only the credentials that they require. Note While it is possible to create a policy that grants all of the required AWS permissions and attach it to the user, this is not the preferred option. The cluster will not have the ability to grant additional credentials to individual components, so the same credentials are used by all components. Optional: Add metadata to the user by attaching tags. 
Confirm that the user name that you specified is granted the AdministratorAccess policy. Record the access key ID and secret access key values. You must use these values when you configure your local machine to run the installation program. Important You cannot use a temporary session token that you generated while using a multi-factor authentication device to authenticate to AWS when you deploy a cluster. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. 2.5. IAM Policies and AWS authentication By default, the installation program creates instance profiles for the bootstrap, control plane, and compute instances with the necessary permissions for the cluster to operate. Note To enable pulling images from the Amazon Elastic Container Registry (ECR) as a postinstallation task in a single-node OpenShift cluster, you must add the AmazonEC2ContainerRegistryReadOnly policy to the IAM role associated with the cluster's control plane role. However, you can create your own IAM roles and specify them as part of the installation process. You might need to specify your own roles to deploy the cluster or to manage the cluster after installation. For example: Your organization's security policies require that you use a more restrictive set of permissions to install the cluster. After the installation, the cluster is configured with an Operator that requires access to additional services. If you choose to specify your own IAM roles, you can take the following steps: Begin with the default policies and adapt as required. For more information, see "Default permissions for IAM instance profiles". Use the AWS Identity and Access Management Access Analyzer (IAM Access Analyzer) to create a policy template that is based on the cluster's activity. For more information, see "Using AWS IAM Analyzer to create policy templates". 2.5.1. Default permissions for IAM instance profiles By default, the installation program creates IAM instance profiles for the bootstrap, control plane, and worker instances with the necessary permissions for the cluster to operate. The following lists specify the default permissions for control plane and compute machines: Example 2.16. 
Default IAM role permissions for control plane instance profiles ec2:AttachVolume ec2:AuthorizeSecurityGroupIngress ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteVolume ec2:Describe* ec2:DetachVolume ec2:ModifyInstanceAttribute ec2:ModifyVolume ec2:RevokeSecurityGroupIngress elasticloadbalancing:AddTags elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerPolicy elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:CreateTargetGroup elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:DeleteListener elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeleteLoadBalancerListeners elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:Describe* elasticloadbalancing:DetachLoadBalancerFromSubnets elasticloadbalancing:ModifyListener elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer elasticloadbalancing:SetLoadBalancerPoliciesOfListener kms:DescribeKey Example 2.17. Default IAM role permissions for compute instance profiles ec2:DescribeInstances ec2:DescribeRegions 2.5.2. Specifying an existing IAM role Instead of allowing the installation program to create IAM instance profiles with the default permissions, you can use the install-config.yaml file to specify an existing IAM role for control plane and compute instances. Prerequisites You have an existing install-config.yaml file. Procedure Update compute.platform.aws.iamRole with an existing role for the compute machines. Sample install-config.yaml file with an IAM role for compute instances compute: - hyperthreading: Enabled name: worker platform: aws: iamRole: ExampleRole Update controlPlane.platform.aws.iamRole with an existing role for the control plane machines. Sample install-config.yaml file with an IAM role for control plane instances controlPlane: hyperthreading: Enabled name: master platform: aws: iamRole: ExampleRole Save the file and reference it when installing the OpenShift Container Platform cluster. Note To change or update an IAM account after the cluster has been installed, see RHOCP 4 AWS cloud-credentials access key is expired (Red Hat Knowledgebase). Additional resources Deploying the cluster 2.5.3. Using AWS IAM Analyzer to create policy templates The minimal set of permissions that the control plane and compute instance profiles require depends on how the cluster is configured for its daily operation. One way to determine which permissions the cluster instances require is to use the AWS Identity and Access Management Access Analyzer (IAM Access Analyzer) to create a policy template: A policy template contains the permissions the cluster has used over a specified period of time. You can then use the template to create policies with fine-grained permissions. Procedure The overall process could be: Ensure that CloudTrail is enabled. CloudTrail records all of the actions and events in your AWS account, including the API calls that are required to create a policy template. For more information, see the AWS documentation for working with CloudTrail . 
Create an instance profile for control plane instances and an instance profile for compute instances. Be sure to assign each role a permissive policy, such as PowerUserAccess. For more information, see the AWS documentation for creating instance profile roles . Install the cluster in a development environment and configure it as required. Be sure to deploy all of the applications that the cluster will host in a production environment. Test the cluster thoroughly. Testing the cluster ensures that all of the required API calls are logged. Use the IAM Access Analyzer to create a policy template for each instance profile. For more information, see the AWS documentation for generating policies based on the CloudTrail logs . Create and add a fine-grained policy to each instance profile. Remove the permissive policy from each instance profile. Deploy a production cluster using the existing instance profiles with the new policies. Note You can add IAM Conditions to your policy to make it more restrictive and compliant with your organization's security requirements. 2.6. Supported AWS Marketplace regions Installing an OpenShift Container Platform cluster using an AWS Marketplace image is available to customers who purchase the offer in North America. While the offer must be purchased in North America, you can deploy the cluster to any of the following supported partitions: Public GovCloud Note Deploying an OpenShift Container Platform cluster using an AWS Marketplace image is not supported for the AWS secret regions or China regions. 2.7. Supported AWS regions You can deploy an OpenShift Container Platform cluster to the following regions. Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. 2.7.1. AWS public regions The following AWS public regions are supported: af-south-1 (Cape Town) ap-east-1 (Hong Kong) ap-northeast-1 (Tokyo) ap-northeast-2 (Seoul) ap-northeast-3 (Osaka) ap-south-1 (Mumbai) ap-south-2 (Hyderabad) ap-southeast-1 (Singapore) ap-southeast-2 (Sydney) ap-southeast-3 (Jakarta) ap-southeast-4 (Melbourne) ca-central-1 (Central) ca-west-1 (Calgary) eu-central-1 (Frankfurt) eu-central-2 (Zurich) eu-north-1 (Stockholm) eu-south-1 (Milan) eu-south-2 (Spain) eu-west-1 (Ireland) eu-west-2 (London) eu-west-3 (Paris) il-central-1 (Tel Aviv) me-central-1 (UAE) me-south-1 (Bahrain) sa-east-1 (Sao Paulo) us-east-1 (N. Virginia) us-east-2 (Ohio) us-west-1 (N. California) us-west-2 (Oregon) 2.7.2. AWS GovCloud regions The following AWS GovCloud regions are supported: us-gov-west-1 us-gov-east-1 2.7.3. AWS SC2S and C2S secret regions The following AWS secret regions are supported: us-isob-east-1 Secret Commercial Cloud Services (SC2S) us-iso-east-1 Commercial Cloud Services (C2S) 2.7.4. AWS China regions The following AWS China regions are supported: cn-north-1 (Beijing) cn-northwest-1 (Ningxia) 2.8. Next steps Install an OpenShift Container Platform cluster: Quickly install a cluster with default options on installer-provisioned infrastructure Install a cluster with cloud customizations on installer-provisioned infrastructure Install a cluster with network customizations on installer-provisioned infrastructure Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates
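The IAM user described in section 2.4 can also be created from the command line. The following is a minimal sketch using the AWS CLI rather than the AWS web console; it assumes the CLI is already authenticated as an account administrator, and the user name ocp-installer and the tag values are illustrative placeholders, not names required by OpenShift Container Platform.

# Create the IAM user and grant it the AdministratorAccess policy (user name is an example)
aws iam create-user --user-name ocp-installer
aws iam attach-user-policy --user-name ocp-installer --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# Optional: add metadata to the user by attaching tags
aws iam tag-user --user-name ocp-installer --tags Key=purpose,Value=openshift-install
# Create the programmatic access key; record the AccessKeyId and SecretAccessKey from the output
aws iam create-access-key --user-name ocp-installer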
[ "platform: aws: region: us-gov-west-1 serviceEndpoints: - name: ec2 url: https://ec2.us-gov-west-1.amazonaws.com - name: elasticloadbalancing url: https://elasticloadbalancing.us-gov-west-1.amazonaws.com - name: route53 url: https://route53.us-gov.amazonaws.com 1 - name: tagging url: https://tagging.us-gov-west-1.amazonaws.com 2", "compute: - hyperthreading: Enabled name: worker platform: aws: iamRole: ExampleRole", "controlPlane: hyperthreading: Enabled name: master platform: aws: iamRole: ExampleRole" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_aws/installing-aws-account
Chapter 2. Disaster recovery subscription requirement
Chapter 2. Disaster recovery subscription requirement Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription Any Red Hat OpenShift Data Foundation Cluster containing PVs participating in active replication either as a source or destination requires OpenShift Data Foundation Advanced entitlement. This subscription should be active on both source and destination clusters. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/disaster-recovery-subscriptions_common
Chapter 24. Publishing the certified Operator
Chapter 24. Publishing the certified Operator 24.1. Publishing certified Operators The certification is considered complete and your Operator will appear in the Red Hat Container Catalog and embedded OperatorHub within OpenShift after all the tests have passed successfully, and the certification pipeline is enabled to submit results to Red Hat. Additionally, the entry will appear on Red Hat Certification Ecosystem . 24.2. Publishing validated Operators After submitting your Operators for validation, the Red Hat certification team will review and verify the entered details of the validation questionnaire. If at a later date you want to certify your partner-validated application, complete the Certification details. The Red Hat certification team will review the submitted test results. After successful verification, to publish your product on the Red Hat Ecosystem Catalog , go to the Product Listings page to attach the Partner Validated or Certified application. Important Red Hat OpenShift software certification or validation does not conduct testing of the Partner's product in how it functions or performs outside of the Operator constructs and its impact on the Red Hat platform on which it was installed and executed. Any and all aspects of the certification candidate product's quality assurance remain the Partner's sole responsibility.
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/proc_publishing-the-certified-operator_openshift-sw-cert-workflow-running-the-certification-suite-with-redhat-hosted-pipeline
Chapter 6. Uninstalling OpenShift Data Foundation
Chapter 6. Uninstalling OpenShift Data Foundation 6.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_amazon_web_services/uninstalling_openshift_data_foundation
Chapter 6. Troubleshooting a multi-site Ceph Object Gateway
Chapter 6. Troubleshooting a multi-site Ceph Object Gateway This chapter contains information on how to fix the most common errors related to multi-site Ceph Object Gateways configuration and operational conditions. Note When the radosgw-admin bucket sync status command reports that the bucket is behind on shards even if the data is consistent across multi-site, run additional writes to the bucket. It synchronizes the status reports and displays a message that the bucket is caught up with source. Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph Object Gateway. 6.1. Error code definitions for the Ceph Object Gateway The Ceph Object Gateway logs contain error and warning messages to assist in troubleshooting conditions in your environment. Some common ones are listed below with suggested resolutions. Common error messages data_sync: ERROR: a sync operation returned error This is the high-level data sync process complaining that a lower-level bucket sync process returned an error. This message is redundant; the bucket sync error appears above it in the log. data sync: ERROR: failed to sync object: BUCKET_NAME :_OBJECT_NAME_ Either the process failed to fetch the required object over HTTP from a remote gateway or the process failed to write that object to RADOS and it will be tried again. data sync: ERROR: failure in sync, backing out (sync_status=2) A low level message reflecting one of the above conditions, specifically that the data was deleted before it could sync and thus showing a -2 ENOENT status. data sync: ERROR: failure in sync, backing out (sync_status=-5) A low level message reflecting one of the above conditions, specifically that we failed to write that object to RADOS and thus showing a -5 EIO . ERROR: failed to fetch remote data log info: ret=11 This is the EAGAIN generic error code from libcurl reflecting an error condition from another gateway. It will try again by default. meta sync: ERROR: failed to read mdlog info with (2) No such file or directory The shard of the mdlog was never created so there is nothing to sync. Syncing error messages failed to sync object Either the process failed to fetch this object over HTTP from a remote gateway or it failed to write that object to RADOS and it will be tried again. failed to sync bucket instance: (11) Resource temporarily unavailable A connection issue between primary and secondary zones. failed to sync bucket instance: (125) Operation canceled A racing condition exists between writes to the same RADOS object. ERROR: request failed: (13) Permission denied If the realm has been changed on the master zone, the master zone's gateway may need to be restarted to recognize this user While configuring the secondary site, sometimes a rgw realm pull --url http://primary_endpoint --access-key <> --secret <> command fails with a permission denied error. In such cases, run the following commands on the primary site to ensure that the system user credentials are the same: Additional Resources Contact Red Hat Support for any additional assistance. 6.2. Syncing a multi-site Ceph Object Gateway A multi-site sync reads the change log from other zones. To get a high-level view of the sync progress from the metadata and the data logs, you can use the following command: Example This command lists which log shards, if any, which are behind their source zone. Note Sometimes you might observe recovering shards when running the radosgw-admin sync status command. 
For data sync, there are 128 shards of replication logs that are each processed independently. If any of the actions triggered by these replication log events result in any error from the network, storage, or elsewhere, those errors are tracked so that the operation can be retried later. While a given shard has errors that need a retry, the radosgw-admin sync status command reports that shard as recovering . This recovery happens automatically, so the operator does not need to intervene to resolve them. If the results of the sync status command that you ran above report that log shards are behind, run the following command, substituting the shard-id for X . Buckets within a multi-site object can also be monitored on the Ceph dashboard. For more information, see Monitoring buckets of a multi-site object within the Red Hat Ceph Storage Dashboard Guide . Syntax Example The output lists which buckets are to sync and which buckets, if any, are going to be retried due to errors. Inspect the status of individual buckets with the following command, substituting the bucket id for X . Syntax Replace X with the ID number of the bucket. The result shows which bucket index log shards are behind their source zone. A common error in sync is EBUSY , which means the sync is already in progress, often on another gateway. Errors are written to the sync error log, which can be read with the following command: The syncing process will try again until it is successful. Errors can still occur that can require intervention. 6.3. Performance counters for multi-site Ceph Object Gateway data sync The following performance counters are available for multi-site configurations of the Ceph Object Gateway to measure data sync: poll_latency measures the latency of requests for remote replication logs. fetch_bytes measures the number of objects and bytes fetched by data sync. Use the ceph --admin-daemon command to view the current metric data for the performance counters: Syntax Example Note You must run the ceph --admin-daemon command from the node running the daemon. Additional Resources See the Ceph performance counters chapter in the Red Hat Ceph Storage Administration Guide for more information about performance counters. 6.4. Synchronizing data in a multi-site Ceph Object Gateway configuration In a multi-site Ceph Object Gateway configuration of a storage cluster, failover and failback cause data synchronization to stop. The radosgw-admin sync status command reports that the data sync is behind for an extended period of time. You can run the radosgw-admin data sync init command to synchronize data between the sites and then restart the Ceph Object Gateway. This command does not touch any actual object data and initiates data sync for a specified source zone. It causes the zone to restart a full sync from the source zone. Important Contact Red Hat support before running the data sync init command. If you are going for a full restart of sync, and if there is a lot of data that needs to be synced on the source zone, the bandwidth consumption is high and you have to plan accordingly. Note If a user accidentally deletes a bucket on the secondary site, you can use the metadata sync init command on the site to synchronize data. Prerequisites A running Red Hat Ceph Storage cluster. Ceph Object Gateway configured at a minimum of two sites. Procedure Check the sync status between the sites: Example Synchronize data from the secondary zone: Example Restart all the Ceph Object Gateway daemons at the site: Example 6.5. 
Troubleshooting radosgw-admin commands after upgrading a cluster Troubleshoot using radosgw-admin commands inside the cephadm shell after upgrading a cluster. The following is an example of errors that could be emitted after trying to run radosgw-admin commands inside the cephadm shell after upgrading a cluster. Example Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the nodes. Procedure Repair the connection by running the command again with the cephadm shell -- radosgw-admin syntax. Syntax Example
[ "radosgw-admin user info --uid SYNCHRONIZATION_USER, and radosgw-admin zone get", "radosgw-admin sync status", "radosgw-admin data sync status --shard-id= X --source-zone= ZONE_NAME", "radosgw-admin data sync status --shard-id=27 --source-zone=us-east { \"shard_id\": 27, \"marker\": { \"status\": \"incremental-sync\", \"marker\": \"1_1534494893.816775_131867195.1\", \"next_step_marker\": \"\", \"total_entries\": 1, \"pos\": 0, \"timestamp\": \"0.000000\" }, \"pending_buckets\": [], \"recovering_buckets\": [ \"pro-registry:4ed07bb2-a80b-4c69-aa15-fdc17ae6f5f2.314303.1:26\" ] }", "radosgw-admin bucket sync status --bucket= X .", "radosgw-admin sync error list", "ceph --admin-daemon /var/run/ceph/ceph-client.rgw. RGW_ID .asok perf dump data-sync-from- ZONE_NAME", "ceph --admin-daemon /var/run/ceph/ceph-client.rgw.host02-rgw0.103.94309060818504.asok perf dump data-sync-from-us-west { \"data-sync-from-us-west\": { \"fetch bytes\": { \"avgcount\": 54, \"sum\": 54526039885 }, \"fetch not modified\": 7, \"fetch errors\": 0, \"poll latency\": { \"avgcount\": 41, \"sum\": 2.533653367, \"avgtime\": 0.061796423 }, \"poll errors\": 0 } }", "radosgw-admin sync status realm d713eec8-6ec4-4f71-9eaf-379be18e551b (india) zonegroup ccf9e0b2-df95-4e0a-8933-3b17b64c52b7 (shared) zone 04daab24-5bbd-4c17-9cf5-b1981fd7ff79 (primary) current time 2022-09-15T06:53:52Z zonegroup features enabled: resharding metadata sync no sync (zone is master) data sync source: 596319d2-4ffe-4977-ace1-8dd1790db9fb (secondary) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source", "radosgw-admin data sync init --source-zone primary", "ceph orch restart rgw.myrgw", "2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to decode obj from .rgw.root:periods.91d2a42c-735b-492a-bcf3-05235ce888aa.3 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 failed reading current period info: (5) Input/output error 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to start notify service ((5) Input/output error 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to init services (ret=(5) Input/output error) couldn't init storage provider", "date;radosgw-admin bucket list Mon May 13 09:05:30 UTC 2024 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to decode obj from .rgw.root:periods.91d2a42c-735b-492a-bcf3-05235ce888aa.3 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 failed reading current period info: (5) Input/output error 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to start notify service ((5) Input/output error 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to init services (ret=(5) Input/output error) couldn't init storage provider", "cephadm shell --radosgw-admin COMMAND", "cephadm shell -- radosgw-admin bucket list" ]
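As a supplement to the shard-by-shard check described in section 6.2, the per-shard data sync status can be scripted. The following is a minimal sketch that assumes a source zone named us-east and that radosgw-admin prints empty JSON lists on a single line, as in the example output above; adjust both assumptions for your environment.

# Report any of the 128 data log shards that still list recovering buckets (sketch)
for shard in $(seq 0 127); do
  out=$(radosgw-admin data sync status --shard-id="$shard" --source-zone=us-east)
  echo "$out" | grep -q '"recovering_buckets": \[\]' || echo "shard $shard has recovering buckets"
done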
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/troubleshooting_guide/troubleshooting-a-multisite-ceph-object-gateway
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue . Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/hybrid_committed_spend/1-latest/html/getting_started_with_hybrid_committed_spend/proc-providing-feedback-on-redhat-documentation
Chapter 7. Managing organizations
Chapter 7. Managing organizations Organizations divide Red Hat Satellite resources into logical groups based on ownership, purpose, content, security level, or other divisions. You can create and manage multiple organizations through Red Hat Satellite, then divide and assign your Red Hat subscriptions to each individual organization. This provides a method of managing the content of several individual organizations under one management system. 7.1. Examples of using organizations in Satellite Single Organization Using a single organization is well suited for a small business with a simple system administration chain. In this case, you create a single organization for the business and assign content to it. You can also use the Default Organization for this purpose. Multiple Organizations Using multiple organizations is well suited for a large company that owns several smaller business units. For example, a company with separate system administration and software development groups. In this case, you create one organization for the company and then an organization for each of the business units it owns. You then assign content to each organization based on its needs. External Organizations Using external organizations is well suited for a company that manages external systems for other organizations. For example, a company offering cloud computing and web hosting resources to customers. In this case, you create an organization for the company's own system infrastructure and then an organization for each external business. You then assign content to each organization where necessary. 7.2. Creating an organization Use this procedure to create an organization. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Organizations . Click New Organization . In the Name field, enter a name for the organization. In the Label field, enter a unique identifier for the organization. This is used for creating and mapping certain assets, such as directories for content storage. Use letters, numbers, underscores, and dashes, but no spaces. Optional: If you do not wish to enable Simple Content Access (SCA), uncheck the Simple Content Access checkbox. For more information on SCA, see Simple Content Access . Note Red Hat does not recommend disabling SCA as entitlement mode is deprecated. Optional: In the Description field, enter a description for the organization. Click Submit . If you have hosts with no organization assigned, select the hosts that you want to add to the organization, then click Proceed to Edit . In the Edit page, assign the infrastructure resources that you want to add to the organization. This includes networking resources, installation media, kickstart templates, and other parameters. You can return to this page at any time by navigating to Administer > Organizations and then selecting an organization to edit. Click Submit . CLI procedure To create an organization, enter the following command: Note Organizations created this way have Simple Content Access (SCA) enabled by default. If you wish to disable SCA, add the --simple-content-access false parameter to the command. Red Hat does not advise you to disable SCA because entitlement mode (not using SCA) is deprecated. Optional: To edit an organization, enter the hammer organization update command. For example, the following command assigns a compute resource to the organization: 7.3. 
Creating an organization debug certificate If you require a debug certificate for your organization, use the following procedure. Procedure In the Satellite web UI, navigate to Administer > Organizations . Select an organization that you want to generate a debug certificate for. Click Generate and Download . Save the certificate file in a secure location. Debug certificates for provisioning templates Debug Certificates are automatically generated for provisioning template downloads if they do not already exist in the organization for which they are being downloaded. 7.4. Browsing repository content using an organization debug certificate You can view an organization's repository content using a web browser or using the API if you have a debug certificate for that organization. Prerequisites You created and downloaded an organization certificate. For more information, see Section 7.3, "Creating an organization debug certificate" . Procedure Split the private and public keys from the certificate into two files. Open the X.509 certificate, for example, for the default organization: Copy the contents of the file from -----BEGIN RSA PRIVATE KEY----- to -----END RSA PRIVATE KEY----- , into a key.pem file. Copy the contents of the file from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- , into a cert.pem file. To use a browser, you must first convert the X.509 certificate to a format your browser supports and then import the certificate. For Firefox users Convert the certificate into the PKCS12 format using the following command: In the Firefox browser, navigate to Edit > Preferences > Advanced Tab . Select View Certificates and click the Your Certificates tab. Click Import and select the .pfx file to load. Enter the following URL in the address bar to browse the accessible paths for all the repositories and check their contents: For CURL users To use the organization debug certificate with CURL, enter the following command: Ensure that the paths to cert.pem and key.pem are the correct absolute paths otherwise the command fails silently. Pulp uses the organization label, therefore, you must enter the organization label into the URL. 7.5. Deleting an organization You can delete an organization if the organization is not associated with any lifecycle environments or host groups. If there are any lifecycle environments or host groups associated with the organization you are about to delete, remove them by navigating to Administer > Organizations and clicking the relevant organization. Important Do not delete Default Organization created during installation because the default organization is a placeholder for any unassociated hosts in your Satellite environment. There must be at least one organization in the environment at any given time. Procedure In the Satellite web UI, navigate to Administer > Organizations . From the list to the right of the name of the organization you want to delete, select Delete . Click OK to delete the organization. CLI procedure Enter the following command to retrieve the ID of the organization that you want to delete: From the output, note the ID of the organization that you want to delete. Enter the following command to delete an organization:
[ "hammer organization create --name \" My_Organization \" --label \" My_Organization_Label \" --description \" My_Organization_Description \"", "hammer organization update --name \" My_Organization \" --compute-resource-ids 1", "vi 'Default Organization-key-cert.pem'", "openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in cert.pem -inkey key.pem -out My_Organization_Label .pfx -name My_Organization", "https:// satellite.example.com /pulp/content/", "curl -k --cert cert.pem --key key.pem https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/content/dist/rhel/server/7/7Server/x86_64/os/", "hammer organization list", "hammer organization delete --id Organization_ID" ]
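As a supplement to the procedure in section 7.4, splitting the organization debug certificate into its key and certificate parts can be scripted instead of copying the PEM sections by hand. The following is a minimal sketch; the bundle file name, the satellite.example.com host name, and the Default_Organization label are placeholders taken from the examples above, and openssl is assumed to be available on the workstation.

# Extract the private key and the certificate from the downloaded bundle (sketch)
openssl pkey -in 'Default Organization-key-cert.pem' -out key.pem
openssl x509 -in 'Default Organization-key-cert.pem' -out cert.pem
# Browse repository content with the split files
curl -k --cert ./cert.pem --key ./key.pem https://satellite.example.com/pulp/content/Default_Organization/Library/content/dist/rhel/server/7/7Server/x86_64/os/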
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/Managing_Organizations_admin
function::fastcall
function::fastcall Name function::fastcall - Mark function as declared fastcall Synopsis Arguments None Description Call this function before accessing arguments using the *_arg functions if the probed kernel function was declared fastcall in the source.
[ "fastcall()" ]
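For context, the following is a hedged one-line sketch of how fastcall is typically combined with the *_arg functions; vfs_read is used only as a stand-in probe point and is not itself declared fastcall, so substitute a function from your kernel source that actually is.

# Illustrative only: mark the probed function as fastcall before reading its first argument
stap -e 'probe kernel.function("vfs_read") { fastcall(); printf("first arg: %p\n", pointer_arg(1)); exit() }'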
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-fastcall
Chapter 3. Distribution of content in RHEL 8
Chapter 3. Distribution of content in RHEL 8 3.1. Installation Red Hat Enterprise Linux 8 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures: Binary DVD ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories. Note The Binary DVD ISO image is larger than 4.7 GB, and as a result, it might not fit on a single-layer DVD. A dual-layer DVD or USB key is recommended when using the Binary DVD ISO image to create bootable installation media. You can also use the Image Builder tool to create customized RHEL images. For more information about Image Builder, see the Composing a customized RHEL system image document. Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages. The repositories are part of the Binary DVD ISO image. See the Interactively installing RHEL from installation media document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Automatically installing RHEL document. 3.2. Repositories Red Hat Enterprise Linux 8 is distributed through two main repositories: BaseOS AppStream Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in releases of RHEL. For a list of packages distributed through BaseOS, see the Package manifest . Content in the Application Stream repository includes additional user space applications, runtime languages, and databases in support of the varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules , or as Software Collections. For a list of packages available in AppStream, see the Package manifest . In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. For more information about RHEL 8 repositories, see the Package manifest . 3.3. Application Streams Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments. Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in RHEL 8. Each Application Stream component has a given life cycle, either the same as RHEL 8 or shorter. For details, see Red Hat Enterprise Linux Life Cycle . Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together. Module streams represent versions of the Application Stream components. 
For example, several streams (versions) of the PostgreSQL database server are available in the postgresql module with the default postgresql:10 stream. Only one module stream can be installed on the system. Different versions can be used in separate containers. Detailed module commands are described in the Installing, managing, and removing user-space components document. For a list of modules available in AppStream, see the Package manifest . 3.4. Package management with YUM/DNF On Red Hat Enterprise Linux 8, installing software is ensured by the YUM tool, which is based on the DNF technology. We deliberately adhere to usage of the yum term for consistency with previous major versions of RHEL. However, if you type dnf instead of yum , the command works as expected because yum is an alias to dnf for compatibility. For more details, see the following documentation: Installing, managing, and removing user-space components Considerations in adopting RHEL 8
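For illustration, a brief sketch of the module workflow described above, using the postgresql module mentioned in the text; the streams that are actually available depend on your RHEL 8 minor release.

# List the streams (versions) available for the postgresql module
yum module list postgresql
# Install a specific stream; postgresql:10 is the default stream in the example above
yum module install postgresql:10
# yum is an alias for dnf, so the equivalent dnf command behaves identically
dnf module list postgresql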
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.4_release_notes/distribution-of-content-in-rhel-8
8.215. system-config-lvm
8.215. system-config-lvm 8.215.1. RHBA-2013:1570 - system-config-lvm bug fix update An updated system-config-lvm package that fixes one bug is now available for Red Hat Enterprise Linux 6. The system-config-lvm utility enables users to configure logical volumes using a GUI. Bug Fix BZ# 923643 Due to a bug in the system-config-lvm utility, the utility terminated unexpectedly when striped mirrored devices were found on the system. With this update, the underlying source code has been modified so that users can now fully interact with supported devices. However, volume group information may not always render properly for striped mirrored devices. Users of system-config-lvm are advised to upgrade to this updated package, which fixes this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/system-config-lvm
Builds using BuildConfig
Builds using BuildConfig OpenShift Container Platform 4.17 Builds Red Hat OpenShift Documentation Team
[ "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: \"ruby-sample-build\" 1 spec: runPolicy: \"Serial\" 2 triggers: 3 - type: \"GitHub\" github: secret: \"secret101\" - type: \"Generic\" generic: secret: \"secret101\" - type: \"ImageChange\" source: 4 git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: 5 sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\" output: 6 to: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" postCommit: 7 script: \"bundle exec rake test\"", "source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: \"master\" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: \"app/dir\" 3 dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 4", "source: dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 1", "source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: \"master\" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar", "oc secrets link builder dockerhub", "source: git: 1 uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" contextDir: \"app/dir\" 2 dockerfile: \"FROM openshift/ruby-22-centos7\\nUSER example\" 3", "source: git: uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com", "oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*'", "kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data:", "oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*'", "apiVersion: \"build.openshift.io/v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\" source: git: uri: \"https://github.com/user/app.git\" sourceSecret: name: \"basicsecret\" strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"python-33-centos7:latest\"", "oc set build-secret --source bc/sample-build basicsecret", "oc create secret generic <secret_name> --from-file=<path/to/.gitconfig>", "[http] sslVerify=false", "cat .gitconfig", "[user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt", "oc create secret generic <secret_name> --from-literal=username=<user_name> \\ 1 --from-literal=password=<password> \\ 2 --from-file=.gitconfig=.gitconfig --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt 
--from-file=client.key=/var/run/secrets/openshift.io/source/client.key", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=password=<token> --type=kubernetes.io/basic-auth", "ssh-keygen -t ed25519 -C \"[email protected]\"", "oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/known_hosts> \\ 1 --type=kubernetes.io/ssh-auth", "cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt", "oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1", "oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/.gitconfig> --type=kubernetes.io/ssh-auth", "oc create secret generic <secret_name> --from-file=ca.crt=<path/to/certificate> --from-file=<path/to/.gitconfig>", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth", "apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5", "oc create -f <filename>", "oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>", "apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "oc create -f <your_yaml_file>.yaml", "oc logs secret-example-pod", "oc delete pod secret-example-pod", "apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username", "oc create configmap settings-mvn --from-file=settings.xml=<path/to/settings.xml>", "apiVersion: core/v1 kind: 
ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... # Insert maven settings here </settings>", "oc create secret generic secret-mvn --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> --type=kubernetes.io/ssh-auth", "apiVersion: core/v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded", "source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn", "oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn\" --build-config-map \"settings-mvn\"", "source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: \".m2\" secrets: - secret: name: secret-mvn destinationDir: \".ssh\"", "oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn:.ssh\" --build-config-map \"settings-mvn:.m2\"", "FROM centos/ruby-22-centos7 USER root COPY ./secret-dir /secrets COPY ./config / Create a shell script that will output secrets and ConfigMaps when the image is run RUN echo '#!/bin/sh' > /input_report.sh RUN echo '(test -f /secrets/secret1 && echo -n \"secret1=\" && cat /secrets/secret1)' >> /input_report.sh RUN echo '(test -f /config && echo -n \"relative-configMap=\" && cat /config)' >> /input_report.sh RUN chmod 755 /input_report.sh CMD [\"/bin/sh\", \"-c\", \"/input_report.sh\"]", "#!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar", "#!/bin/sh exec java -jar app.jar", "FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ \"java\", \"-jar\", \"app.jar\" ]", "auths: index.docker.io/v1/: 1 auth: \"YWRfbGzhcGU6R2labnRib21ifTE=\" 2 email: \"[email protected]\" 3 docker.io/my-namespace/my-user/my-image: 4 auth: \"GzhYWRGU6R2fbclabnRgbkSp=\"\" email: \"[email protected]\" docker.io/my-namespace: 5 auth: \"GzhYWRGU6R2deesfrRgbkSp=\"\" email: \"[email protected]\"", "oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "spec: output: to: kind: \"DockerImage\" name: \"private.registry.com/org/private-image:latest\" pushSecret: name: \"dockerhub\"", "oc set build-secret --push bc/sample-build dockerhub", "oc secrets link builder dockerhub", "strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"docker.io/user/private_repository\" pullSecret: name: \"dockerhub\"", "oc set build-secret --pull bc/sample-build dockerhub", "oc secrets link builder dockerhub", "env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret", "spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\"", "spec: output: to: kind: \"DockerImage\" name: \"my-registry.mycompany.com:5000/myimages/myimage:tag\"", "spec: output: to: kind: \"ImageStreamTag\" name: \"my-image:latest\" imageLabels: - name: \"vendor\" value: \"MyCompany\" - name: \"authoritative-source-url\" value: \"registry.mycompany.com\"", "strategy: dockerStrategy: from: kind: \"ImageStreamTag\" name: 
\"debian:latest\"", "strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile", "dockerStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"", "dockerStrategy: buildArgs: - name: \"version\" value: \"latest\"", "strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers", "spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"incremental-image:latest\" 1 incremental: true 2", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"builder-image:latest\" scripts: \"http://somehost.com/scripts_directory\" 1", "sourceStrategy: env: - name: \"DISABLE_ASSET_COMPILATION\" value: \"true\"", "#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd", "#!/bin/bash run the application /opt/application/run.sh", "#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd", "#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF", "spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value", "strategy: customStrategy: from: kind: \"DockerImage\" name: \"openshift/sti-image-builder\"", "strategy: customStrategy: secrets: - secretSource: 1 name: \"secret1\" mountPath: \"/tmp/secret1\" 2 - secretSource: name: \"secret2\" mountPath: \"/tmp/secret2\"", "customStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"", "oc set env <enter_variables>", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') }", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: source: git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1", "jenkinsPipelineStrategy: env: - name: \"FOO\" value: \"BAR\"", "oc project <project_name>", "oc new-app jenkins-ephemeral 1", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"nodejs-sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline", "def templatePath = 
'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo \"Using project: USD{openshift.project()}\" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector(\"all\", [ template : templateName ]).delete() 5 if (openshift.selector(\"secrets\", templateName).exists()) { 6 openshift.selector(\"secrets\", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector(\"bc\", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == \"Complete\") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector(\"dc\", templateName).rollout() timeout(5) { 9 openshift.selector(\"dc\", templateName).related('pods').untilEach(1) { return (it.object().status.phase == \"Running\") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag(\"USD{templateName}:latest\", \"USD{templateName}-staging:latest\") 10 } } } } } } }", "oc create -f nodejs-sample-pipeline.yaml", "oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml", "oc start-build nodejs-sample-pipeline", "FROM registry.redhat.io/rhel8/buildah In this example, `/tmp/build` contains the inputs that build when this custom builder image is run. Normally the custom builder image fetches this content from some location at build time, by using git clone as an example. ADD dockerfile.sample /tmp/input/Dockerfile ADD build.sh /usr/bin RUN chmod a+x /usr/bin/build.sh /usr/bin/build.sh contains the actual custom build logic that will be run when this custom builder image is run. ENTRYPOINT [\"/usr/bin/build.sh\"]", "FROM registry.access.redhat.com/ubi9/ubi RUN touch /tmp/build", "#!/bin/sh Note that in this case the build inputs are part of the custom builder image, but normally this is retrieved from an external source. cd /tmp/input OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom build framework TAG=\"USD{OUTPUT_REGISTRY}/USD{OUTPUT_IMAGE}\" performs the build of the new image defined by dockerfile.sample buildah --storage-driver vfs bud --isolation chroot -t USD{TAG} . buildah requires a slight modification to the push secret provided by the service account to use it for pushing the image cp /var/run/secrets/openshift.io/push/.dockercfg /tmp (echo \"{ \\\"auths\\\": \" ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo \"}\") > /tmp/.dockercfg push the new image to the target for the build buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg USD{TAG}", "oc new-build --binary --strategy=docker --name custom-builder-image", "oc start-build custom-builder-image --from-dir . 
-F", "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest", "oc create -f buildconfig.yaml", "kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {}", "oc create -f imagestream.yaml", "oc start-build sample-custom-build -F", "oc start-build <buildconfig_name>", "oc start-build --from-build=<build_name>", "oc start-build <buildconfig_name> --follow", "oc start-build <buildconfig_name> --env=<key>=<value>", "oc start-build hello-world --from-repo=../hello-world --commit=v2", "oc cancel-build <build_name>", "oc cancel-build <build1_name> <build2_name> <build3_name>", "oc cancel-build bc/<buildconfig_name>", "oc cancel-build bc/<buildconfig_name>", "oc delete bc <BuildConfigName>", "oc delete --cascade=false bc <BuildConfigName>", "oc describe build <build_name>", "oc describe build <build_name>", "oc logs -f bc/<buildconfig_name>", "oc logs --version=<number> bc/<buildconfig_name>", "sourceStrategy: env: - name: \"BUILD_LOGLEVEL\" value: \"2\" 1", "type: \"GitHub\" github: secretReference: name: \"mysecret\"", "- kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: webhook-access-unauthenticated namespace: <namespace> 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: \"system:webhook\" subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: \"system:unauthenticated\"", "oc apply -f add-webhooks-unauth.yaml", "type: \"GitHub\" github: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "oc describe bc/<name_of_your_BuildConfig>", "https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "curl -H \"X-GitHub-Event: push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "type: \"GitLab\" gitlab: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab", "oc describe bc <name>", "curl -H \"X-GitLab-Event: Push Hook\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab", "type: \"Bitbucket\" bitbucket: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket", "oc describe bc <name>", "curl -H \"X-Event-Key: repo:push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket", "type: \"Generic\" generic: secretReference: 
name: \"mysecret\" allowEnv: true 1", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "git: uri: \"<url to git repository>\" ref: \"<optional git reference>\" commit: \"<commit hash identifying a specific git commit>\" author: name: \"<author name>\" email: \"<author e-mail>\" committer: name: \"<committer name>\" email: \"<committer e-mail>\" message: \"<commit message>\" env: 1 - name: \"<variable name>\" value: \"<variable value>\"", "curl -H \"Content-Type: application/yaml\" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "oc describe bc <name>", "kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby-20-centos7\"", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\"", "type: \"ImageChange\" 1 imageChange: {} type: \"ImageChange\" 2 imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\"", "strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>\"", "type: \"ImageChange\" imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\" paused: true", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: bc-ict-example namespace: bc-ict-example-namespace spec: triggers: - imageChange: from: kind: ImageStreamTag name: input:latest namespace: bc-ict-example-namespace - imageChange: from: kind: ImageStreamTag name: input2:latest namespace: bc-ict-example-namespace type: ImageChange status: imageChangeTriggers: - from: name: input:latest namespace: bc-ict-example-namespace lastTriggerTime: \"2021-06-30T13:47:53Z\" lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input@sha256:0f88ffbeb9d25525720bfa3524cb1bf0908b7f791057cf1acfae917b11266a69 - from: name: input2:latest namespace: bc-ict-example-namespace lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input2@sha256:0f88ffbeb9d25525720bfa3524cb2ce0908b7f791057cf1acfae917b11266a69 lastVersion: 1", "Then you use the `name` and `namespace` from that build to find the corresponding image change trigger in `buildConfig.spec.triggers`.", "type: \"ConfigChange\"", "oc set triggers bc <name> --from-github", "oc set triggers bc <name> --from-image='<image>'", "oc set triggers bc <name> --from-bitbucket --remove", "oc set triggers --help", "postCommit: script: \"bundle exec rake test --verbose\"", "postCommit: command: [\"/bin/bash\", \"-c\", \"bundle exec rake test --verbose\"]", "postCommit: command: [\"bundle\", \"exec\", \"rake\", \"test\"] args: [\"--verbose\"]", "oc set build-hook bc/mybc --post-commit --command -- bundle exec rake test --verbose", "oc set build-hook bc/mybc --post-commit --script=\"bundle exec rake test --verbose\"", "apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2", "resources: requests: 1 cpu: \"100m\" memory: \"256Mi\"", "spec: completionDeadlineSeconds: 1800", "apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: nodeSelector: 1 key1: value1 key2: value2", "apiVersion: build.openshift.io/v1 kind: BuildConfig 
metadata: name: artifact-build spec: output: to: kind: ImageStreamTag name: artifact-image:latest source: git: uri: https://github.com/openshift/openshift-jee-sample.git ref: \"master\" strategy: sourceStrategy: from: kind: ImageStreamTag name: wildfly:10.1 namespace: openshift", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: image-build spec: output: to: kind: ImageStreamTag name: image-build:latest source: dockerfile: |- FROM jee-runtime:latest COPY ROOT.war /deployments/ROOT.war images: - from: 1 kind: ImageStreamTag name: artifact-image:latest paths: 2 - sourcePath: /wildfly/standalone/deployments/ROOT.war destinationDir: \".\" strategy: dockerStrategy: from: 3 kind: ImageStreamTag name: jee-runtime:latest triggers: - imageChange: {} type: ImageChange", "apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: successfulBuildsHistoryLimit: 2 1 failedBuildsHistoryLimit: 2 2", "oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi9:latest -n openshift", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 namespace: openshift spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source", "oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi:latest", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source", "cat << EOF > secret-template.txt kind: Secret apiVersion: v1 metadata: name: etc-pki-entitlement type: Opaque data: {{ range \\USDkey, \\USDvalue := .data }} {{ \\USDkey }}: {{ \\USDvalue }} {{ end }} EOF oc get secret etc-pki-entitlement -n openshift-config-managed -o=go-template-file --template=secret-template.txt | oc apply -f -", "strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement", "FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3", "[test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem", "oc create configmap yum-repos-d --from-file /path/to/satellite.repo", "strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: yum-repos-d mounts: - destinationPath: /etc/yum.repos.d source: type: ConfigMap configMap: name: yum-repos-d - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement", "FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3", "oc apply -f - <<EOF kind: SharedSecret apiVersion: sharedresource.openshift.io/v1alpha1 metadata: name: etc-pki-entitlement spec: secretRef: name: etc-pki-entitlement namespace: openshift-config-managed EOF", "oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: builder-etc-pki-entitlement namespace: build-namespace rules: - 
apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - etc-pki-entitlement verbs: - use EOF", "oc create rolebinding builder-etc-pki-entitlement --role=builder-etc-pki-entitlement --serviceaccount=build-namespace:builder", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: uid-wrapper-rhel9 namespace: build-namespace spec: runPolicy: Serial source: dockerfile: | FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3 strategy: type: Docker dockerStrategy: volumes: - mounts: - destinationPath: \"/etc/pki/entitlement\" name: etc-pki-entitlement source: csi: driver: csi.sharedresource.openshift.io readOnly: true 4 volumeAttributes: sharedSecret: etc-pki-entitlement 5 type: CSI", "oc start-build uid-wrapper-rhel9 -n build-namespace -F", "oc annotate clusterrolebinding.rbac system:build-strategy-docker-binding 'rbac.authorization.kubernetes.io/autoupdate=false' --overwrite", "oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated", "oc get clusterrole admin -o yaml | grep \"builds/docker\"", "oc get clusterrole edit -o yaml | grep \"builds/docker\"", "oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser", "oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject", "oc edit build.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 2 name: cluster resourceVersion: \"107233\" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists", "requested access to the resource is denied", "oc describe quota", "secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60", "oc delete secret <secret_name>", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-", "oc create configmap registry-cas -n openshift-config --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-cas\"}}}' --type=merge" ]
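The generic webhook snippets above show the URL template, the payload format, and the curl invocation as separate pieces. The following is a minimal sketch that strings them together into one helper script; the API host, namespace, BuildConfig name, and webhook secret are placeholders you must supply, and the script wrapper itself is an illustration rather than part of the product documentation.

#!/bin/bash
# Sketch: trigger a generic webhook build, optionally with a payload file.
# OPENSHIFT_API, NAMESPACE, BUILDCONFIG, and WEBHOOK_SECRET are placeholders.
set -euo pipefail

OPENSHIFT_API="https://<openshift_api_host:port>"
NAMESPACE="<namespace>"
BUILDCONFIG="<name>"
WEBHOOK_SECRET="<secret>"

URL="${OPENSHIFT_API}/apis/build.openshift.io/v1/namespaces/${NAMESPACE}/buildconfigs/${BUILDCONFIG}/webhooks/${WEBHOOK_SECRET}/generic"

# Trigger a build with no payload.
curl -k -X POST "${URL}"

# Trigger a build with git metadata (and, if allowEnv is true, environment variables).
curl -k -X POST -H "Content-Type: application/yaml" --data-binary @payload_file.yaml "${URL}"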
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/builds_using_buildconfig/index
Chapter 2. Preparing to deploy multiple OpenShift Data Foundation storage clusters
Chapter 2. Preparing to deploy multiple OpenShift Data Foundation storage clusters Before you begin the deployment of OpenShift Data Foundation using dynamic, local, or external storage, ensure that your resource requirements are met. See the Resource requirements section in the Planning guide. Things you should remember before installing multiple OpenShift Data Foundation storage clusters: openshift-storage and openshift-storage-extended are the exclusively supported namespaces. The internal storage cluster is restricted to the OpenShift Data Foundation operator namespace. An external storage cluster is permissible in both operator and non-operator namespaces. Multiple storage clusters are not supported in the same namespace. Hence, the external storage system will not be visible under the OpenShift Data Foundation operator page, because the operator is in the openshift-storage namespace and the external storage system is not. Customers running external storage clusters in the operator namespace cannot utilize multiple storage clusters. Multicloud Object Gateway is supported solely within the operator namespace. It is ignored in other namespaces. RADOS Gateway (RGW) can be in either the operator namespace, a non-operator namespace, or both. Network File System (NFS) is enabled as long as it is enabled for at least one of the clusters. Topology is enabled as long as it is enabled for at least one of the clusters. Topology domain labels are set as long as the internal cluster is present. The Topology view of the cluster is only supported for OpenShift Data Foundation internal mode deployments. Different Multus settings are not supported for multiple storage clusters.
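Because an external storage cluster can be created outside the operator namespace, a second cluster is typically defined by applying a StorageCluster resource into its own namespace. The fragment below is an illustrative sketch only: the namespace and resource name are examples, and the field names are assumptions based on the ocs.openshift.io/v1 StorageCluster API, so verify them against the deployment guide before use.

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-external-storagecluster   # example name, not a requirement
  namespace: openshift-storage-extended
spec:
  externalStorage:
    enable: true   # external mode; assumed field, check the deployment guide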
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_multiple_openshift_data_foundation_storage_clusters/preparing-to-deploy-multiple-odf-storage-clusters_rhodf
Chapter 5. Manually upgrading the kernel
Chapter 5. Manually upgrading the kernel The Red Hat Enterprise Linux kernel is custom-built by the Red Hat Enterprise Linux kernel team to ensure its integrity and compatibility with supported hardware. Before Red Hat releases a kernel, it must first pass a rigorous set of quality assurance tests. Red Hat Enterprise Linux kernels are packaged in the RPM format so that they are easy to upgrade and verify using the Yum or PackageKit package managers. PackageKit automatically queries the Red Hat Content Delivery Network servers and informs you of packages with available updates, including kernel packages. This chapter is therefore only useful for users who need to manually update a kernel package using the rpm command instead of yum . Warning Whenever possible, use either the Yum or PackageKit package manager to install a new kernel because they always install a new kernel instead of replacing the current one, which could potentially leave your system unable to boot. Warning Custom kernels are not supported by Red Hat. However, guidance can be obtained from the solution article . For more information on installing kernel packages with yum , see the relevant section in the System Administrator's Guide . For information on Red Hat Content Delivery Network, see the relevant section in the System Administrator's Guide . 5.1. Overview of kernel packages Red Hat Enterprise Linux contains the following kernel packages: kernel - Contains the kernel for single-core, multi-core, and multi-processor systems. kernel-debug - Contains a kernel with numerous debugging options enabled for kernel diagnosis, at the expense of reduced performance. kernel-devel - Contains the kernel headers and makefiles sufficient to build modules against the kernel package. kernel-debug-devel - Contains the development version of the kernel with numerous debugging options enabled for kernel diagnosis, at the expense of reduced performance. kernel-doc - Contains documentation files from the kernel source. Various portions of the Linux kernel and the device drivers shipped with it are documented in these files. Installation of this package provides a reference to the options that can be passed to Linux kernel modules at load time. By default, these files are placed in the /usr/share/doc/kernel-doc- kernel_version / directory. kernel-headers - Includes the C header files that specify the interface between the Linux kernel and user-space libraries and programs. The header files define structures and constants that are needed for building most standard programs. linux-firmware - Contains all of the firmware files that are required by various devices to operate. perf - This package contains the perf tool, which enables performance monitoring of the Linux kernel. kernel-abi-whitelists - Contains information pertaining to the Red Hat Enterprise Linux kernel ABI, including a list of kernel symbols that are needed by external Linux kernel modules and a yum plug-in to aid enforcement. kernel-tools - Contains tools for manipulating the Linux kernel and supporting documentation. 5.2. Preparing to upgrade Before upgrading the kernel, it is recommended that you take some precautionary steps. First, ensure that working boot media exists for the system in case a problem occurs.
If the boot loader is not configured properly to boot the new kernel, you can use this media to boot into Red Hat Enterprise Linux. USB media often comes in the form of flash devices sometimes called pen drives , thumb disks , or keys , or as an externally-connected hard disk device. Almost all media of this type is formatted as a VFAT file system. You can create bootable USB media on media formatted as ext2 , ext3 , ext4 , or VFAT . You can transfer a distribution image file or a minimal boot media image file to USB media. Make sure that sufficient free space is available on the device. Around 4 GB is required for a distribution DVD image, around 700 MB for a distribution CD image, or around 10 MB for a minimal boot media image. You must have a copy of the boot.iso file from a Red Hat Enterprise Linux installation DVD, or installation CD-ROM #1, and you need a USB storage device formatted with the VFAT file system and around 16 MB of free space. For more information on using USB storage devices, review the How to format a USB key and How to manually mount a USB flash drive in a non-graphical environment solution articles. The following procedure does not affect existing files on the USB storage device unless they have the same path names as the files that you copy onto it. To create USB boot media, perform the following commands as the root user: Install the syslinux package if it is not installed on your system. To do so, as root, run the yum install syslinux command. Install the SYSLINUX bootloader on the USB storage device: ... where sdX is the device name. Create mount points for boot.iso and the USB storage device: Mount boot.iso : Mount the USB storage device: Copy the ISOLINUX files from the boot.iso to the USB storage device: Use the isolinux.cfg file from boot.iso as the syslinux.cfg file for the USB device: Unmount boot.iso and the USB storage device: Reboot the machine with the boot media and verify that you are able to boot with it before continuing. Alternatively, on systems with a floppy drive, you can create a boot diskette by installing the mkbootdisk package and running the mkbootdisk command as root . See the mkbootdisk man page after installing the package for usage information. To determine which kernel packages are installed, execute the command yum list installed "kernel-*" at a shell prompt. The output comprises some or all of the following packages, depending on the system's architecture, and the version numbers might differ: From the output, determine which packages need to be downloaded for the kernel upgrade. For a single processor system, the only required package is the kernel package. See Section 5.1, "Overview of kernel packages" for descriptions of the different packages. 5.3. Downloading the upgraded kernel There are several ways to determine if an updated kernel is available for the system. Security Errata - See Security Advisories in the Red Hat Customer Portal for information on security errata, including kernel upgrades that fix security issues. The Red Hat Content Delivery Network - For a system subscribed to the Red Hat Content Delivery Network, the yum package manager can download the latest kernel and upgrade the kernel on the system. The Dracut utility creates an initial RAM file system image if needed, and configures the boot loader to boot the new kernel. For more information on installing packages from the Red Hat Content Delivery Network, see the relevant section of the System Administrator's Guide .
For more information on subscribing a system to the Red Hat Content Delivery Network, see the relevant section of the System Administrator's Guide . If yum was used to download and install the updated kernel from the Red Hat Network, follow the instructions in Section 5.5, "Verifying the initial RAM file system image" and Section 5.6, "Verifying the boot loader" only; do not change the kernel to boot by default. Red Hat Network automatically changes the default kernel to the latest version. To install the kernel manually, continue to Section 5.4, "Performing the upgrade" . 5.4. Performing the upgrade After retrieving all of the necessary packages, it is time to upgrade the existing kernel. Important It is strongly recommended that you keep the old kernel in case there are problems with the new kernel. At a shell prompt, change to the directory that contains the kernel RPM packages. Use the -i argument with the rpm command to keep the old kernel. Do not use the -U option, since it overwrites the currently installed kernel, which creates boot loader problems. For example: The next step is to verify that the initial RAM file system image has been created. See Section 5.5, "Verifying the initial RAM file system image" for details. 5.5. Verifying the initial RAM file system image The job of the initial RAM file system image is to preload the block device modules, such as for IDE, SCSI or RAID, so that the root file system, on which those modules normally reside, can then be accessed and mounted. On Red Hat Enterprise Linux 7 systems, whenever a new kernel is installed using either the Yum , PackageKit , or RPM package manager, the Dracut utility is always called by the installation scripts to create an initramfs (initial RAM file system image). If you make changes to the kernel attributes by modifying the /etc/sysctl.conf file or another sysctl configuration file, and if the changed settings are used early in the boot process, then rebuilding the Initial RAM File System Image by running the dracut -f command might be necessary. An example is if you have made changes related to networking and are booting from network-attached storage. On all architectures other than IBM eServer System i (see the section called "Verifying the initial RAM file system image and kernel on IBM eServer System i" ), you can create an initramfs by running the dracut command. However, you usually do not need to create an initramfs manually: this step is automatically performed if the kernel and its associated packages are installed or upgraded from RPM packages distributed by Red Hat. You can verify that an initramfs corresponding to your current kernel version exists and is specified correctly in the grub.cfg configuration file by following this procedure: Verifying the initial RAM file system image As root , list the contents in the /boot directory and find the kernel ( vmlinuz- kernel_version ) and initramfs- kernel_version with the latest (most recent) version number: Example 5.1. Ensuring that the kernel and initramfs versions match Example 5.1, "Ensuring that the kernel and initramfs versions match" shows that: we have three kernels installed (or, more correctly, three kernel files are present in the /boot directory), the latest kernel is vmlinuz-3.10.0-78.el7.x86_64 , and an initramfs file matching our kernel version, initramfs-3.10.0-78.el7.x86_64.img , also exists. Important In the /boot directory you might find several initramfs- kernel_version kdump.img files.
These are special files created by the Kdump mechanism for kernel debugging purposes, are not used to boot the system, and can safely be ignored. For more information on kdump , see the Red Hat Enterprise Linux 7 Kernel Crash Dump Guide . If your initramfs- kernel_version file does not match the version of the latest kernel in the /boot directory, or, in certain other situations, you might need to generate an initramfs file with the Dracut utility. Simply invoking dracut as root without options causes it to generate an initramfs file in /boot for the latest kernel present in that directory: You must use the -f , --force option if you want dracut to overwrite an existing initramfs (for example, if your initramfs has become corrupt). Otherwise dracut refuses to overwrite the existing initramfs file: You can create an initramfs in the current directory by calling dracut initramfs_name kernel_version : If you need to specify specific kernel modules to be preloaded, add the names of those modules (minus any file name suffixes such as .ko ) inside the parentheses of the add_dracutmodules+=" module more_modules " directive of the /etc/dracut.conf configuration file. You can list the file contents of an initramfs image file created by dracut by using the lsinitrd initramfs_file command: See man dracut and man dracut.conf for more information on options and usage. Examine the /boot/grub2/grub.cfg configuration file to ensure that an initramfs- kernel_version .img file exists for the kernel version you are booting. For example: See Section 5.6, "Verifying the boot loader" for more information. Verifying the initial RAM file system image and kernel on IBM eServer System i On IBM eServer System i machines, the initial RAM file system and kernel files are combined into a single file, which is created with the addRamDisk command. This step is performed automatically if the kernel and its associated packages are installed or upgraded from the RPM packages distributed by Red Hat; thus, it does not need to be executed manually. To verify that it was created, run the following command as root to make sure the /boot/vmlinitrd- kernel_version file already exists: The kernel_version needs to match the version of the kernel just installed. Reversing the changes made to the initial RAM file system image In some cases, for example, if you misconfigure the system and it no longer boots, you need to reverse the changes made to the Initial RAM File System Image by following this procedure: Reversing Changes Made to the Initial RAM File System Image Reboot the system, choosing the rescue kernel in the GRUB menu. Change the incorrect setting that caused the initramfs to malfunction. Recreate the initramfs with the correct settings by running the following command as root: The above procedure might be useful if, for example, you incorrectly set the vm.nr_hugepages parameter in the sysctl.conf file. Because the sysctl.conf file is included in initramfs , the new vm.nr_hugepages setting gets applied in initramfs and causes rebuilding of the initramfs . However, because the setting is incorrect, the new initramfs is broken and the newly built kernel does not boot, which necessitates correcting the setting using the above procedure.
Listing the contents of the initial RAM file system image To list the files that are included in the initramfs , run the following command as root: To only list files in the /etc directory, use the following command: To output the contents of a specific file stored in the initramfs for the current kernel, use the -f option: For example, to output the contents of sysctl.conf , use the following command: To specify a kernel version, use the --kver option: For example, to list the information about kernel version 3.10.0-327.10.1.el7.x86_64, use the following command: 5.6. Verifying the boot loader You can install a kernel either with the yum command or with the rpm command. When you install a kernel using rpm , the kernel package creates an entry in the boot loader configuration file for that new kernel. Note that both commands configure the new kernel to boot as the default kernel only when you include the following setting in the /etc/sysconfig/kernel configuration file: The DEFAULTKERNEL option specifies the default kernel package type. The UPDATEDEFAULT option specifies whether newly installed kernels are made the default.
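Taken together, the verification steps in this chapter can be run back to back after a manual installation. The following sketch only strings together commands that already appear in this chapter; the package file name and kernel version are placeholders.

# 1. Check which kernel packages are currently installed.
yum list installed "kernel-*"

# 2. Install the new kernel alongside the old one (-i, never -U).
rpm -ivh kernel-<kernel_version>.<arch>.rpm

# 3. Confirm that a matching initramfs was generated for the new kernel.
ls /boot

# 4. Inspect the initramfs contents and the boot loader entry.
lsinitrd /boot/initramfs-<kernel_version>.img
grep initramfs /boot/grub2/grub.cfg

# 5. Review the default-kernel settings used when new kernels are installed.
cat /etc/sysconfig/kernel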
[ "# syslinux /dev/sdX1", "# mkdir /mnt/isoboot /mnt/diskboot", "# mount -o loop boot.iso /mnt/isoboot", "# mount /dev/sdX1 /mnt/diskboot", "# cp /mnt/isoboot/isolinux/* /mnt/diskboot", "# grep -v local /mnt/isoboot/isolinux/isolinux.cfg > /mnt/diskboot/syslinux.cfg", "# umount /mnt/isoboot /mnt/diskboot", "# yum list installed \"kernel-*\" kernel.x86_64 3.10.0-54.0.1.el7 @rhel7/7.0 kernel-devel.x86_64 3.10.0-54.0.1.el7 @rhel7 kernel-headers.x86_64 3.10.0-54.0.1.el7 @rhel7/7.0", "# rpm -ivh kernel-kernel_version.arch.rpm", "# ls /boot config-3.10.0-67.el7.x86_64 config-3.10.0-78.el7.x86_64 efi grub grub2 initramfs-0-rescue-07f43f20a54c4ce8ada8b70d33fd001c.img initramfs-3.10.0-67.el7.x86_64.img initramfs-3.10.0-67.el7.x86_64kdump.img initramfs-3.10.0-78.el7.x86_64.img initramfs-3.10.0-78.el7.x86_64kdump.img initrd-plymouth.img symvers-3.10.0-67.el7.x86_64.gz symvers-3.10.0-78.el7.x86_64.gz System.map-3.10.0-67.el7.x86_64 System.map-3.10.0-78.el7.x86_64 vmlinuz-0-rescue-07f43f20a54c4ce8ada8b70d33fd001c vmlinuz-3.10.0-67.el7.x86_64 vmlinuz-3.10.0-78.el7.x86_64", "# dracut", "# dracut Does not override existing initramfs (/boot/initramfs-3.10.0-78.el7.x86_64.img) without --force", "# dracut \"initramfs-USD(uname -r).img\" USD(uname -r)", "# lsinitrd /boot/initramfs-3.10.0-78.el7.x86_64.img Image: /boot/initramfs-3.10.0-78.el7.x86_64.img: 11M ======================================================================== dracut-033-68.el7 ======================================================================== drwxr-xr-x 12 root root 0 Feb 5 06:35 . drwxr-xr-x 2 root root 0 Feb 5 06:35 proc lrwxrwxrwx 1 root root 24 Feb 5 06:35 init -> /usr/lib/systemd/systemd drwxr-xr-x 10 root root 0 Feb 5 06:35 etc drwxr-xr-x 2 root root 0 Feb 5 06:35 usr/lib/modprobe.d [output truncated]", "# grep initramfs /boot/grub2/grub.cfg initrd16 /initramfs-3.10.0-123.el7.x86_64.img initrd16 /initramfs-0-rescue-6d547dbfd01c46f6a4c1baa8c4743f57.img", "# ls -l /boot/", "# dracut --kver kernel_version --force", "# lsinitrd", "# lsinitrd | grep etc/", "# lsinitrd -f filename", "# lsinitrd -f /etc/sysctl.conf", "# lsinitrd --kver kernel_version -f /etc/sysctl.conf", "# lsinitrd --kver 3.10.0-327.10.1.el7.x86_64 -f /etc/sysctl.conf", "DEFAULTKERNEL=kernel UPDATEDEFAULT=yes" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/kernel_administration_guide/ch-Manually_Upgrading_the_Kernel
Chapter 2. Using the Argo CD plugin
Chapter 2. Using the Argo CD plugin You can use the Argo CD plugin to visualize the Continuous Delivery (CD) workflows in OpenShift GitOps. This plugin provides a visual overview of the application's status, deployment details, commit message, author of the commit, the container image promoted to the environment, and deployment history. Prerequisites You have enabled the Argo CD plugin in Red Hat Developer Hub (RHDH). Procedure Select the Catalog tab and choose the component that you want to use. Select the CD tab to view insights into deployments managed by Argo CD. Select an appropriate card to view the deployment details (for example, commit message, author name, and deployment history). Click the link icon to open the deployment details in Argo CD. Select the Overview tab and navigate to the Deployment summary section to review the summary of your application's deployment across namespaces. Additionally, select an appropriate Argo CD app to open the deployment details in Argo CD, or select a commit ID from the Revision column to review the changes in GitLab or GitHub. Additional resources For more information on dynamic plugins, see Dynamic plugin installation .
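For the CD tab to locate the deployments that belong to a component, the component's catalog entry must identify its Argo CD application. The fragment below is a sketch based on the upstream Backstage Argo CD plugin convention of an argocd/app-name annotation; the annotation key and all names shown are assumptions, so check the plugin configuration documentation for the keys supported by your RHDH version.

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-service                      # example component name
  annotations:
    argocd/app-name: my-service         # assumed annotation linking the component to its Argo CD application
spec:
  type: service
  lifecycle: production
  owner: team-a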
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/using_dynamic_plugins/using-the-argo-cd-plugin
Chapter 99. Updating DNS records systematically when using external DNS
Chapter 99. Updating DNS records systematically when using external DNS When using external DNS, Identity Management (IdM) does not update the DNS records automatically after a change in the topology. You can update the DNS records managed by an external DNS service systematically, which reduces the need for manual DNS updates. Updating DNS records removes old or invalid DNS records and adds new records. You must update DNS records after a change in your topology, for example: After installing or uninstalling a replica After installing a CA, DNS, KRA, or Active Directory trust on an IdM server 99.1. Updating external DNS records with GUI If you have made any changes to your topology, you must update the external DNS records by using the external DNS GUI. Procedure Display the records that you must update: Use the external DNS GUI to update the records. 99.2. Updating external DNS records using nsupdate You can update external DNS records using the nsupdate utility. You can also add the command to a script to automate the process. To update with the nsupdate utility, you need to generate a file with the DNS records, and then proceed with either sending an nsupdate request secured using TSIG, or sending an nsupdate request secured using GSS-TSIG. Procedure To generate a file with the DNS records for nsupdate, use the ipa dns-update-system-records --dry-run command with the --out option. The --out option specifies the path of the file to generate: The generated file contains the required DNS records in the format accepted by the nsupdate utility. The generated records rely on: Automatic detection of the zone in which the records are to be updated. Automatic detection of the zone's authoritative server. If you are using an atypical DNS setup or if zone delegations are missing, nsupdate might not be able to find the right zone and server. In this case, add the following options to the beginning of the generated file: server : specify the server name or port of the authoritative DNS server to which nsupdate sends the records. zone : specify the name of the zone where nsupdate places the records. Example 99.1. Generated record 99.3. Sending an nsupdate request secured using TSIG When sending a request using nsupdate , make sure you properly secure it. Transaction signature (TSIG) enables you to use nsupdate with a shared key. Prerequisites Your DNS server must be configured for TSIG. Both the DNS server and its client must have the shared key. Procedure Run the nsupdate command and provide the shared secret using one of these options: -k to provide the TSIG authentication key: -y to generate a signature from the name of the key and from the Base64-encoded shared secret: 99.4. Sending an nsupdate request secured using GSS-TSIG When sending a request using nsupdate , make sure you properly secure it. GSS-TSIG uses the GSS-API interface to obtain the secret TSIG key. GSS-TSIG is an extension to the TSIG protocol. Prerequisites Your DNS server must be configured for GSS-TSIG. Note This procedure assumes that the Kerberos V5 protocol is used as the technology for GSS-API. Procedure Authenticate with a principal allowed to update the records: Run nsupdate with the -g option to enable the GSS-TSIG mode: 99.5. Additional resources nsupdate(8) man page RFC 2845 describes the TSIG protocol RFC 3645 describes the GSS-TSIG algorithm
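Because the chapter notes that the update can be scripted, the record generation and the GSS-TSIG push can be wrapped in one small script. The sketch below reuses only commands shown in this chapter; the principal, realm, and output path are placeholders for your environment.

#!/bin/bash
# Sketch: regenerate the IdM system DNS records and push them with a
# GSS-TSIG-secured nsupdate. Placeholders must be replaced before use.
set -euo pipefail

RECORDS_FILE=/tmp/dns_records_file.nsupdate

# Authenticate as a principal allowed to update the records.
kinit principal_allowed_to_update_records@REALM

# Generate the nsupdate input file from the current IdM topology.
ipa dns-update-system-records --dry-run --out "${RECORDS_FILE}"

# Send the update secured with GSS-TSIG.
nsupdate -g "${RECORDS_FILE}"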
[ "ipa dns-update-system-records --dry-run IPA DNS records: _kerberos-master._tcp.example.com. 86400 IN SRV 0 100 88 ipa.example.com. _kerberos-master._udp.example.com. 86400 IN SRV 0 100 88 ipa.example.com. [... output truncated ...]", "ipa dns-update-system-records --dry-run --out dns_records_file.nsupdate IPA DNS records: _kerberos-master._tcp.example.com. 86400 IN SRV 0 100 88 ipa.example.com. _kerberos-master._udp.example.com. 86400 IN SRV 0 100 88 ipa.example.com. [... output truncated ...]", "cat dns_records_file.nsupdate zone example.com . server 192.0.2.1 ; IPA DNS records update delete _kerberos-master._tcp.example.com. SRV update add _kerberos-master._tcp.example.com. 86400 IN SRV 0 100 88 ipa.example.com. [... output truncated ...]", "nsupdate -k tsig_key.file dns_records_file.nsupdate", "nsupdate -y algorithm:keyname:secret dns_records_file.nsupdate", "kinit principal_allowed_to_update_records@REALM", "nsupdate -g dns_records_file.nsupdate" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/updating-dns-records-systematically-when-using-external-dns_configuring-and-managing-idm
Chapter 2. Authentication [operator.openshift.io/v1]
Chapter 2. Authentication [operator.openshift.io/v1] Description Authentication provides information to configure an operator to manage authentication. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 2.1.1. .spec Description Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 2.1.2. .status Description Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. oauthAPIServer object OAuthAPIServer holds status specific only to oauth-apiserver observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 2.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 2.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. 
Type object Property Type Description lastTransitionTime string message string reason string status string type string 2.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 2.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 2.1.7. .status.oauthAPIServer Description OAuthAPIServer holds status specific only to oauth-apiserver Type object Property Type Description latestAvailableRevision integer LatestAvailableRevision is the latest revision used as suffix of revisioned secrets like encryption-config. A new revision causes a new deployment of pods. 2.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/authentications DELETE : delete collection of Authentication GET : list objects of kind Authentication POST : create an Authentication /apis/operator.openshift.io/v1/authentications/{name} DELETE : delete an Authentication GET : read the specified Authentication PATCH : partially update the specified Authentication PUT : replace the specified Authentication /apis/operator.openshift.io/v1/authentications/{name}/status GET : read status of the specified Authentication PATCH : partially update status of the specified Authentication PUT : replace status of the specified Authentication 2.2.1. /apis/operator.openshift.io/v1/authentications Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Authentication Table 2.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Authentication Table 2.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.5. HTTP responses HTTP code Reponse body 200 - OK AuthenticationList schema 401 - Unauthorized Empty HTTP method POST Description create an Authentication Table 2.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.7. Body parameters Parameter Type Description body Authentication schema Table 2.8. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 202 - Accepted Authentication schema 401 - Unauthorized Empty 2.2.2. /apis/operator.openshift.io/v1/authentications/{name} Table 2.9. Global path parameters Parameter Type Description name string name of the Authentication Table 2.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Authentication Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.12. Body parameters Parameter Type Description body DeleteOptions schema Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Authentication Table 2.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.15. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Authentication Table 2.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.17. Body parameters Parameter Type Description body Patch schema Table 2.18. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Authentication Table 2.19. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body Authentication schema Table 2.21. HTTP responses HTTP code Response body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty 2.2.3. /apis/operator.openshift.io/v1/authentications/{name}/status Table 2.22. Global path parameters Parameter Type Description name string name of the Authentication Table 2.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Authentication Table 2.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.25. HTTP responses HTTP code Response body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Authentication Table 2.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.27. Body parameters Parameter Type Description body Patch schema Table 2.28. HTTP responses HTTP code Response body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Authentication Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.30. Body parameters Parameter Type Description body Authentication schema Table 2.31. HTTP responses HTTP code Response body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty
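As a concrete illustration of the GET, PATCH, and status operations documented above, the following oc commands are a minimal sketch. They assume the usual cluster-scoped singleton resource named cluster, an authenticated oc session with permission to edit operator resources, and that changing spec.logLevel is merely an example of a partial update; adjust the patch to your own needs.
# Read the specified Authentication
oc get authentication.operator.openshift.io cluster -o yaml
# Partially update it with a merge patch; --dry-run=server corresponds to the dryRun=All query parameter
oc patch authentication.operator.openshift.io cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}' --dry-run=server
# Read only the status subresource through the raw API path shown in this section
oc get --raw /apis/operator.openshift.io/v1/authentications/cluster/status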
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/operator_apis/authentication-operator-openshift-io-v1
4.20. IBM BladeCenter over SNMP
4.20. IBM BladeCenter over SNMP Table 4.21, "IBM BladeCenter SNMP" lists the fence device parameters used by fence_ibmblade , the fence agent for IBM BladeCenter over SNMP. Table 4.21. IBM BladeCenter SNMP luci Field cluster.conf Attribute Description Name name A name for the IBM BladeCenter SNMP device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport UDP/TCP port to use for connections with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string. SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP privacy protocol password snmp_priv_passwd The SNMP Privacy Protocol Password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Figure 4.15, "IBM BladeCenter SNMP" shows the configuration screen for adding an IBM BladeCenter SNMP fence device. Figure 4.15. IBM BladeCenter SNMP The following command creates a fence device instance for an IBM BladeCenter SNMP device: The following is the cluster.conf entry for the fence_ibmblade device:
[ "ccs -f cluster.conf --addfencedev bladesnmp1 agent=fence_ibmblade community=private ipaddr=192.168.0.1 login=root passwd=password123 snmp_priv_passwd=snmpasswd123 power_wait=60", "<fencedevices> <fencedevice agent=\"fence_ibmblade\" community=\"private\" ipaddr=\"192.168.0.1\" login=\"root\" name=\"bladesnmp1\" passwd=\"password123\" power_wait=\"60\" snmp_priv_passwd=\"snmpasswd123\" udpport=\"161\"/> </fencedevices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-bladectr-snmp-ca
Chapter 3. Configuring and Setting Up Remote Jobs
Chapter 3. Configuring and Setting Up Remote Jobs Use this section as a guide to configuring Satellite to execute jobs on remote hosts. Any command that you want to apply to a remote host must be defined as a job template. After you have defined a job template you can execute it multiple times. 3.1. About Running Jobs on Hosts You can run jobs on hosts remotely from Capsules using shell scripts or Ansible tasks and playbooks. This is referred to as remote execution. For custom Ansible roles that you create, or roles that you download, you must install the package containing the roles on the Capsule base operating system. Before you can use Ansible roles, you must import the roles into Satellite from the Capsule where they are installed. Communication occurs through Capsule Server, which means that Satellite Server does not require direct access to the target host, and can scale to manage many hosts. Remote execution uses the SSH service that must be enabled and running on the target host. Ensure that the remote execution Capsule has access to port 22 on the target hosts. Satellite uses ERB syntax job templates. For more information, see Template Writing Reference in the Managing Hosts guide. Several job templates for shell scripts and Ansible are included by default. For more information, see Setting up Job Templates . Note Any Capsule Server base operating system is a client of Satellite Server's internal Capsule, and therefore this section applies to any type of host connected to Satellite Server, including Capsules. You can run jobs on multiple hosts at once, and you can use variables in your commands for more granular control over the jobs you run. You can use host facts and parameters to populate the variable values. In addition, you can specify custom values for templates when you run the command. For more information, see Executing a Remote Job . 3.2. Remote Execution Workflow When you run a remote job on hosts, for every host, Satellite performs the following actions to find a remote execution Capsule to use. Satellite searches only for Capsules that have the Ansible feature enabled. Satellite finds the host's interfaces that have the Remote execution checkbox selected. Satellite finds the subnets of these interfaces. Satellite finds remote execution Capsules assigned to these subnets. From this set of Capsules, Satellite selects the Capsule that has the least number of running jobs. By doing this, Satellite ensures that the jobs load is balanced between remote execution Capsules. If you have enabled Prefer registered through Capsule for remote execution , Satellite runs the REX job using the Capsule the host is registered to. By default, Prefer registered through Capsule for remote execution is set to No . To enable it, in the Satellite web UI, navigate to Administer > Settings , and on the Content tab, set Prefer registered through Capsule for remote execution to Yes . This ensures that Satellite performs REX jobs on hosts by the Capsule to which they are registered to. If Satellite does not find a remote execution Capsule at this stage, and if the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. 
Satellite selects the most lightly loaded Capsule from the following types of Capsules that are assigned to the host: DHCP, DNS and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule If Satellite does not find a remote execution Capsule at this stage, and if the Enable Global Capsule setting is enabled, Satellite selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. 3.3. Permissions for Remote Execution You can control which roles can run which jobs within your infrastructure, including which hosts they can target. The remote execution feature provides two built-in roles: Remote Execution Manager : Can access all remote execution features and functionality. Remote Execution User : Can only run jobs. You can clone the Remote Execution User role and customize its filter for increased granularity. If you adjust the filter with the view_job_templates permission on a customized role, you can only see and trigger jobs based on matching job templates. You can use the view_hosts and view_smart_proxies permissions to limit which hosts or Capsules are visible to the role. The execute_template_invocation permission is a special permission that is checked immediately before execution of a job begins. This permission defines which job template you can run on a particular host. This allows for even more granularity when specifying permissions. You can run remote execution jobs against Red Hat Satellite and Capsule registered as hosts to Red Hat Satellite with the execute_jobs_on_infrastructure_hosts permission. Standard Manager and Site Manager roles have this permission by default. If you use either the Manager or Site Manager role, or if you use a custom role with the execute_jobs_on_infrastructure_hosts permission, you can execute remote jobs against registered Red Hat Satellite and Capsule hosts. For more information on working with roles and permissions, see Creating and Managing Roles in the Administering Red Hat Satellite guide. The following example shows filters for the execute_template_invocation permission: Use the first line in this example to apply the Reboot template to one selected host. Use the second line to define a pool of hosts with names ending with .staging.example.com . Use the third line to bind the template with a host group. Note Permissions assigned to users with these roles can change over time. If you have already scheduled some jobs to run in the future, and the permissions change, this can result in execution failure because permissions are checked immediately before job execution. 3.4. Creating a Job Template Use this procedure to create a job template. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Job templates . Click New Job Template . Click the Template tab, and in the Name field, enter a unique name for your job template. Select Default to make the template available for all organizations and locations. Create the template directly in the template editor or upload it from a text file by clicking Import . Optional: In the Audit Comment field, add information about the change. Click the Job tab, and in the Job category field, enter your own category or select from the default categories listed in Default Job Template Categories . 
Optional: In the Description Format field, enter a description template. For example, Install package %{package_name} . You can also use %{template_name} and %{job_category} in your template. From the Provider Type list, select SSH for shell scripts and Ansible for Ansible tasks or playbooks. Optional: In the Timeout to kill field, enter a timeout value to terminate the job if it does not complete. Optional: Click Add Input to define an input parameter. Parameters are requested when executing the job and do not have to be defined in the template. For examples, see the Help tab. Optional: Click Foreign input set to include other templates in this job. Optional: In the Effective user area, configure a user if the command cannot use the default remote_execution_effective_user setting. Optional: If this template is a snippet to be included in other templates, click the Type tab and select Snippet . Click the Location tab and add the locations where you want to use the template. Click the Organizations tab and add the organizations where you want to use the template. Click Submit to save your changes. You can extend and customize job templates by including other templates in the template syntax. For more information, see the appendices in the Managing Hosts guide. CLI procedure To create a job template using a template-definition file, enter the following command: 3.5. Configuring the Fallback to Any Capsule Remote Execution Setting in Satellite You can enable the Fallback to Any Capsule setting to configure Satellite to search for remote execution Capsules from the list of Capsules that are assigned to hosts. This can be useful if you need to run remote jobs on hosts that have no subnets configured or if the hosts' subnets are assigned to Capsules that do not have the remote execution feature enabled. If the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded Capsule from the set of all Capsules assigned to the host, such as the following: DHCP, DNS and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Fallback to Any Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Fallback to Any Capsule setting. For example, to set the value to true , enter the following command: 3.6. Configuring the Global Capsule Remote Execution Setting in Satellite By default, Satellite searches for remote execution Capsules in hosts' organizations and locations regardless of whether Capsules are assigned to hosts' subnets or not. You can disable the Enable Global Capsule setting if you want to limit the search to the Capsules that are assigned to hosts' subnets. If the Enable Global Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Enable Global Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Enable Global Capsule setting. 
For example, to set the value to true , enter the following command: 3.7. Configuring Satellite to Use an Alternative Directory to Execute Remote Jobs on Hosts Ansible puts the files it requires into the $HOME/.ansible/tmp directory, where $HOME is the home directory of the remote user. You have the option to set a different directory if required. Procedure Create a new directory, for example new_place : Copy the SELinux context from the default var directory: Configure the system: 3.8. Distributing SSH Keys for Remote Execution To use SSH keys for authenticating remote execution connections, you must distribute the public SSH key from Capsule to its attached hosts that you want to manage. Ensure that the SSH service is enabled and running on the hosts. Configure any network or host-based firewalls to enable access to port 22. Use one of the following methods to distribute the public SSH key from Capsule to target hosts: Section 3.9, "Distributing SSH Keys for Remote Execution Manually" . Section 3.10, "Using the Satellite API to Obtain SSH Keys for Remote Execution" . Section 3.11, "Configuring a Kickstart Template to Distribute SSH Keys during Provisioning" . For new Satellite hosts, you can deploy SSH keys to Satellite hosts during registration using the global registration template. For more information, see Registering a Host to Red Hat Satellite Using the Global Registration Template . Satellite distributes SSH keys for the remote execution feature to the hosts provisioned from Satellite by default. If the hosts are running on Amazon Web Services, enable password authentication. For more information, see https://aws.amazon.com/premiumsupport/knowledge-center/new-user-accounts-linux-instance . 3.9. Distributing SSH Keys for Remote Execution Manually To distribute SSH keys manually, complete the following steps: Procedure Enter the following command on Capsule. Repeat for each target host you want to manage: To confirm that the key was successfully copied to the target host, enter the following command on Capsule: 3.10. Using the Satellite API to Obtain SSH Keys for Remote Execution To use the Satellite API to download the public key from Capsule, complete this procedure on each target host. Procedure On the target host, create the ~/.ssh directory to store the SSH key: Download the SSH key from Capsule: Configure permissions for the ~/.ssh directory: Configure permissions for the authorized_keys file: 3.11. Configuring a Kickstart Template to Distribute SSH Keys during Provisioning You can add a remote_execution_ssh_keys snippet to your custom kickstart template to deploy SSH Keys to hosts during provisioning. Kickstart templates that Satellite ships include this snippet by default. Therefore, Satellite copies the SSH key for remote execution to the systems during provisioning. Procedure To include the public key in newly-provisioned hosts, add the following snippet to the Kickstart template that you use: 3.12. Configuring a keytab for Kerberos Ticket Granting Tickets Use this procedure to configure Satellite to use a keytab to obtain Kerberos ticket granting tickets. If you do not set up a keytab, you must manually retrieve tickets. Procedure Find the ID of the foreman-proxy user: Modify the umask value so that new files have the permissions 600 : Create the directory for the keytab: Create a keytab or copy an existing keytab to the directory: Change the directory owner to the foreman-proxy user: Ensure that the keytab file is read-only: Restore the SELinux context: 3.13. 
Configuring Kerberos Authentication for Remote Execution You can use Kerberos authentication to establish an SSH connection for remote execution on Satellite hosts. Prerequisites Enroll Satellite Server on the Kerberos server Enroll the Satellite target host on the Kerberos server Configure and initialize a Kerberos user account for remote execution Ensure that the foreman-proxy user on Satellite has a valid Kerberos ticket granting ticket Procedure To install and enable Kerberos authentication for remote execution, enter the following command: To edit the default user for remote execution, in the Satellite web UI, navigate to Administer > Settings and click the Remote Execution tab. In the SSH User row, edit the second column and add the user name for the Kerberos account. Navigate to remote_execution_effective_user and edit the second column to add the user name for the Kerberos account. To confirm that Kerberos authentication is ready to use, run a remote job on the host. 3.14. Setting up Job Templates Satellite provides default job templates that you can use for executing jobs. To view the list of job templates, navigate to Hosts > Job templates . If you want to use a template without making changes, proceed to Executing a Remote Job . You can use default templates as a base for developing your own. Default job templates are locked for editing. Clone the template and edit the clone. Procedure To clone a template, in the Actions column, select Clone . Enter a unique name for the clone and click Submit to save the changes. Job templates use the Embedded Ruby (ERB) syntax. For more information about writing templates, see the Template Writing Reference in the Managing Hosts guide. Ansible Considerations To create an Ansible job template, use the following procedure and instead of ERB syntax, use YAML syntax. Begin the template with --- . You can embed an Ansible playbook YAML file into the job template body. You can also add ERB syntax to customize your YAML Ansible template. You can also import Ansible playbooks in Satellite. For more information, see Synchronizing Repository Templates in the Managing Hosts guide. Parameter Variables At run time, job templates can accept parameter variables that you define for a host. Note that only the parameters visible on the Parameters tab at the host's edit page can be used as input parameters for job templates. If you do not want your Ansible job template to accept parameter variables at run time, in the Satellite web UI, navigate to Administer > Settings and click the Ansible tab. In the Top level Ansible variables row, change the Value parameter to No . 3.15. Executing a Remote Job You can execute a job that is based on a job template against one or more hosts. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the target hosts on which you want to execute a remote job. You can use the search field to filter the host list. From the Select Action list, select Schedule Remote Job . On the Job invocation page, define the main job settings: Select the Job category and the Job template you want to use. Optional: Select a stored search string in the Bookmark list to specify the target hosts. Optional: Further limit the targeted hosts by entering a Search query . The Resolves to line displays the number of hosts affected by your query. Use the refresh button to recalculate the number after changing the query. The preview icon lists the targeted hosts. 
The remaining settings depend on the selected job template. See Creating a Job Template for information on adding custom parameters to a template. Optional: To configure advanced settings for the job, click Display advanced fields . Some of the advanced settings depend on the job template, the following settings are general: Effective user defines the user for executing the job, by default it is the SSH user. Concurrency level defines the maximum number of jobs executed at once, which can prevent overload of systems' resources in a case of executing the job on a large number of hosts. Timeout to kill defines time interval in seconds after which the job should be killed, if it is not finished already. A task which could not be started during the defined interval, for example, if the task took too long to finish, is canceled. Type of query defines when the search query is evaluated. This helps to keep the query up to date for scheduled tasks. Execution ordering determines the order in which the job is executed on hosts: alphabetical or randomized. Concurrency level and Timeout to kill settings enable you to tailor job execution to fit your infrastructure hardware and needs. To run the job immediately, ensure that Schedule is set to Execute now . You can also define a one-time future job, or set up a recurring job. For recurring tasks, you can define start and end dates, number and frequency of runs. You can also use cron syntax to define repetition. For more information about cron, see the Automating System Tasks section of the Red Hat Enterprise Linux 7 System Administrator's Guide . Click Submit . This displays the Job Overview page, and when the job completes, also displays the status of the job. CLI procedure Enter the following command on Satellite: To execute a remote job with custom parameters, complete the following steps: Find the ID of the job template you want to use: Show the template details to see parameters required by your template: Execute a remote job with custom parameters: Replace query with the filter expression that defines hosts, for example "name ~ rex01" . For more information about executing remote commands with hammer, enter hammer job-template --help and hammer job-invocation --help . 3.16. Scheduling a Recurring Ansible Job for a Host You can schedule a recurring job to run Ansible roles on hosts. Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the target host on which you want to execute a remote job. On the Ansible tab, select Jobs . Click Schedule recurring job . Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . Optional: View the scheduled Ansible job in host overview or by navigating to Ansible > Jobs . 3.17. Scheduling a Recurring Ansible Job for a Host Group You can schedule a recurring job to run Ansible roles on host groups. Procedure In the Satellite web UI, navigate to Configure > Host groups . In the Actions column, select Configure Ansible Job for the host group you want to schedule an Ansible roles run for. Click Schedule recurring job . Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . 3.18. Monitoring Jobs You can monitor the progress of the job while it is running. This can help in any troubleshooting that may be required. Ansible jobs run on batches of 100 hosts, so you cannot cancel a job running on a specific host. 
A job completes only after the Ansible playbook runs on all hosts in the batch. Procedure In the Satellite web UI, navigate to Monitor > Jobs . This page is automatically displayed if you triggered the job with the Execute now setting. To monitor scheduled jobs, navigate to Monitor > Jobs and select the job run you wish to inspect. On the Job page, click the Hosts tab. This displays the list of hosts on which the job is running. In the Host column, click the name of the host that you want to inspect. This displays the Detail of Commands page where you can monitor the job execution in real time. Click Back to Job at any time to return to the Job Details page. CLI procedure To monitor the progress of a job while it is running, complete the following steps: Find the ID of a job: Monitor the job output: Optional: to cancel a job, enter the following command:
[ "name = Reboot and host.name = staging.example.com name = Reboot and host.name ~ *.staging.example.com name = \"Restart service\" and host_group.name = webservers", "# hammer job-template create --file \" path_to_template_file \" --name \" template_name \" --provider-type SSH --job-category \" category_name \"", "hammer settings set --name=remote_execution_fallback_proxy --value=true", "hammer settings set --name=remote_execution_global_proxy --value=true", "mkdir / remote_working_dir", "chcon --reference=/var /remote_working_dir", "satellite-installer --foreman-proxy-plugin-ansible-working-dir /remote_working_dir", "ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub [email protected]", "ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy [email protected]", "mkdir ~/.ssh", "curl https:// capsule.example.com :9090/ssh/pubkey >> ~/.ssh/authorized_keys", "chmod 700 ~/.ssh", "chmod 600 ~/.ssh/authorized_keys", "<%= snippet 'remote_execution_ssh_keys' %>", "id -u foreman-proxy", "umask 077", "mkdir -p \"/var/kerberos/krb5/user/ USER_ID \"", "cp your_client.keytab /var/kerberos/krb5/user/ USER_ID /client.keytab", "chown -R foreman-proxy:foreman-proxy \"/var/kerberos/krb5/user/ USER_ID \"", "chmod -wx \"/var/kerberos/krb5/user/ USER_ID /client.keytab\"", "restorecon -RvF /var/kerberos/krb5", "satellite-installer --scenario satellite --foreman-proxy-plugin-remote-execution-ssh-ssh-kerberos-auth true", "hammer settings set --name=remote_execution_global_proxy --value=false", "hammer job-template list", "hammer job-template info --id template_ID", "# hammer job-invocation create --job-template \" template_name \" --inputs key1 =\" value \", key2 =\" value \",... --search-query \" query \"", "# hammer job-invocation list", "# hammer job-invocation output --id job_ID --host host_name", "# hammer job-invocation cancel --id job_ID" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/configuring_red_hat_satellite_to_use_ansible/Configuring_and_Setting_Up_Remote_Jobs_ansible
2.5.5. Setting Up NFS Over GFS2
2.5.5. Setting Up NFS Over GFS2 Due to the added complexity of the GFS2 locking subsystem and its clustered nature, setting up NFS over GFS2 requires taking many precautions and careful configuration. This section describes the caveats you should take into account when configuring an NFS service over a GFS2 file system. Warning If the GFS2 file system is NFS exported, and NFS client applications use POSIX locks, then you must mount the file system with the localflocks option. The intended effect of this is to force POSIX locks from each server to be local: that is, non-clustered, independent of each other. (A number of problems exist if GFS2 attempts to implement POSIX locks from NFS across the nodes of a cluster.) For applications running on NFS clients, localized POSIX locks means that two clients can hold the same lock concurrently if the two clients are mounting from different servers. If all clients mount NFS from one server, then the problem of separate servers granting the same locks independently goes away. If you are not sure whether to mount your file system with the localflocks option, you should not use the option; it is always safer to have the locks working on a clustered basis. In addition to the locking considerations, you should take the following into account when configuring an NFS service over a GFS2 file system. Red Hat supports only Red Hat High Availability Add-On configurations using NFSv3 with locking in an active/passive configuration with the following characteristics: The back-end file system is a GFS2 file system running on a 2 to 16 node cluster. An NFSv3 server is defined as a service exporting the entire GFS2 file system from a single cluster node at a time. The NFS server can fail over from one cluster node to another (active/passive configuration). No access to the GFS2 file system is allowed except through the NFS server. This includes both local GFS2 file system access as well as access through Samba or Clustered Samba. There is no NFS quota support on the system. This configuration provides HA for the file system and reduces system downtime since a failed node does not result in the requirement to execute the fsck command when failing the NFS server from one node to another. The fsid= NFS option is mandatory for NFS exports of GFS2. If problems arise with your cluster (for example, the cluster becomes inquorate and fencing is not successful), the clustered logical volumes and the GFS2 file system will be frozen and no access is possible until the cluster is quorate. You should consider this possibility when determining whether a simple failover solution such as the one defined in this procedure is the most appropriate for your system.
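The two configuration points that this section emphasizes, the localflocks mount option and the mandatory fsid= export option, look like the following when written out as plain commands. This is a hedged sketch for illustration only: in a real High Availability Add-On deployment the GFS2 mount and the NFS export are defined as cluster service resources rather than run by hand, and the device path, mount point, and fsid value are placeholders.
# Mount the GFS2 file system with POSIX locks kept local to this node
mount -o localflocks /dev/myvg/gfs2lv /mnt/gfs2data
# Export it over NFSv3 with an explicit fsid, as required for GFS2 exports
echo '/mnt/gfs2data *(rw,sync,fsid=25)' >> /etc/exports
exportfs -r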
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s2-nfs-gfs-issues
Chapter 3. Feature enhancements
Chapter 3. Feature enhancements Cryostat 2.1 includes feature enhancements that build upon the Cryostat 2.1 offerings. Archives view The Cryostat 2.1 web console includes an Archives menu item. After you select this menu item, an Archives Recording table displays on your console. This table improves upon the Cryostat 2.0 table in that it adheres to a split view and uses a GraphQL query to populate table data. The Archived Recordings table on the Archives menu item is different from the Archived Recordings table that displays on the Recordings menu in that it displays archives for all target JVMs. Figure 3.1. Archives view on the Cryostat web console cert-manager API Cryostat 2.1 supports version 1.5.3 , so that the Cryostat Operator now uses the cert-manager API to set TLS certificates for a target JVM. See, v1.5.3 ( cert-manager ) Create Target dialog box Cryostat 2.1 disables the Create button on the Create Target dialog box until you enter a value in the Connection URL field. Additionally, the Connection URL field includes an example of a JMX Service URL that you can refer to when you need to enter a valid URL in the field. Figure 3.2. The Create Target dialog box in the Dashboard menu item Cryostat Operator service customization The Cryostat Operator now includes a spec.serviceOptions property in its YAML configuration file, so that you can change the following service options for the operator: Annotations Labels Port numbers Service type After you make changes to the default service option values, the Cryostat Operator creates services for the following components: Cryostat application Grafana application Report generator microservice Cryostat Operator details The Cryostat Operator details page on the OpenShift Container Platform (OCP) web console includes the following enhancements: An updated name reference for the Cryostat Operator. Before the Cryostat 2.1 release, the Cryostat application and Cryostat Operator were named similarly on OCP. A link to the Cryostat website Figure 3.3. New enhancements on the Cryostat Operator details page Download file behavior of Cryostat When you downloaded a file from your Cryostat 2.0 web console, such as selecting the Download Recording item from the Active Recordings overflow menu, you would need to complete the following steps: Download the remote file into your default web browser's memory. Create a local object URL for the blob file item. This behavior voided the purpose of the Cancel option from your web browser's downloads menu. This might be problematic if you wanted to cancel the download operation of the JFR binary file. Cryostat 2.1 relies on the HTML 5 download attribute that is available with your web browser to manage a file download. This attribute reads the anchor element from the href attribute and then instructs your web browser to download the file. This download operation decreases the time it takes for your web browser to display the Save File menu, so that you can choose to cancel the download operation before saving the file to your local system. File upload functionality During a large file upload operation on Cryostat 2.1, such as re-uploading an archived recording from the Re-Upload Archived Recordings dialog box, you can click the Cancel button to stop the file upload operation. An Upload in Progress dialog box displays, where you must choose to proceed with the cancel operation. Figure 3.4. 
Cancel button on the Re-Upload Archived Recordings dialog box After the cancel operation completes, your web browser displays the size of the JFR file that was not transferred to your Cryostat application. jfr-datasource container Cryostat 2.0 contained an issue where the Cryostat web console would not display the version number on the About page. Cryostat 2.1 fixes this issue by changing the codebase to ensure that the version number displays on this page, regardless of the jfr-datasource or grafana dashboard configuration settings. Figure 3.5. About page on the Cryostat web console Netty performance regression Handler implementations that use the Vert.x BodyHandler class no longer experience the performance issues that were evident in Cryostat 2.0, such as accepting file uploads where a standard HTTP form upload was expected by a handler. These file uploads could lead to resource constraints for Vert.x, because the handlers might permanently store such files in the temporary file-upload storage location on Vert.x. Additionally, Netty's parsing of POST form bodies could lead to higher than expected memory usage while processing API requests. Cryostat 2.1 uses Vert.x version 3.9.9 , which includes an upgrade to Netty, version 4.1.67 . This upgrade improves both the speed and logic on how handlers upload files to Vert.x. Red Hat OpenShift cluster connections with external JVMs Cryostat 2.0 had a known issue with connecting a Red Hat OpenShift cluster with JVMs located on nodes other than the one running Cryostat. Cryostat 2.1 resolves this issue with the new CRYOSTAT_ENABLE_JDP_BROADCAST environment variable, which is set to false by default. The default configuration of this environment variable disables the Java Discovery Protocol (JDP) on Red Hat OpenShift, so that Cryostat 2.1 can now connect to JVMs that are located on any node. See, Known issues (Cryostat 2.0) RecordingPostHandler behavior change Cryostat 2.1 enhances the RecordingPostHandler implementation so that it now sequentially parses a JFR binary. The implementation in Cryostat 2.0 parsed data and then constructed a list of events. The new implementation has the following advantages: Provides a simpler method Requires fewer resources to run Validates uploaded data much faster than the previous behavior Security menu item After you select the Security menu item on your Cryostat 2.1 instance, you can access the Store JMX Credentials tile. Figure 3.6. Store JMX Credentials on the Security menu item The Store JMX Credentials tile provides a convenient way to easily view any target JVMs that have stored JMX credentials. Additionally, on this tile item, you can add stored credentials to a specific target JVM. For a target JVM that requires JMX authentication, you must provide your username and password when prompted. Cryostat can use stored credentials when attempting to open JMX connections to a target JVM. setCachedTargetSelect implementation Before the Cryostat 2.1 release, when you logged into your Cryostat web console and navigated to the Dashboard , the JVM you selected in your previous session would display as the default value under the Target JVM drop-down list. This would occur even if Cryostat could no longer connect to this JVM. Cryostat 2.1 resolves this issue by refreshing the list of target JVMs at the start of each new session and then listing only JVMs to which it can establish a connection. You can configure the refresh period for your Cryostat web console by navigating to Settings > Auto-Refresh. 
In the provided field, you can specify a value in seconds, minutes, or hours. You must select the Enable checkbox to complete the configuration. Username on GUI masthead Cryostat 2.1 fetches a username from the /v2.1/auth endpoint, so it can display a username in the Cryostat web console masthead. In Cryostat 2.0, you could only view your username when you start a Cryostat instance in basic authentication mode. Figure 3.7. Username displaying on the masthead of the Cryostat web console WebSocket API Cryostat 2.1 updates its WebSocket API to support unlimited WebSocket client connections. Before this release, the WebSocket API could only support a maximum of 64 client connections. For Cryostat 2.1, the WebSocket API can now automatically receive information about actions performed by an unlimited number of connected clients that are using the same one-way push Notification Channel (NC) channel.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.1/cryostat-feature-enhancements-2-1_cryostat
Appendix E. S3 unsupported header fields
Appendix E. S3 unsupported header fields Table E.1. Unsupported Header Fields Name Type x-amz-security-token Request Server Response x-amz-delete-marker Response x-amz-id-2 Response x-amz-request-id Response x-amz-version-id Response
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/developer_guide/s3-unsupported-header-fields_dev
Chapter 194. Krati Component (deprecated)
Chapter 194. Krati Component (deprecated) Available as of Camel version 2.9 This component allows the use of Krati datastores and datasets inside Camel. Krati is a simple persistent data store with very low latency and high throughput. It is designed for easy integration with read-write-intensive applications with little effort in tuning configuration, performance and JVM garbage collection. Camel provides a producer and consumer for the Krati datastore (key/value engine). It also provides an idempotent repository for filtering out duplicate messages. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-krati</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 194.1. URI format krati:[the path of the datastore][?options] The path of the datastore is the relative path of the folder that krati will use for its datastore. You can append query options to the URI in the following format, ?option=value&option=value&... 194.2. Krati Options The Krati component has no options. The Krati endpoint is configured using URI syntax: with the following path and query parameters: 194.2.1. Path Parameters (1 parameters): Name Description Default Type path Required Path of the datastore is the relative path of the folder that krati will use for its datastore. String 194.2.2. Query Parameters (29 parameters): Name Description Default Type hashFunction (common) The hash function to use. HashFunction initialCapacity (common) The initial capacity of the store. 100 int keySerializer (common) The serializer that will be used to serialize the key. Serializer segmentFactory (common) Sets the segment factory of the target store. SegmentFactory segmentFileSize (common) Data store segments size in MB. 64 int valueSerializer (common) The serializer that will be used to serialize the value. Serializer bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean maxMessagesPerPoll (consumer) The maximum number of messages which can be received in one poll. This can be used to avoid reading in too much data and taking up too much memory. int sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy key (producer) The key. 
String operation (producer) Specifies the type of operation that will be performed to the datastore. String value (producer) The Value. String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 194.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.krati.enabled Enable krati component true Boolean camel.component.krati.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean krati:/tmp/krati?operation=CamelKratiGet&initialCapacity=10000&keySerializer=#myCustomSerializer For producer endpoint you can override all of the above URI options by passing the appropriate headers to the message. 194.3.1. Message Headers for datastore Header Description CamelKratiOperation The operation to be performed on the datastore. The valid options are CamelKratiAdd, CamelKratiGet, CamelKratiDelete, CamelKratiDeleteAll CamelKratiKey The key. CamelKratiValue The value. 194.4. Usage Samples 194.4.1. Example 1: Putting to the datastore. This example will show you how you can store any message inside a datastore. 
from("direct:put").to("krati:target/test/producertest"); In the above example you can override any of the URI parameters with headers on the message. Here is how the above example would look like using xml to define our route. <route> <from uri="direct:put"/> <to uri="krati:target/test/producerspringtest"/> </route> 194.4.2. Example 2: Getting/Reading from a datastore This example will show you how you can read the contnet of a datastore. from("direct:get") .setHeader(KratiConstants.KRATI_OPERATION, constant(KratiConstants.KRATI_OPERATION_GET)) .to("krati:target/test/producertest"); In the above example you can override any of the URI parameters with headers on the message. Here is how the above example would look like using xml to define our route. <route> <from uri="direct:get"/> <to uri="krati:target/test/producerspringtest?operation=CamelKratiGet"/> </route> 194.4.3. Example 3: Consuming from a datastore This example will consume all items that are under the specified datastore. from("krati:target/test/consumertest") .to("direct:"); You can achieve the same goal by using xml, as you can see below. <route> <from uri="krati:target/test/consumerspringtest"/> <to uri="mock:results"/> </route> 194.5. Idempotent Repository As already mentioned this component also offers and idemptonet repository which can be used for filtering out duplicate messages. from("direct://in").idempotentConsumer(header("messageId"), new KratiIdempotentRepositroy("/tmp/idempotent").to("log://out"); 194.5.1. See also Krati Website
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-krati</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "krati:[the path of the datastore][?options]", "krati:path", "krati:/tmp/krati?operation=CamelKratiGet&initialCapacity=10000&keySerializer=#myCustomSerializer", "from(\"direct:put\").to(\"krati:target/test/producertest\");", "<route> <from uri=\"direct:put\"/> <to uri=\"krati:target/test/producerspringtest\"/> </route>", "from(\"direct:get\") .setHeader(KratiConstants.KRATI_OPERATION, constant(KratiConstants.KRATI_OPERATION_GET)) .to(\"krati:target/test/producertest\");", "<route> <from uri=\"direct:get\"/> <to uri=\"krati:target/test/producerspringtest?operation=CamelKratiGet\"/> </route>", "from(\"krati:target/test/consumertest\") .to(\"direct:next\");", "<route> <from uri=\"krati:target/test/consumerspringtest\"/> <to uri=\"mock:results\"/> </route>", "from(\"direct://in\").idempotentConsumer(header(\"messageId\"), new KratiIdempotentRepositroy(\"/tmp/idempotent\").to(\"log://out\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/krati-component
Chapter 6. Installation configuration parameters for bare metal
Chapter 6. Installation configuration parameters for bare metal Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. 6.1. Available installation configuration parameters for bare metal The following tables specify the required, optional, and bare metal-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.2. Network parameters Parameter Description Values The configuration for the cluster network. 
Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 Required if you use networking.clusterNetwork . An IP address block. If you use the OVN-Kubernetes network plugin, you can specify IPv4 and IPv6 networks. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The prefix length for an IPv6 block is between 0 and 128 . For example, 10.128.0.0/14 or fd01::/48 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. For an IPv4 network the default value is 23 . For an IPv6 network the default value is 64 . The default value is also the minimum value for IPv6. The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. If you use the OVN-Kubernetes network plugin, you can specify an IP address block for both of the IPv4 and IPv6 address families. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 or fd00::/48 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. 
String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). 
[1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content.
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_bare_metal/installation-config-parameters-bare-metal
Chapter 7. User Storage SPI
Chapter 7. User Storage SPI You can use the User Storage SPI to write extensions to Red Hat build of Keycloak to connect to external user databases and credential stores. The built-in LDAP and ActiveDirectory support is an implementation of this SPI in action. Out of the box, Red Hat build of Keycloak uses its local database to create, update, and look up users and validate credentials. Often though, organizations have existing external proprietary user databases that they cannot migrate to Red Hat build of Keycloak's data model. For those situations, application developers can write implementations of the User Storage SPI to bridge the external user store and the internal user object model that Red Hat build of Keycloak uses to log in users and manage them. When the Red Hat build of Keycloak runtime needs to look up a user, such as when a user is logging in, it performs a number of steps to locate the user. It first looks to see if the user is in the user cache; if the user is found it uses that in-memory representation. Then it looks for the user within the Red Hat build of Keycloak local database. If the user is not found, it then loops through User Storage SPI provider implementations to perform the user query until one of them returns the user the runtime is looking for. The provider queries the external user store for the user and maps the external data representation of the user to Red Hat build of Keycloak's user metamodel. User Storage SPI provider implementations can also perform complex criteria queries, perform CRUD operations on users, validate and manage credentials, or perform bulk updates of many users at once. It depends on the capabilities of the external store. User Storage SPI provider implementations are packaged and deployed similarly to (and often are) Jakarta EE components. They are not enabled by default, but instead must be enabled and configured per realm under the User Federation tab in the administration console. Warning If your user provider implementation is using some user attributes as the metadata attributes for linking/establishing the user identity, then please make sure that users are not able to edit the attributes and the corresponding attributes are read-only. The example is the LDAP_ID attribute, which the built-in Red Hat build of Keycloak LDAP provider is using for to store the ID of the user on the LDAP server side. See the details in the Threat model mitigation chapter . There are two sample projects in Red Hat build of Keycloak Quickstarts Repository . Each quickstart has a README file with instructions on how to build, deploy, and test the sample project. The following table provides a brief description of the available User Storage SPI quickstarts: Table 7.1. User Storage SPI Quickstarts Name Description user-storage-jpa Demonstrates implementing a user storage provider using JPA. user-storage-simple Demonstrates implementing a user storage provider using a simple properties file that contains username/password key pairs. 7.1. Provider interfaces When building an implementation of the User Storage SPI you have to define a provider class and a provider factory. Provider class instances are created per transaction by provider factories. Provider classes do all the heavy lifting of user lookup and other user operations. They must implement the org.keycloak.storage.UserStorageProvider interface. package org.keycloak.storage; public interface UserStorageProvider extends Provider { /** * Callback when a realm is removed. 
Implement this if, for example, you want to do some * cleanup in your user storage when a realm is removed * * @param realm */ default void preRemove(RealmModel realm) { } /** * Callback when a group is removed. Allows you to do things like remove a user * group mapping in your external store if appropriate * * @param realm * @param group */ default void preRemove(RealmModel realm, GroupModel group) { } /** * Callback when a role is removed. Allows you to do things like remove a user * role mapping in your external store if appropriate * @param realm * @param role */ default void preRemove(RealmModel realm, RoleModel role) { } } You may be thinking that the UserStorageProvider interface is pretty sparse? You'll see later in this chapter that there are other mix-in interfaces your provider class may implement to support the meat of user integration. UserStorageProvider instances are created once per transaction. When the transaction is complete, the UserStorageProvider.close() method is invoked and the instance is then garbage collected. Instances are created by provider factories. Provider factories implement the org.keycloak.storage.UserStorageProviderFactory interface. package org.keycloak.storage; /** * @author <a href="mailto:[email protected]">Bill Burke</a> * @version USDRevision: 1 USD */ public interface UserStorageProviderFactory<T extends UserStorageProvider> extends ComponentFactory<T, UserStorageProvider> { /** * This is the name of the provider and will be shown in the admin console as an option. * * @return */ @Override String getId(); /** * called per Keycloak transaction. * * @param session * @param model * @return */ T create(KeycloakSession session, ComponentModel model); ... } Provider factory classes must specify the concrete provider class as a template parameter when implementing the UserStorageProviderFactory . This is a must as the runtime will introspect this class to scan for its capabilities (the other interfaces it implements). So for example, if your provider class is named FileProvider , then the factory class should look like this: public class FileProviderFactory implements UserStorageProviderFactory<FileProvider> { public String getId() { return "file-provider"; } public FileProvider create(KeycloakSession session, ComponentModel model) { ... } The getId() method returns the name of the User Storage provider. This id will be displayed in the admin console's User Federation page when you want to enable the provider for a specific realm. The create() method is responsible for allocating an instance of the provider class. It takes a org.keycloak.models.KeycloakSession parameter. This object can be used to look up other information and metadata as well as provide access to various other components within the runtime. The ComponentModel parameter represents how the provider was enabled and configured within a specific realm. It contains the instance id of the enabled provider as well as any configuration you may have specified for it when you enabled through the admin console. The UserStorageProviderFactory has other capabilities as well which we will go over later in this chapter. 7.2. Provider capability interfaces If you have examined the UserStorageProvider interface closely you might notice that it does not define any methods for locating or managing users. These methods are actually defined in other capability interfaces depending on what scope of capabilities your external user store can provide and execute on. 
For example, some external stores are read-only and can only do simple queries and credential validation. You will only be required to implement the capability interfaces for the features you are able to. You can implement these interfaces: SPI Description org.keycloak.storage.user.UserLookupProvider This interface is required if you want to be able to log in with users from this external store. Most (all?) providers implement this interface. org.keycloak.storage.user.UserQueryMethodsProvider Defines complex queries that are used to locate one or more users. You must implement this interface if you want to view and manage users from the administration console. org.keycloak.storage.user.UserCountMethodsProvider Implement this interface if your provider supports count queries. org.keycloak.storage.user.UserQueryProvider This interface is combined capability of UserQueryMethodsProvider and UserCountMethodsProvider . org.keycloak.storage.user.UserRegistrationProvider Implement this interface if your provider supports adding and removing users. org.keycloak.storage.user.UserBulkUpdateProvider Implement this interface if your provider supports bulk update of a set of users. org.keycloak.credential.CredentialInputValidator Implement this interface if your provider can validate one or more different credential types (for example, if your provider can validate a password). org.keycloak.credential.CredentialInputUpdater Implement this interface if your provider supports updating one or more different credential types. 7.3. Model interfaces Most of the methods defined in the capability interfaces either return or are passed in representations of a user. These representations are defined by the org.keycloak.models.UserModel interface. App developers are required to implement this interface. It provides a mapping between the external user store and the user metamodel that Red Hat build of Keycloak uses. package org.keycloak.models; public interface UserModel extends RoleMapperModel { String getId(); String getUsername(); void setUsername(String username); String getFirstName(); void setFirstName(String firstName); String getLastName(); void setLastName(String lastName); String getEmail(); void setEmail(String email); ... } UserModel implementations provide access to read and update metadata about the user including things like username, name, email, role and group mappings, as well as other arbitrary attributes. There are other model classes within the org.keycloak.models package that represent other parts of the Red Hat build of Keycloak metamodel: RealmModel , RoleModel , GroupModel , and ClientModel . 7.3.1. Storage Ids One important method of UserModel is the getId() method. When implementing UserModel developers must be aware of the user id format. The format must be: The Red Hat build of Keycloak runtime often has to look up users by their user id. The user id contains enough information so that the runtime does not have to query every single UserStorageProvider in the system to find the user. The component id is the id returned from ComponentModel.getId() . The ComponentModel is passed in as a parameter when creating the provider class so you can get it from there. The external id is information your provider class needs to find the user in the external store. This is often a username or a uid. For example, it might look something like this: When the runtime does a lookup by id, the id is parsed to obtain the component id. 
The component id is used to locate the UserStorageProvider that was originally used to load the user. That provider is then passed the id. The provider again parses the id to obtain the external id, which it will use to locate the user in external user storage. This format has the drawback that it can generate long IDs for the external storage users. This is especially important when combined with the WebAuthn authentication , which limits the user handle ID to 64 bytes. For that reason, if the storage users are going to use WebAuthn authentication, it is important to limit the full storage ID to 64 characters. The validateConfiguration method can be used to assign a short ID to the provider component on creation, leaving some space for the user IDs within the 64-byte limitation. @Override void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel model) throws ComponentValidationException { // ... if (model.getId() == null) { // On creation use short UUID of 22 chars, 40 chars left for the user ID model.setId(KeycloakModelUtils.generateShortId()); } } 7.4. Packaging and deployment In order for Red Hat build of Keycloak to recognize the provider, you need to add a file to the JAR: META-INF/services/org.keycloak.storage.UserStorageProviderFactory . This file must contain a line-separated list of fully qualified classnames of the UserStorageProviderFactory implementations: To deploy this jar, copy it to the providers/ directory, then run bin/kc.[sh|bat] build . 7.5. Simple read-only, lookup example To illustrate the basics of implementing the User Storage SPI, let's walk through a simple example. In this chapter you'll see the implementation of a simple UserStorageProvider that looks up users in a simple property file. The property file contains username and password definitions and is hardcoded to a specific location on the classpath. The provider will be able to look up the user by ID and username and also be able to validate passwords. Users that originate from this provider will be read-only. 7.5.1. Provider class The first thing we will walk through is the UserStorageProvider class. public class PropertyFileUserStorageProvider implements UserStorageProvider, UserLookupProvider, CredentialInputValidator, CredentialInputUpdater { ... } Our provider class, PropertyFileUserStorageProvider , implements many interfaces. It implements the UserStorageProvider as that is a base requirement of the SPI. It implements the UserLookupProvider interface because we want to be able to log in with users stored by this provider. It implements the CredentialInputValidator interface because we want to be able to validate passwords entered on the login screen. Our property file is read-only. We implement the CredentialInputUpdater because we want to post an error condition when the user attempts to update their password. protected KeycloakSession session; protected Properties properties; protected ComponentModel model; // map of loaded users in this transaction protected Map<String, UserModel> loadedUsers = new HashMap<>(); public PropertyFileUserStorageProvider(KeycloakSession session, ComponentModel model, Properties properties) { this.session = session; this.model = model; this.properties = properties; } The constructor for this provider class is going to store the reference to the KeycloakSession , ComponentModel , and property file. We'll use all of these later. Also notice that there is a map of loaded users.
Whenever we find a user we will store it in this map so that we avoid re-creating it again within the same transaction. This is a good practice to follow as many providers will need to do this (that is, any provider that integrates with JPA). Remember also that provider class instances are created once per transaction and are closed after the transaction completes. 7.5.1.1. UserLookupProvider implementation @Override public UserModel getUserByUsername(RealmModel realm, String username) { UserModel adapter = loadedUsers.get(username); if (adapter == null) { String password = properties.getProperty(username); if (password != null) { adapter = createAdapter(realm, username); loadedUsers.put(username, adapter); } } return adapter; } protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } }; } @Override public UserModel getUserById(RealmModel realm, String id) { StorageId storageId = new StorageId(id); String username = storageId.getExternalId(); return getUserByUsername(realm, username); } @Override public UserModel getUserByEmail(RealmModel realm, String email) { return null; } The getUserByUsername() method is invoked by the Red Hat build of Keycloak login page when a user logs in. In our implementation we first check the loadedUsers map to see if the user has already been loaded within this transaction. If it hasn't been loaded we look in the property file for the username. If it exists we create an implementation of UserModel , store it in loadedUsers for future reference, and return this instance. The createAdapter() method uses the helper class org.keycloak.storage.adapter.AbstractUserAdapter . This provides a base implementation for UserModel . It automatically generates a user id based on the required storage id format using the username of the user as the external id. Every get method of AbstractUserAdapter either returns null or empty collections. However, methods that return role and group mappings will return the default roles and groups configured for the realm for every user. Every set method of AbstractUserAdapter will throw a org.keycloak.storage.ReadOnlyException . So if you attempt to modify the user in the Admin Console, you will get an error. The getUserById() method parses the id parameter using the org.keycloak.storage.StorageId helper class. The StorageId.getExternalId() method is invoked to obtain the username embedded in the id parameter. The method then delegates to getUserByUsername() . Emails are not stored, so the getUserByEmail() method returns null. 7.5.1.2. CredentialInputValidator implementation let's look at the method implementations for CredentialInputValidator . 
@Override public boolean isConfiguredFor(RealmModel realm, UserModel user, String credentialType) { String password = properties.getProperty(user.getUsername()); return credentialType.equals(PasswordCredentialModel.TYPE) && password != null; } @Override public boolean supportsCredentialType(String credentialType) { return credentialType.equals(PasswordCredentialModel.TYPE); } @Override public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) { if (!supportsCredentialType(input.getType())) return false; String password = properties.getProperty(user.getUsername()); if (password == null) return false; return password.equals(input.getChallengeResponse()); } The isConfiguredFor() method is called by the runtime to determine if a specific credential type is configured for the user. This method checks to see that the password is set for the user. The supportsCredentialType() method returns whether validation is supported for a specific credential type. We check to see if the credential type is password . The isValid() method is responsible for validating passwords. The CredentialInput parameter is really just an abstract interface for all credential types. We make sure that we support the credential type and also that it is an instance of UserCredentialModel . When a user logs in through the login page, the plain text of the password input is put into an instance of UserCredentialModel . The isValid() method checks this value against the plain text password stored in the properties file. A return value of true means the password is valid. 7.5.1.3. CredentialInputUpdater implementation As noted before, the only reason we implement the CredentialInputUpdater interface in this example is to forbid modifications of user passwords. The reason we have to do this is because otherwise the runtime would allow the password to be overridden in Red Hat build of Keycloak local storage. We'll talk more about this later in this chapter. @Override public boolean updateCredential(RealmModel realm, UserModel user, CredentialInput input) { if (input.getType().equals(PasswordCredentialModel.TYPE)) throw new ReadOnlyException("user is read only for this update"); return false; } @Override public void disableCredentialType(RealmModel realm, UserModel user, String credentialType) { } @Override public Stream<String> getDisableableCredentialTypesStream(RealmModel realm, UserModel user) { return Stream.empty(); } The updateCredential() method just checks to see if the credential type is password. If it is, a ReadOnlyException is thrown. 7.5.2. Provider factory implementation Now that the provider class is complete, we now turn our attention to the provider factory class. public class PropertyFileUserStorageProviderFactory implements UserStorageProviderFactory<PropertyFileUserStorageProvider> { public static final String PROVIDER_NAME = "readonly-property-file"; @Override public String getId() { return PROVIDER_NAME; } First thing to notice is that when implementing the UserStorageProviderFactory class, you must pass in the concrete provider class implementation as a template parameter. Here we specify the provider class we defined before: PropertyFileUserStorageProvider . Warning If you do not specify the template parameter, your provider will not function. The runtime does class introspection to determine the capability interfaces that the provider implements. 
The getId() method identifies the factory in the runtime and will also be the string shown in the admin console when you want to enable a user storage provider for the realm. 7.5.2.1. Initialization private static final Logger logger = Logger.getLogger(PropertyFileUserStorageProviderFactory.class); protected Properties properties = new Properties(); @Override public void init(Config.Scope config) { InputStream is = getClass().getClassLoader().getResourceAsStream("/users.properties"); if (is == null) { logger.warn("Could not find users.properties in classpath"); } else { try { properties.load(is); } catch (IOException ex) { logger.error("Failed to load users.properties file", ex); } } } @Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { return new PropertyFileUserStorageProvider(session, model, properties); } The UserStorageProviderFactory interface has an optional init() method you can implement. When Red Hat build of Keycloak boots up, only one instance of each provider factory is created. Also at boot time, the init() method is called on each of these factory instances. There's also a postInit() method you can implement as well. After each factory's init() method is invoked, their postInit() methods are called. In our init() method implementation, we find the property file containing our user declarations from the classpath. We then load the properties field with the username and password combinations stored there. The Config.Scope parameter is factory configuration that configured through server configuration. For example, by running the server with the following argument: kc.[sh|bat] start --spi-storage-readonly-property-file-path=/other-users.properties We can specify the classpath of the user property file instead of hardcoding it. Then you can retrieve the configuration in the PropertyFileUserStorageProviderFactory.init() : public void init(Config.Scope config) { String path = config.get("path"); InputStream is = getClass().getClassLoader().getResourceAsStream(path); ... } 7.5.2.2. Create method Our last step in creating the provider factory is the create() method. @Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { return new PropertyFileUserStorageProvider(session, model, properties); } We simply allocate the PropertyFileUserStorageProvider class. This create method will be called once per transaction. 7.5.3. Packaging and deployment The class files for our provider implementation should be placed in a jar. You also have to declare the provider factory class within the META-INF/services/org.keycloak.storage.UserStorageProviderFactory file. To deploy this jar, copy it to the providers/ directory, then run bin/kc.[sh|bat] build . 7.5.4. Enabling the provider in the Admin Console You enable user storage providers per realm within the User Federation page in the Admin Console. User Federation Procedure Select the provider we just created from the list: readonly-property-file . The configuration page for our provider displays. Click Save because we have nothing to configure. Configured Provider Return to the main User Federation page You now see your provider listed. User Federation You will now be able to log in with a user declared in the users.properties file. This user will only be able to view the account page after logging in. 7.6. Configuration techniques Our PropertyFileUserStorageProvider example is a bit contrived. 
It is hardcoded to a property file that is embedded in the jar of the provider, which is not terribly useful. We might want to make the location of this file configurable per instance of the provider. In other words, we might want to reuse this provider multiple times in multiple different realms and point to completely different user property files. We'll also want to perform this configuration within the Admin Console UI. The UserStorageProviderFactory has additional methods you can implement that handle provider configuration. You describe the variables you want to configure per provider and the Admin Console automatically renders a generic input page to gather this configuration. When implemented, callback methods also validate the configuration before it is saved, when a provider is created for the first time, and when it is updated. UserStorageProviderFactory inherits these methods from the org.keycloak.component.ComponentFactory interface. List<ProviderConfigProperty> getConfigProperties(); default void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel model) throws ComponentValidationException { } default void onCreate(KeycloakSession session, RealmModel realm, ComponentModel model) { } default void onUpdate(KeycloakSession session, RealmModel realm, ComponentModel model) { } The ComponentFactory.getConfigProperties() method returns a list of org.keycloak.provider.ProviderConfigProperty instances. These instances declare metadata that is needed to render and store each configuration variable of the provider. 7.6.1. Configuration example Let's expand our PropertyFileUserStorageProviderFactory example to allow you to point a provider instance to a specific file on disk. PropertyFileUserStorageProviderFactory public class PropertyFileUserStorageProviderFactory implements UserStorageProviderFactory<PropertyFileUserStorageProvider> { protected static final List<ProviderConfigProperty> configMetadata; static { configMetadata = ProviderConfigurationBuilder.create() .property().name("path") .type(ProviderConfigProperty.STRING_TYPE) .label("Path") .defaultValue("USD{jboss.server.config.dir}/example-users.properties") .helpText("File path to properties file") .add().build(); } @Override public List<ProviderConfigProperty> getConfigProperties() { return configMetadata; } The ProviderConfigurationBuilder class is a great helper class to create a list of configuration properties. Here we specify a variable named path that is a String type. On the Admin Console configuration page for this provider, this configuration variable is labeled as Path and has a default value of USD{jboss.server.config.dir}/example-users.properties . When you hover over the tooltip of this configuration option, it displays the help text, File path to properties file . The thing we want to do is to verify that this file exists on disk. We do not want to enable an instance of this provider in the realm unless it points to a valid user property file. To do this, we implement the validateConfiguration() method. 
@Override public void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel config) throws ComponentValidationException { String fp = config.getConfig().getFirst("path"); if (fp == null) throw new ComponentValidationException("user property file does not exist"); fp = EnvUtil.replace(fp); File file = new File(fp); if (!file.exists()) { throw new ComponentValidationException("user property file does not exist"); } } The validateConfiguration() method reads the path configuration variable from the ComponentModel and verifies that the file exists on disk. Notice the use of the org.keycloak.common.util.EnvUtil.replace() method. With this method, any string that includes USD{} will have that value replaced with a system property value. The USD{jboss.server.config.dir} string corresponds to the conf/ directory of our server and is really useful for this example. The next thing we have to do is remove the old init() method. We do this because user property files are going to be unique per provider instance. We move this logic to the create() method. @Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { String path = model.getConfig().getFirst("path"); Properties props = new Properties(); try { InputStream is = new FileInputStream(path); props.load(is); is.close(); } catch (IOException e) { throw new RuntimeException(e); } return new PropertyFileUserStorageProvider(session, model, props); } This logic is, of course, inefficient as every transaction reads the entire user property file from disk, but hopefully this illustrates, in a simple way, how to hook in configuration variables. 7.6.2. Configuring the provider in the Admin Console Now that the configuration is enabled, you can set the path variable when you configure the provider in the Admin Console. 7.7. Add/Remove user and query capability interfaces One thing we have not done with our example is allow it to add and remove users or change passwords. Users defined in our example are also not queryable or viewable in the Admin Console. To add these enhancements, our example provider must implement the UserQueryMethodsProvider (or UserQueryProvider ) and UserRegistrationProvider interfaces. 7.7.1. Implementing UserRegistrationProvider To implement adding and removing users from the particular store, we first have to be able to save our properties file to disk. PropertyFileUserStorageProvider public void save() { String path = model.getConfig().getFirst("path"); path = EnvUtil.replace(path); try { FileOutputStream fos = new FileOutputStream(path); properties.store(fos, ""); fos.close(); } catch (IOException e) { throw new RuntimeException(e); } } Then, the implementation of the addUser() and removeUser() methods becomes simple. PropertyFileUserStorageProvider public static final String UNSET_PASSWORD="#USD!-UNSET-PASSWORD"; @Override public UserModel addUser(RealmModel realm, String username) { synchronized (properties) { properties.setProperty(username, UNSET_PASSWORD); save(); } return createAdapter(realm, username); } @Override public boolean removeUser(RealmModel realm, UserModel user) { synchronized (properties) { if (properties.remove(user.getUsername()) == null) return false; save(); return true; } } Notice that when adding a user we set the password value of the property map to be UNSET_PASSWORD . We do this because we cannot store a null value for a property in the properties file. We also have to modify the CredentialInputValidator methods to reflect this.
The addUser() method will be called if the provider implements the UserRegistrationProvider interface. If your provider has a configuration switch to turn off adding a user, returning null from this method will skip the provider and call the next one. PropertyFileUserStorageProvider @Override public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) { if (!supportsCredentialType(input.getType()) || !(input instanceof UserCredentialModel)) return false; UserCredentialModel cred = (UserCredentialModel)input; String password = properties.getProperty(user.getUsername()); if (password == null || UNSET_PASSWORD.equals(password)) return false; return password.equals(cred.getValue()); } Since we can now save our property file, it also makes sense to allow password updates. PropertyFileUserStorageProvider @Override public boolean updateCredential(RealmModel realm, UserModel user, CredentialInput input) { if (!(input instanceof UserCredentialModel)) return false; if (!input.getType().equals(PasswordCredentialModel.TYPE)) return false; UserCredentialModel cred = (UserCredentialModel)input; synchronized (properties) { properties.setProperty(user.getUsername(), cred.getValue()); save(); } return true; } We can now also implement disabling a password. PropertyFileUserStorageProvider @Override public void disableCredentialType(RealmModel realm, UserModel user, String credentialType) { if (!credentialType.equals(PasswordCredentialModel.TYPE)) return; synchronized (properties) { properties.setProperty(user.getUsername(), UNSET_PASSWORD); save(); } } private static final Set<String> disableableTypes = new HashSet<>(); static { disableableTypes.add(PasswordCredentialModel.TYPE); } @Override public Stream<String> getDisableableCredentialTypesStream(RealmModel realm, UserModel user) { return disableableTypes.stream(); } With these methods implemented, you'll now be able to change and disable the password for the user in the Admin Console. 7.7.2. Implementing UserQueryProvider UserQueryProvider is a combination of UserQueryMethodsProvider and UserCountMethodsProvider . Without implementing UserQueryMethodsProvider the Admin Console would not be able to view and manage users that were loaded by our example provider. Let's look at implementing this interface. PropertyFileUserStorageProvider @Override public int getUsersCount(RealmModel realm) { return properties.size(); } @Override public Stream<UserModel> searchForUserStream(RealmModel realm, String search, Integer firstResult, Integer maxResults) { Predicate<String> predicate = "*".equals(search) ? username -> true : username -> username.contains(search); return properties.keySet().stream() .map(String.class::cast) .filter(predicate) .skip(firstResult) .map(username -> getUserByUsername(realm, username)) .limit(maxResults); } The first declaration of searchForUserStream() takes a String parameter. In this example, the parameter represents a username that you want to search by. This string can be a substring, which explains the choice of the String.contains() method when doing the search. Notice the use of * to request a list of all users. The method iterates over the key set of the property file, delegating to getUserByUsername() to load a user. Notice that we are indexing this call based on the firstResult and maxResults parameters. If your external store does not support pagination, you will have to do similar logic.
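If your store returns full result sets, the skip/limit arithmetic shown above can be kept in one small helper so that null paging parameters are handled consistently. The following is a minimal sketch only; the helper name and the null-handling defaults are assumptions for illustration and are not part of the SPI.

// Hypothetical helper inside the provider class, not part of the SPI:
// applies firstResult/maxResults in memory when the external store cannot
// page results itself. Null paging values mean "no restriction".
// Assumes java.util.stream.Stream is imported.
private static <T> Stream<T> paginate(Stream<T> stream, Integer firstResult, Integer maxResults) {
    if (firstResult != null && firstResult > 0) {
        stream = stream.skip(firstResult);   // drop everything before the requested page
    }
    if (maxResults != null && maxResults >= 0) {
        stream = stream.limit(maxResults);   // cap the page size
    }
    return stream;
}

A query method could then pass its stream of matching usernames through this helper before mapping them to UserModel instances.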
PropertyFileUserStorageProvider @Override public Stream<UserModel> searchForUserStream(RealmModel realm, Map<String, String> params, Integer firstResult, Integer maxResults) { // only support searching by username String usernameSearchString = params.get("username"); if (usernameSearchString != null) return searchForUserStream(realm, usernameSearchString, firstResult, maxResults); // if we are not searching by username, return all users return searchForUserStream(realm, "*", firstResult, maxResults); } The searchForUserStream() method that takes a Map parameter can search for a user based on first, last name, username, and email. Only usernames are stored, so the search is based only on usernames except when the Map parameter does not contain the username attribute. In this case, all users are returned. In that situation, the searchForUserStream(realm, search, firstResult, maxResults) is used. PropertyFileUserStorageProvider @Override public Stream<UserModel> getGroupMembersStream(RealmModel realm, GroupModel group, Integer firstResult, Integer maxResults) { return Stream.empty(); } @Override public Stream<UserModel> searchForUserByUserAttributeStream(RealmModel realm, String attrName, String attrValue) { return Stream.empty(); } Groups or attributes are not stored, so the other methods return an empty stream. 7.8. Augmenting external storage The PropertyFileUserStorageProvider example is really limited. While we will be able to log in with users stored in a property file, we won't be able to do much else. If users loaded by this provider need special role or group mappings to fully access particular applications there is no way for us to add additional role mappings to these users. You also can't modify or add additional important attributes like email, first and last name. For these types of situations, Red Hat build of Keycloak allows you to augment your external store by storing extra information in Red Hat build of Keycloak's database. This is called federated user storage and is encapsulated within the org.keycloak.storage.federated.UserFederatedStorageProvider class. UserFederatedStorageProvider package org.keycloak.storage.federated; public interface UserFederatedStorageProvider extends Provider, UserAttributeFederatedStorage, UserBrokerLinkFederatedStorage, UserConsentFederatedStorage, UserNotBeforeFederatedStorage, UserGroupMembershipFederatedStorage, UserRequiredActionsFederatedStorage, UserRoleMappingsFederatedStorage, UserFederatedUserCredentialStore { ... } The UserFederatedStorageProvider instance is available on the UserStorageUtil.userFederatedStorage(KeycloakSession) method. It has all different kinds of methods for storing attributes, group and role mappings, different credential types, and required actions. If your external store's datamodel cannot support the full Red Hat build of Keycloak feature set, then this service can fill in the gaps. Red Hat build of Keycloak comes with a helper class org.keycloak.storage.adapter.AbstractUserAdapterFederatedStorage that will delegate every single UserModel method except get/set of username to user federated storage. Override the methods you need to override to delegate to your external storage representations. It is strongly suggested you read the javadoc of this class as it has smaller protected methods you may want to override. Specifically surrounding group membership and role mappings. 7.8.1. 
Augmentation example In our PropertyFileUserStorageProvider example, we just need a simple change to our provider to use the AbstractUserAdapterFederatedStorage . PropertyFileUserStorageProvider protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapterFederatedStorage(session, realm, model) { @Override public String getUsername() { return username; } @Override public void setUsername(String username) { String pw = (String)properties.remove(username); if (pw != null) { properties.put(username, pw); save(); } } }; } We instead define an anonymous class implementation of AbstractUserAdapterFederatedStorage . The setUsername() method makes changes to the properties file and saves it. 7.9. Import implementation strategy When implementing a user storage provider, there's another strategy you can take. Instead of using user federated storage, you can create a user locally in the Red Hat build of Keycloak built-in user database and copy attributes from your external store into this local copy. There are many advantages to this approach. Red Hat build of Keycloak basically becomes a persistence user cache for your external store. Once the user is imported you'll no longer hit the external store thus taking load off of it. If you are moving to Red Hat build of Keycloak as your official user store and deprecating the old external store, you can slowly migrate applications to use Red Hat build of Keycloak. When all applications have been migrated, unlink the imported user, and retire the old legacy external store. There are some obvious disadvantages though to using an import strategy: Looking up a user for the first time will require multiple updates to Red Hat build of Keycloak database. This can be a big performance loss under load and put a lot of strain on the Red Hat build of Keycloak database. The user federated storage approach will only store extra data as needed and may never be used depending on the capabilities of your external store. With the import approach, you have to keep local Red Hat build of Keycloak storage and external storage in sync. The User Storage SPI has capability interfaces that you can implement to support synchronization, but this can quickly become painful and messy. To implement the import strategy you simply check to see first if the user has been imported locally. If so return the local user, if not create the user locally and import data from the external store. You can also proxy the local user so that most changes are automatically synchronized. This will be a bit contrived, but we can extend our PropertyFileUserStorageProvider to take this approach. We begin first by modifying the createAdapter() method. PropertyFileUserStorageProvider protected UserModel createAdapter(RealmModel realm, String username) { UserModel local = UserStoragePrivateUtil.userLocalStorage(session).getUserByUsername(realm, username); if (local == null) { local = UserStoragePrivateUtil.userLocalStorage(session).addUser(realm, username); local.setFederationLink(model.getId()); } return new UserModelDelegate(local) { @Override public void setUsername(String username) { String pw = (String)properties.remove(username); if (pw != null) { properties.put(username, pw); save(); } super.setUsername(username); } }; } In this method we call the UserStoragePrivateUtil.userLocalStorage(session) method to obtain a reference to local Red Hat build of Keycloak user storage. We see if the user is stored locally, if not, we add it locally. 
Do not set the id of the local user. Let Red Hat build of Keycloak automatically generate the id . Also note that we call UserModel.setFederationLink() and pass in the ID of the ComponentModel of our provider. This sets a link between the provider and the imported user. Note When a user storage provider is removed, any user imported by it will also be removed. This is one of the purposes of calling UserModel.setFederationLink() . Another thing to note is that if a local user is linked, your storage provider will still be delegated to for methods that it implements from the CredentialInputValidator and CredentialInputUpdater interfaces. Returning false from a validation or update will just result in Red Hat build of Keycloak seeing if it can validate or update using local storage. Also notice that we are proxying the local user using the org.keycloak.models.utils.UserModelDelegate class. This class is an implementation of UserModel . Every method just delegates to the UserModel it was instantiated with. We override the setUsername() method of this delegate class to synchronize automatically with the property file. For your providers, you can use this to intercept other methods on the local UserModel to perform synchronization with your external store. For example, get methods could make sure that the local store is in sync. Set methods keep the external store in sync with the local one. One thing to note is that the getId() method should always return the id that was auto generated when you created the user locally. You should not return a federated id as shown in the other non-import examples. Note If your provider is implementing the UserRegistrationProvider interface, your removeUser() method does not need to remove the user from local storage. The runtime will automatically perform this operation. Also note that removeUser() will be invoked before it is removed from local storage. 7.9.1. ImportedUserValidation interface If you remember earlier in this chapter, we discussed how querying for a user worked. Local storage is queried first, if the user is found there, then the query ends. This is a problem for our above implementation as we want to proxy the local UserModel so that we can keep usernames in sync. The User Storage SPI has a callback for whenever a linked local user is loaded from the local database. package org.keycloak.storage.user; public interface ImportedUserValidation { /** * If this method returns null, then the user in local storage will be removed * * @param realm * @param user * @return null if user no longer valid */ UserModel validate(RealmModel realm, UserModel user); } Whenever a linked local user is loaded, if the user storage provider class implements this interface, then the validate() method is called. Here you can proxy the local user passed in as a parameter and return it. That new UserModel will be used. You can also optionally do a check to see if the user still exists in the external store. If validate() returns null , then the local user will be removed from the database. 7.9.2. ImportSynchronization interface With the import strategy you can see that it is possible for the local user copy to get out of sync with external storage. For example, maybe a user has been removed from the external store. 
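That per-user staleness check can already be expressed with the validate() callback from the previous section. Below is a minimal, hypothetical sketch for the property-file provider; it assumes the provider class also declares implements ImportedUserValidation and reuses the properties field shown earlier.

// Hypothetical sketch: re-check the external store whenever a previously
// imported (linked) local user is loaded.
@Override
public UserModel validate(RealmModel realm, UserModel local) {
    if (properties.getProperty(local.getUsername()) == null) {
        // The user is gone from the property file: returning null tells the
        // runtime to remove the stale local copy.
        return null;
    }
    // Still present externally: return the local user, optionally proxied as
    // in the createAdapter() example above.
    return local;
}

Validation like this only runs when a user is actually loaded, so it does not help with users that are never looked up again.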
The User Storage SPI has an additional interface you can implement to deal with this, org.keycloak.storage.user.ImportSynchronization : package org.keycloak.storage.user; public interface ImportSynchronization { SynchronizationResult sync(KeycloakSessionFactory sessionFactory, String realmId, UserStorageProviderModel model); SynchronizationResult syncSince(Date lastSync, KeycloakSessionFactory sessionFactory, String realmId, UserStorageProviderModel model); } This interface is implemented by the provider factory. Once this interface is implemented by the provider factory, the administration console management page for the provider shows additional options. You can manually force a synchronization by clicking a button. This invokes the ImportSynchronization.sync() method. Also, additional configuration options are displayed that allow you to automatically schedule a synchronization. Automatic synchronizations invoke the syncSince() method. 7.10. User caches When a user object is loaded by ID, username, or email queries it is cached. When a user object is being cached, it iterates through the entire UserModel interface and pulls this information to a local in-memory-only cache. In a cluster, this cache is still local, but it becomes an invalidation cache. When a user object is modified, it is evicted. This eviction event is propagated to the entire cluster so that the other nodes' user cache is also invalidated. 7.10.1. Managing the user cache You can access the user cache by calling KeycloakSession.getProvider(UserCache.class) . /** * All these methods effect an entire cluster of Keycloak instances. * * @author <a href="mailto:[email protected]">Bill Burke</a> * @version USDRevision: 1 USD */ public interface UserCache extends UserProvider { /** * Evict user from cache. * * @param user */ void evict(RealmModel realm, UserModel user); /** * Evict users of a specific realm * * @param realm */ void evict(RealmModel realm); /** * Clear cache entirely. * */ void clear(); } There are methods for evicting specific users, users contained in a specific realm, or the entire cache. 7.10.2. OnUserCache callback interface You might want to cache additional information that is specific to your provider implementation. The User Storage SPI has a callback whenever a user is cached: org.keycloak.models.cache.OnUserCache . public interface OnUserCache { void onCache(RealmModel realm, CachedUserModel user, UserModel delegate); } Your provider class should implement this interface if it wants this callback. The UserModel delegate parameter is the UserModel instance returned by your provider. The CachedUserModel is an expanded UserModel interface. This is the instance that is cached locally in local storage. public interface CachedUserModel extends UserModel { /** * Invalidates the cache for this user and returns a delegate that represents the actual data provider * * @return */ UserModel getDelegateForUpdate(); boolean isMarkedForEviction(); /** * Invalidate the cache for this model * */ void invalidate(); /** * When was the model was loaded from database. * * @return */ long getCacheTimestamp(); /** * Returns a map that contains custom things that are cached along with this model. You can write to this map. * * @return */ ConcurrentHashMap getCachedWith(); } This CachedUserModel interface allows you to evict the user from the cache and get the provider UserModel instance. The getCachedWith() method returns a map that allows you to cache additional information pertaining to the user. 
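As a small, hypothetical illustration for the property-file provider, the callback below caches the password that was read from the property file alongside the cached user; the cache key name is an arbitrary choice for this example.

// Hypothetical sketch: stash extra provider-specific data on the cached user.
@Override
public void onCache(RealmModel realm, CachedUserModel user, UserModel delegate) {
    String password = properties.getProperty(delegate.getUsername());
    if (password != null) {
        user.getCachedWith().put("example.cached.password", password);
    }
}

A credential validation method could then check whether the user it receives is a CachedUserModel and, if so, read this entry back instead of going to the external store.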
For example, credentials are not part of the UserModel interface. If you want to cache credentials in memory, you implement OnUserCache and cache your user's credentials using the getCachedWith() method, as in the sketch above. 7.10.3. Cache policies On the administration console management page for your user storage provider, you can specify a unique cache policy. 7.11. Leveraging Jakarta EE Since version 20, Keycloak relies only on Quarkus. Unlike WildFly, Quarkus is not an Application Server. Therefore, User Storage Providers cannot be packaged within any Jakarta EE component or made into an EJB, as was the case when Keycloak ran on WildFly in previous versions. Provider implementations are required to be plain Java objects that implement the suitable User Storage SPI interfaces, as explained in the previous sections. They must be packaged and deployed as stated in the Migration Guide. See Migrating custom providers. You can still implement your custom UserStorageProvider class, which is able to integrate an external database by using the JPA Entity Manager, as shown in this example: https://github.com/redhat-developer/rhbk-quickstarts/tree/26.x/extension/user-storage-jpa CDI is not supported. 7.12. REST management API You can create, remove, and update your user storage provider deployments through the administrator REST API. The User Storage SPI is built on top of a generic component interface, so you will be using that generic API to manage your providers. The REST Component API lives under your realm admin resource. We will only show this REST API interaction with the Java client, but you can derive the equivalent curl requests from this API. public interface ComponentsResource { @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam("parent") String parent); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam("parent") String parent, @QueryParam("type") String type); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam("parent") String parent, @QueryParam("type") String type, @QueryParam("name") String name); @POST @Consumes(MediaType.APPLICATION_JSON) Response add(ComponentRepresentation rep); @Path("{id}") ComponentResource component(@PathParam("id") String id); } public interface ComponentResource { @GET public ComponentRepresentation toRepresentation(); @PUT @Consumes(MediaType.APPLICATION_JSON) public void update(ComponentRepresentation rep); @DELETE public void remove(); } To create a user storage provider, you must specify the provider id, a provider type of the string org.keycloak.storage.UserStorageProvider , as well as the configuration. import org.keycloak.admin.client.Keycloak; import org.keycloak.representations.idm.RealmRepresentation; ...
Keycloak keycloak = Keycloak.getInstance( "http://localhost:8080", "master", "admin", "password", "admin-cli"); RealmResource realmResource = keycloak.realm("master"); RealmRepresentation realm = realmResource.toRepresentation(); ComponentRepresentation component = new ComponentRepresentation(); component.setName("home"); component.setProviderId("readonly-property-file"); component.setProviderType("org.keycloak.storage.UserStorageProvider"); component.setParentId(realm.getId()); component.setConfig(new MultivaluedHashMap()); component.getConfig().putSingle("path", "~/users.properties"); realmResource.components().add(component); // retrieve a component List<ComponentRepresentation> components = realmResource.components().query(realm.getId(), "org.keycloak.storage.UserStorageProvider", "home"); component = components.get(0); // Update a component component.getConfig().putSingle("path", "~/my-users.properties"); realmResource.components().component(component.getId()).update(component); // Remove a component realmResource.components().component(component.getId()).remove(); 7.13. Migrating from an earlier user federation SPI Note This chapter is only applicable if you have implemented a provider using the earlier (and now removed) User Federation SPI. In Keycloak version 2.4.0 and earlier there was a User Federation SPI. Red Hat Single Sign-On version 7.0, although unsupported, had this earlier SPI available as well. This earlier User Federation SPI has been removed from Keycloak version 2.5.0 and Red Hat Single Sign-On version 7.1. However, if you have written a provider with this earlier SPI, this chapter discusses some strategies you can use to port it. 7.13.1. Import versus non-import The earlier User Federation SPI required you to create a local copy of a user in the Red Hat build of Keycloak's database and import information from your external store to the local copy. However, this is no longer a requirement. You can still port your earlier provider as-is, but you should consider whether a non-import strategy might be a better approach. Advantages of the import strategy: Red Hat build of Keycloak basically becomes a persistent user cache for your external store. Once the user is imported you'll no longer hit the external store, thus taking load off of it. If you are moving to Red Hat build of Keycloak as your official user store and deprecating the earlier external store, you can slowly migrate applications to use Red Hat build of Keycloak. When all applications have been migrated, unlink the imported user, and retire the earlier legacy external store. There are, however, some obvious disadvantages to using an import strategy: Looking up a user for the first time will require multiple updates to the Red Hat build of Keycloak database. This can be a big performance loss under load and put a lot of strain on the Red Hat build of Keycloak database. The user federated storage approach will only store extra data as needed, and it might never be used depending on the capabilities of your external store. With the import approach, you have to keep local Red Hat build of Keycloak storage and external storage in sync. The User Storage SPI has capability interfaces that you can implement to support synchronization, but this can quickly become painful and messy. 7.13.2. UserFederationProvider versus UserStorageProvider The first thing to notice is that UserFederationProvider was a complete interface. You implemented every method in this interface.
However, UserStorageProvider has instead broken up this interface into multiple capability interfaces that you implement as needed. UserFederationProvider.getUserByUsername() and getUserByEmail() have exact equivalents in the new SPI. The difference between the two is how you import. If you are going to continue with an import strategy, you no longer call KeycloakSession.userStorage().addUser() to create the user locally. Instead you call KeycloakSession.userLocalStorage().addUser() . The userStorage() method no longer exists. The UserFederationProvider.validateAndProxy() method has been moved to an optional capability interface, ImportedUserValidation . You want to implement this interface if you are porting your earlier provider as-is. Also note that in the earlier SPI, this method was called every time the user was accessed, even if the local user is in the cache. In the later SPI, this method is only called when the local user is loaded from local storage. If the local user is cached, then the ImportedUserValidation.validate() method is not called at all. The UserFederationProvider.isValid() method no longer exists in the later SPI. The UserFederationProvider methods synchronizeRegistrations() , registerUser() , and removeUser() have been moved to the UserRegistrationProvider capability interface. This new interface is optional to implement, so if your provider does not support creating and removing users, you don't have to implement it. If your earlier provider had a switch to toggle support for registering new users, this is supported in the new SPI by returning null from UserRegistrationProvider.addUser() if the provider doesn't support adding users. The earlier UserFederationProvider methods centered around credentials are now encapsulated in the CredentialInputValidator and CredentialInputUpdater interfaces, which are also optional to implement depending on whether you support validating or updating credentials. Credential management used to exist in UserModel methods. These also have been moved to the CredentialInputValidator and CredentialInputUpdater interfaces. One thing to note is that if you do not implement the CredentialInputUpdater interface, then any credentials provided by your provider can be overridden locally in Red Hat build of Keycloak storage. So if you want your credentials to be read-only, implement the CredentialInputUpdater.updateCredential() method and throw a ReadOnlyException . The UserFederationProvider query methods such as searchByAttributes() and getGroupMembers() are now encapsulated in an optional interface UserQueryProvider . If you do not implement this interface, then users will not be viewable in the admin console. You'll still be able to log in though. 7.13.3. UserFederationProviderFactory versus UserStorageProviderFactory The synchronization methods in the earlier SPI are now encapsulated within an optional ImportSynchronization interface. If you have implemented synchronization logic, then have your new UserStorageProviderFactory implement the ImportSynchronization interface. 7.13.4. Upgrading to a new model The User Storage SPI instances are stored in a different set of relational tables. Red Hat build of Keycloak automatically runs a migration script. If any earlier User Federation providers are deployed for a realm, they are converted to the later storage model as is, including the id of the data.
This migration will only happen if a User Storage provider exists with the same provider ID (i.e., "ldap", "kerberos") as the earlier User Federation provider. So, knowing this there are different approaches you can take. You can remove the earlier provider in your earlier Red Hat build of Keycloak deployment. This will remove the local linked copies of all users you imported. Then, when you upgrade Red Hat build of Keycloak, just deploy and configure your new provider for your realm. The second option is to write your new provider making sure it has the same provider ID: UserStorageProviderFactory.getId() . Make sure this provider is deployed to the server. Boot the server, and have the built-in migration script convert from the earlier data model to the later data model. In this case all your earlier linked imported users will work and be the same. If you have decided to get rid of the import strategy and rewrite your User Storage provider, we suggest that you remove the earlier provider before upgrading Red Hat build of Keycloak. This will remove linked local imported copies of any user you imported. 7.14. Stream-based interfaces Many of the user storage interfaces in Red Hat build of Keycloak contain query methods that can return potentially large sets of objects, which might lead to significant impacts in terms of memory consumption and processing time. This is especially true when only a small subset of the objects' internal state is used in the query method's logic. To provide developers with a more efficient alternative to process large data sets in these query methods, a Streams sub-interface has been added to user storage interfaces. These Streams sub-interfaces replace the original collection-based methods in the super-interfaces with stream-based variants, making the collection-based methods default. The default implementation of a collection-based query method invokes its Stream counterpart and collects the result into the proper collection type. The Streams sub-interfaces allow for implementations to focus on the stream-based approach for processing sets of data and benefit from the potential memory and performance optimizations of that approach. The interfaces that offer a Streams sub-interface to be implemented include a few capability interfaces , all interfaces in the org.keycloak.storage.federated package and a few others that might be implemented depending on the scope of the custom storage implementation. See this list of the interfaces that offer a Streams sub-interface to developers. Package Classes org.keycloak.credential CredentialInputUpdater (*) org.keycloak.models GroupModel , RoleMapperModel , UserModel org.keycloak.storage.federated All interfaces org.keycloak.storage.user UserQueryProvider (*) (*) indicates the interface is a capability interface Custom user storage implementation that want to benefit from the streams approach should simply implement the Streams sub-interfaces instead of the original interfaces. For example, the following code uses the Streams variant of the UserQueryProvider interface: public class CustomQueryProvider extends UserQueryProvider.Streams { ... @Override Stream<UserModel> getUsersStream(RealmModel realm, Integer firstResult, Integer maxResults) { // custom logic here } @Override Stream<UserModel> searchForUserStream(String search, RealmModel realm) { // custom logic here } ... }
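On the consuming side, the stream returned by such a method can then be processed lazily instead of being collected into an intermediate list. A small sketch against the CustomQueryProvider above, using java.util.stream.Collectors; the customQueryProvider and realm instances are assumed to be available from the surrounding code:

// Take only the first ten usernames that match the search term;
// entries beyond the limit are never materialized.
List<String> firstTen = customQueryProvider
        .searchForUserStream("smith", realm)
        .map(UserModel::getUsername)
        .limit(10)
        .collect(Collectors.toList());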
[ "package org.keycloak.storage; public interface UserStorageProvider extends Provider { /** * Callback when a realm is removed. Implement this if, for example, you want to do some * cleanup in your user storage when a realm is removed * * @param realm */ default void preRemove(RealmModel realm) { } /** * Callback when a group is removed. Allows you to do things like remove a user * group mapping in your external store if appropriate * * @param realm * @param group */ default void preRemove(RealmModel realm, GroupModel group) { } /** * Callback when a role is removed. Allows you to do things like remove a user * role mapping in your external store if appropriate * @param realm * @param role */ default void preRemove(RealmModel realm, RoleModel role) { } }", "package org.keycloak.storage; /** * @author <a href=\"mailto:[email protected]\">Bill Burke</a> * @version USDRevision: 1 USD */ public interface UserStorageProviderFactory<T extends UserStorageProvider> extends ComponentFactory<T, UserStorageProvider> { /** * This is the name of the provider and will be shown in the admin console as an option. * * @return */ @Override String getId(); /** * called per Keycloak transaction. * * @param session * @param model * @return */ T create(KeycloakSession session, ComponentModel model); }", "public class FileProviderFactory implements UserStorageProviderFactory<FileProvider> { public String getId() { return \"file-provider\"; } public FileProvider create(KeycloakSession session, ComponentModel model) { }", "package org.keycloak.models; public interface UserModel extends RoleMapperModel { String getId(); String getUsername(); void setUsername(String username); String getFirstName(); void setFirstName(String firstName); String getLastName(); void setLastName(String lastName); String getEmail(); void setEmail(String email); }", "\"f:\" + component id + \":\" + external id", "f:332a234e31234:wburke", "@Override void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel model) throws ComponentValidationException { // if (model.getId() == null) { // On creation use short UUID of 22 chars, 40 chars left for the user ID model.setId(KeycloakModelUtils.generateShortId()); } }", "org.keycloak.examples.federation.properties.ClasspathPropertiesStorageFactory org.keycloak.examples.federation.properties.FilePropertiesStorageFactory", "public class PropertyFileUserStorageProvider implements UserStorageProvider, UserLookupProvider, CredentialInputValidator, CredentialInputUpdater { }", "protected KeycloakSession session; protected Properties properties; protected ComponentModel model; // map of loaded users in this transaction protected Map<String, UserModel> loadedUsers = new HashMap<>(); public PropertyFileUserStorageProvider(KeycloakSession session, ComponentModel model, Properties properties) { this.session = session; this.model = model; this.properties = properties; }", "@Override public UserModel getUserByUsername(RealmModel realm, String username) { UserModel adapter = loadedUsers.get(username); if (adapter == null) { String password = properties.getProperty(username); if (password != null) { adapter = createAdapter(realm, username); loadedUsers.put(username, adapter); } } return adapter; } protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } }; } @Override public UserModel getUserById(RealmModel realm, String id) { StorageId storageId = new 
StorageId(id); String username = storageId.getExternalId(); return getUserByUsername(realm, username); } @Override public UserModel getUserByEmail(RealmModel realm, String email) { return null; }", "\"f:\" + component id + \":\" + username", "@Override public boolean isConfiguredFor(RealmModel realm, UserModel user, String credentialType) { String password = properties.getProperty(user.getUsername()); return credentialType.equals(PasswordCredentialModel.TYPE) && password != null; } @Override public boolean supportsCredentialType(String credentialType) { return credentialType.equals(PasswordCredentialModel.TYPE); } @Override public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) { if (!supportsCredentialType(input.getType())) return false; String password = properties.getProperty(user.getUsername()); if (password == null) return false; return password.equals(input.getChallengeResponse()); }", "@Override public boolean updateCredential(RealmModel realm, UserModel user, CredentialInput input) { if (input.getType().equals(PasswordCredentialModel.TYPE)) throw new ReadOnlyException(\"user is read only for this update\"); return false; } @Override public void disableCredentialType(RealmModel realm, UserModel user, String credentialType) { } @Override public Stream<String> getDisableableCredentialTypesStream(RealmModel realm, UserModel user) { return Stream.empty(); }", "public class PropertyFileUserStorageProviderFactory implements UserStorageProviderFactory<PropertyFileUserStorageProvider> { public static final String PROVIDER_NAME = \"readonly-property-file\"; @Override public String getId() { return PROVIDER_NAME; }", "private static final Logger logger = Logger.getLogger(PropertyFileUserStorageProviderFactory.class); protected Properties properties = new Properties(); @Override public void init(Config.Scope config) { InputStream is = getClass().getClassLoader().getResourceAsStream(\"/users.properties\"); if (is == null) { logger.warn(\"Could not find users.properties in classpath\"); } else { try { properties.load(is); } catch (IOException ex) { logger.error(\"Failed to load users.properties file\", ex); } } } @Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { return new PropertyFileUserStorageProvider(session, model, properties); }", "kc.[sh|bat] start --spi-storage-readonly-property-file-path=/other-users.properties", "public void init(Config.Scope config) { String path = config.get(\"path\"); InputStream is = getClass().getClassLoader().getResourceAsStream(path); }", "@Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { return new PropertyFileUserStorageProvider(session, model, properties); }", "org.keycloak.examples.federation.properties.FilePropertiesStorageFactory", "List<ProviderConfigProperty> getConfigProperties(); default void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel model) throws ComponentValidationException { } default void onCreate(KeycloakSession session, RealmModel realm, ComponentModel model) { } default void onUpdate(KeycloakSession session, RealmModel realm, ComponentModel model) { }", "public class PropertyFileUserStorageProviderFactory implements UserStorageProviderFactory<PropertyFileUserStorageProvider> { protected static final List<ProviderConfigProperty> configMetadata; static { configMetadata = ProviderConfigurationBuilder.create() .property().name(\"path\") .type(ProviderConfigProperty.STRING_TYPE) 
.label(\"Path\") .defaultValue(\"USD{jboss.server.config.dir}/example-users.properties\") .helpText(\"File path to properties file\") .add().build(); } @Override public List<ProviderConfigProperty> getConfigProperties() { return configMetadata; }", "@Override public void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel config) throws ComponentValidationException { String fp = config.getConfig().getFirst(\"path\"); if (fp == null) throw new ComponentValidationException(\"user property file does not exist\"); fp = EnvUtil.replace(fp); File file = new File(fp); if (!file.exists()) { throw new ComponentValidationException(\"user property file does not exist\"); } }", "@Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { String path = model.getConfig().getFirst(\"path\"); Properties props = new Properties(); try { InputStream is = new FileInputStream(path); props.load(is); is.close(); } catch (IOException e) { throw new RuntimeException(e); } return new PropertyFileUserStorageProvider(session, model, props); }", "public void save() { String path = model.getConfig().getFirst(\"path\"); path = EnvUtil.replace(path); try { FileOutputStream fos = new FileOutputStream(path); properties.store(fos, \"\"); fos.close(); } catch (IOException e) { throw new RuntimeException(e); } }", "public static final String UNSET_PASSWORD=\"#USD!-UNSET-PASSWORD\"; @Override public UserModel addUser(RealmModel realm, String username) { synchronized (properties) { properties.setProperty(username, UNSET_PASSWORD); save(); } return createAdapter(realm, username); } @Override public boolean removeUser(RealmModel realm, UserModel user) { synchronized (properties) { if (properties.remove(user.getUsername()) == null) return false; save(); return true; } }", "@Override public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) { if (!supportsCredentialType(input.getType()) || !(input instanceof UserCredentialModel)) return false; UserCredentialModel cred = (UserCredentialModel)input; String password = properties.getProperty(user.getUsername()); if (password == null || UNSET_PASSWORD.equals(password)) return false; return password.equals(cred.getValue()); }", "@Override public boolean updateCredential(RealmModel realm, UserModel user, CredentialInput input) { if (!(input instanceof UserCredentialModel)) return false; if (!input.getType().equals(PasswordCredentialModel.TYPE)) return false; UserCredentialModel cred = (UserCredentialModel)input; synchronized (properties) { properties.setProperty(user.getUsername(), cred.getValue()); save(); } return true; }", "@Override public void disableCredentialType(RealmModel realm, UserModel user, String credentialType) { if (!credentialType.equals(PasswordCredentialModel.TYPE)) return; synchronized (properties) { properties.setProperty(user.getUsername(), UNSET_PASSWORD); save(); } } private static final Set<String> disableableTypes = new HashSet<>(); static { disableableTypes.add(PasswordCredentialModel.TYPE); } @Override public Stream<String> getDisableableCredentialTypes(RealmModel realm, UserModel user) { return disableableTypes.stream(); }", "@Override public int getUsersCount(RealmModel realm) { return properties.size(); } @Override public Stream<UserModel> searchForUserStream(RealmModel realm, String search, Integer firstResult, Integer maxResults) { Predicate<String> predicate = \"*\".equals(search) ? 
username -> true : username -> username.contains(search); return properties.keySet().stream() .map(String.class::cast) .filter(predicate) .skip(firstResult) .map(username -> getUserByUsername(realm, username)) .limit(maxResults); }", "@Override public Stream<UserModel> searchForUserStream(RealmModel realm, Map<String, String> params, Integer firstResult, Integer maxResults) { // only support searching by username String usernameSearchString = params.get(\"username\"); if (usernameSearchString != null) return searchForUserStream(realm, usernameSearchString, firstResult, maxResults); // if we are not searching by username, return all users return searchForUserStream(realm, \"*\", firstResult, maxResults); }", "@Override public Stream<UserModel> getGroupMembersStream(RealmModel realm, GroupModel group, Integer firstResult, Integer maxResults) { return Stream.empty(); } @Override public Stream<UserModel> searchForUserByUserAttributeStream(RealmModel realm, String attrName, String attrValue) { return Stream.empty(); }", "package org.keycloak.storage.federated; public interface UserFederatedStorageProvider extends Provider, UserAttributeFederatedStorage, UserBrokerLinkFederatedStorage, UserConsentFederatedStorage, UserNotBeforeFederatedStorage, UserGroupMembershipFederatedStorage, UserRequiredActionsFederatedStorage, UserRoleMappingsFederatedStorage, UserFederatedUserCredentialStore { }", "protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapterFederatedStorage(session, realm, model) { @Override public String getUsername() { return username; } @Override public void setUsername(String username) { String pw = (String)properties.remove(username); if (pw != null) { properties.put(username, pw); save(); } } }; }", "protected UserModel createAdapter(RealmModel realm, String username) { UserModel local = UserStoragePrivateUtil.userLocalStorage(session).getUserByUsername(realm, username); if (local == null) { local = UserStoragePrivateUtil.userLocalStorage(session).addUser(realm, username); local.setFederationLink(model.getId()); } return new UserModelDelegate(local) { @Override public void setUsername(String username) { String pw = (String)properties.remove(username); if (pw != null) { properties.put(username, pw); save(); } super.setUsername(username); } }; }", "package org.keycloak.storage.user; public interface ImportedUserValidation { /** * If this method returns null, then the user in local storage will be removed * * @param realm * @param user * @return null if user no longer valid */ UserModel validate(RealmModel realm, UserModel user); }", "package org.keycloak.storage.user; public interface ImportSynchronization { SynchronizationResult sync(KeycloakSessionFactory sessionFactory, String realmId, UserStorageProviderModel model); SynchronizationResult syncSince(Date lastSync, KeycloakSessionFactory sessionFactory, String realmId, UserStorageProviderModel model); }", "/** * All these methods effect an entire cluster of Keycloak instances. * * @author <a href=\"mailto:[email protected]\">Bill Burke</a> * @version USDRevision: 1 USD */ public interface UserCache extends UserProvider { /** * Evict user from cache. * * @param user */ void evict(RealmModel realm, UserModel user); /** * Evict users of a specific realm * * @param realm */ void evict(RealmModel realm); /** * Clear cache entirely. 
* */ void clear(); }", "public interface OnUserCache { void onCache(RealmModel realm, CachedUserModel user, UserModel delegate); }", "public interface CachedUserModel extends UserModel { /** * Invalidates the cache for this user and returns a delegate that represents the actual data provider * * @return */ UserModel getDelegateForUpdate(); boolean isMarkedForEviction(); /** * Invalidate the cache for this model * */ void invalidate(); /** * When was the model was loaded from database. * * @return */ long getCacheTimestamp(); /** * Returns a map that contains custom things that are cached along with this model. You can write to this map. * * @return */ ConcurrentHashMap getCachedWith(); }", "/admin/realms/{realm-name}/components", "public interface ComponentsResource { @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam(\"parent\") String parent); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam(\"parent\") String parent, @QueryParam(\"type\") String type); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam(\"parent\") String parent, @QueryParam(\"type\") String type, @QueryParam(\"name\") String name); @POST @Consumes(MediaType.APPLICATION_JSON) Response add(ComponentRepresentation rep); @Path(\"{id}\") ComponentResource component(@PathParam(\"id\") String id); } public interface ComponentResource { @GET public ComponentRepresentation toRepresentation(); @PUT @Consumes(MediaType.APPLICATION_JSON) public void update(ComponentRepresentation rep); @DELETE public void remove(); }", "import org.keycloak.admin.client.Keycloak; import org.keycloak.representations.idm.RealmRepresentation; Keycloak keycloak = Keycloak.getInstance( \"http://localhost:8080\", \"master\", \"admin\", \"password\", \"admin-cli\"); RealmResource realmResource = keycloak.realm(\"master\"); RealmRepresentation realm = realmResource.toRepresentation(); ComponentRepresentation component = new ComponentRepresentation(); component.setName(\"home\"); component.setProviderId(\"readonly-property-file\"); component.setProviderType(\"org.keycloak.storage.UserStorageProvider\"); component.setParentId(realm.getId()); component.setConfig(new MultivaluedHashMap()); component.getConfig().putSingle(\"path\", \"~/users.properties\"); realmResource.components().add(component); // retrieve a component List<ComponentRepresentation> components = realmResource.components().query(realm.getId(), \"org.keycloak.storage.UserStorageProvider\", \"home\"); component = components.get(0); // Update a component component.getConfig().putSingle(\"path\", \"~/my-users.properties\"); realmResource.components().component(component.getId()).update(component); // Remove a component realmREsource.components().component(component.getId()).remove();", "public class CustomQueryProvider extends UserQueryProvider.Streams { @Override Stream<UserModel> getUsersStream(RealmModel realm, Integer firstResult, Integer maxResults) { // custom logic here } @Override Stream<UserModel> searchForUserStream(String search, RealmModel realm) { // custom logic here } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_developer_guide/user-storage-spi
Chapter 21. Encrypting block devices using LUKS
Chapter 21. Encrypting block devices using LUKS By using the disk encryption, you can protect the data on a block device by encrypting it. To access the device's decrypted contents, enter a passphrase or key as authentication. This is important for mobile computers and removable media because it helps to protect the device's contents even if it has been physically removed from the system. The LUKS format is a default implementation of block device encryption in Red Hat Enterprise Linux. 21.1. LUKS disk encryption Linux Unified Key Setup-on-disk-format (LUKS) provides a set of tools that simplifies managing the encrypted devices. With LUKS, you can encrypt block devices and enable multiple user keys to decrypt a master key. For bulk encryption of the partition, use this master key. Red Hat Enterprise Linux uses LUKS to perform block device encryption. By default, the option to encrypt the block device is unchecked during the installation. If you select the option to encrypt your disk, the system prompts you for a passphrase every time you boot the computer. This passphrase unlocks the bulk encryption key that decrypts your partition. If you want to modify the default partition table, you can select the partitions that you want to encrypt. This is set in the partition table settings. Ciphers The default cipher used for LUKS is aes-xts-plain64 . The default key size for LUKS is 512 bits. The default key size for LUKS with Anaconda XTS mode is 512 bits. The following are the available ciphers: Advanced Encryption Standard (AES) Twofish Serpent Operations performed by LUKS LUKS encrypts entire block devices and is therefore well-suited for protecting contents of mobile devices such as removable storage media or laptop disk drives. The underlying contents of the encrypted block device are arbitrary, which makes it useful for encrypting swap devices. This can also be useful with certain databases that use specially formatted block devices for data storage. LUKS uses the existing device mapper kernel subsystem. LUKS provides passphrase strengthening, which protects against dictionary attacks. LUKS devices contain multiple key slots, which means you can add backup keys or passphrases. Important LUKS is not recommended for the following scenarios: Disk-encryption solutions such as LUKS protect the data only when your system is off. After the system is on and LUKS has decrypted the disk, the files on that disk are available to anyone who have access to them. Scenarios that require multiple users to have distinct access keys to the same device. The LUKS1 format provides eight key slots and LUKS2 provides up to 32 key slots. Applications that require file-level encryption. Additional resources LUKS Project Home Page LUKS On-Disk Format Specification FIPS 197: Advanced Encryption Standard (AES) 21.2. LUKS versions in RHEL In Red Hat Enterprise Linux, the default format for LUKS encryption is LUKS2. The old LUKS1 format remains fully supported and it is provided as a format compatible with earlier Red Hat Enterprise Linux releases. LUKS2 re-encryption is considered more robust and safe to use as compared to LUKS1 re-encryption. The LUKS2 format enables future updates of various parts without a need to modify binary structures. Internally it uses JSON text format for metadata, provides redundancy of metadata, detects metadata corruption, and automatically repairs from a metadata copy. 
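If you are unsure which LUKS version an existing device uses, you can read it from the device header before you choose a command from the following table. For example, assuming the device is /dev/sdb :

# The Version field reports 1 for LUKS1 and 2 for LUKS2
cryptsetup luksDump /dev/sdb | grep -i '^Version'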
Important Do not use LUKS2 in systems that support only LUKS1 because LUKS2 and LUKS1 use different commands to encrypt the disk. Using the wrong command for a LUKS version might cause data loss. Table 21.1. Encryption commands depending on the LUKS version LUKS version Encryption command LUKS2 cryptsetup reencrypt LUKS1 cryptsetup-reencrypt Online re-encryption The LUKS2 format supports re-encrypting encrypted devices while the devices are in use. For example, you do not have to unmount the file system on the device to perform the following tasks: Changing the volume key Changing the encryption algorithm When encrypting a non-encrypted device, you must still unmount the file system. You can remount the file system after a short initialization of the encryption. The LUKS1 format does not support online re-encryption. Conversion In certain situations, you can convert LUKS1 to LUKS2. The conversion is not possible specifically in the following scenarios: A LUKS1 device is marked as being used by a Policy-Based Decryption (PBD) Clevis solution. The cryptsetup tool does not convert the device when some luksmeta metadata are detected. A device is active. The device must be in an inactive state before any conversion is possible. 21.3. Options for data protection during LUKS2 re-encryption LUKS2 provides several options that prioritize performance or data protection during the re-encryption process. It provides the following modes for the resilience option, and you can select any of these modes by using the cryptsetup reencrypt --resilience resilience-mode /dev/sdx command: checksum The default mode. It balances data protection and performance. This mode stores individual checksums of the sectors in the re-encryption area, which the recovery process can detect for the sectors that were re-encrypted by LUKS2. The mode requires that the block device sector write is atomic. journal The safest mode but also the slowest. Since this mode journals the re-encryption area in the binary area, the LUKS2 writes the data twice. none The none mode prioritizes performance and provides no data protection. It protects the data only against safe process termination, such as the SIGTERM signal or the user pressing Ctrl + C key. Any unexpected system failure or application failure might result in data corruption. If a LUKS2 re-encryption process terminates unexpectedly by force, LUKS2 can perform the recovery in one of the following ways: Automatically By performing any one of the following actions triggers the automatic recovery action during the LUKS2 device open action: Executing the cryptsetup open command. Attaching the device with the systemd-cryptsetup command. Manually By using the cryptsetup repair /dev/sdx command on the LUKS2 device. Additional resources cryptsetup-reencrypt(8) and cryptsetup-repair(8) man pages on your system 21.4. Encrypting existing data on a block device using LUKS2 You can encrypt the existing data on a not yet encrypted device by using the LUKS2 format. A new LUKS header is stored in the head of the device. Prerequisites The block device has a file system. You have backed up your data. Warning You might lose your data during the encryption process due to a hardware, kernel, or human failure. Ensure that you have a reliable backup before you start encrypting the data. Procedure Unmount all file systems on the device that you plan to encrypt, for example: Make free space for storing a LUKS header. 
Use one of the following options that suits your scenario: In the case of encrypting a logical volume, you can extend the logical volume without resizing the file system. For example: Extend the partition by using partition management tools, such as parted . Shrink the file system on the device. You can use the resize2fs utility for the ext2, ext3, or ext4 file systems. Note that you cannot shrink the XFS file system. Initialize the encryption: Mount the device: Add an entry for a persistent mapping to the /etc/crypttab file: Find the luksUUID : Open /etc/crypttab in a text editor of your choice and add a device in this file: Replace a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325 with your device's luksUUID . Refresh initramfs with dracut : Add an entry for a persistent mounting to the /etc/fstab file: Find the file system's UUID of the active LUKS block device: Open /etc/fstab in a text editor of your choice and add a device in this file, for example: Replace 37bc2492-d8fa-4969-9d9b-bb64d3685aa9 with your file system's UUID. Resume the online encryption: Verification Verify if the existing data was encrypted: View the status of the encrypted blank block device: Additional resources cryptsetup(8) , cryptsetup-reencrypt(8) , lvextend(8) , resize2fs(8) , and parted(8) man pages on your system 21.5. Encrypting existing data on a block device using LUKS2 with a detached header You can encrypt existing data on a block device without creating free space for storing a LUKS header. The header is stored in a detached location, which also serves as an additional layer of security. The procedure uses the LUKS2 encryption format. Prerequisites The block device has a file system. You have backed up your data. Warning You might lose your data during the encryption process due to a hardware, kernel, or human failure. Ensure that you have a reliable backup before you start encrypting the data. Procedure Unmount all file systems on the device, for example: Initialize the encryption: Replace /home/header with a path to the file with a detached LUKS header. The detached LUKS header has to be accessible to unlock the encrypted device later. Mount the device: Resume the online encryption: Verification Verify if the existing data on a block device using LUKS2 with a detached header is encrypted: View the status of the encrypted blank block device: Additional resources cryptsetup(8) and cryptsetup-reencrypt(8) man pages on your system 21.6. Encrypting a blank block device using LUKS2 You can encrypt a blank block device, which you can use for an encrypted storage by using the LUKS2 format. Prerequisites A blank block device. You can use commands such as lsblk to find if there is no real data on that device, for example, a file system. Procedure Setup a partition as an encrypted LUKS partition: Open an encrypted LUKS partition: This unlocks the partition and maps it to a new device by using the device mapper. To not overwrite the encrypted data, this command alerts the kernel that the device is an encrypted device and addressed through LUKS by using the /dev/mapper/ device_mapped_name path. Create a file system to write encrypted data to the partition, which must be accessed through the device mapped name: Mount the device: Verification Verify if the blank block device is encrypted: View the status of the encrypted blank block device: Additional resources cryptsetup(8) , cryptsetup-open (8) , and cryptsetup-lusFormat(8) man pages on your system 21.7. 
Configuring the LUKS passphrase in the web console If you want to add encryption to an existing logical volume on your system, you can only do so through formatting the volume. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-storaged package is installed on your system. Available existing logical volume without encryption. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the panel, click Storage . In the Storage table, click the menu button ... for the storage device you want to encrypt and click Format . In the Encryption field , select the encryption specification, LUKS1 or LUKS2 . Set and confirm your new passphrase. Optional: Modify further encryption options. Finalize formatting settings. Click Format . 21.8. Changing the LUKS passphrase in the web console Change a LUKS passphrase on an encrypted disk or partition in the web console. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The cockpit-storaged package is installed on your system. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the panel, click Storage . In the Storage table, select the disk with encrypted data. On the disk page, scroll to the Keys section and click the edit button. In the Change passphrase dialog window: Enter your current passphrase. Enter your new passphrase. Confirm your new passphrase. Click Save . 21.9. Creating a LUKS2 encrypted volume by using the storage RHEL system role You can use the storage role to create and configure a volume encrypted with LUKS by running an Ansible playbook. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: luks_password: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Manage local storage hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create and configure a volume encrypted with LUKS ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs fs_label: <label> mount_point: /mnt/data encryption: true encryption_password: "{{ luks_password }}" For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. 
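For reference, the validation step corresponds to a command of the following shape, using the playbook path from the example above; the run step that follows is the same command without the --syntax-check option:

# Check the playbook syntax only; nothing is changed on the managed node
ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml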
Run the playbook: Verification Find the luksUUID value of the LUKS encrypted volume: View the encryption status of the volume: Verify the created LUKS encrypted volume: Additional resources /usr/share/ansible/roles/rhel-system-roles.storage/README.md file /usr/share/doc/rhel-system-roles/storage/ directory Encrypting block devices by using LUKS Ansible vault
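The verification steps above can be run from the control node as ad-hoc commands, for example; the node name and device follow the playbook example, and <UUID> stands for the value reported by the first command:

# Find the luksUUID of the encrypted volume on the managed node
ansible managed-node-01.example.com -m command -a 'cryptsetup luksUUID /dev/sdb'
# View the encryption status of the mapped volume
ansible managed-node-01.example.com -m command -a 'cryptsetup status luks-<UUID>'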
[ "umount /dev/mapper/vg00-lv00", "lvextend -L+ 32M /dev/mapper/vg00-lv00", "cryptsetup reencrypt --encrypt --init-only --reduce-device-size 32M /dev/mapper/ vg00-lv00 lv00_encrypted /dev/mapper/ lv00_encrypted is now active and ready for online encryption.", "mount /dev/mapper/ lv00_encrypted /mnt/lv00_encrypted", "cryptsetup luksUUID /dev/mapper/ vg00-lv00 a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325", "vi /etc/crypttab lv00_encrypted UUID= a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325 none", "dracut -f --regenerate-all", "blkid -p /dev/mapper/ lv00_encrypted /dev/mapper/ lv00-encrypted : UUID=\" 37bc2492-d8fa-4969-9d9b-bb64d3685aa9 \" BLOCK_SIZE=\"4096\" TYPE=\"xfs\" USAGE=\"filesystem\"", "vi /etc/fstab UUID= 37bc2492-d8fa-4969-9d9b-bb64d3685aa9 /home auto rw,user,auto 0", "cryptsetup reencrypt --resume-only /dev/mapper/ vg00-lv00 Enter passphrase for /dev/mapper/ vg00-lv00 : Auto-detected active dm device ' lv00_encrypted ' for data device /dev/mapper/ vg00-lv00 . Finished, time 00:31.130, 10272 MiB written, speed 330.0 MiB/s", "cryptsetup luksDump /dev/mapper/ vg00-lv00 LUKS header information Version: 2 Epoch: 4 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325 Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 33554432 [bytes] length: (whole device) cipher: aes-xts-plain64 [...]", "cryptsetup status lv00_encrypted /dev/mapper/ lv00_encrypted is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/mapper/ vg00-lv00", "umount /dev/ nvme0n1p1", "cryptsetup reencrypt --encrypt --init-only --header /home/header /dev/ nvme0n1p1 nvme_encrypted WARNING! ======== Header file does not exist, do you want to create it? Are you sure? (Type 'yes' in capital letters): YES Enter passphrase for /home/header : Verify passphrase: /dev/mapper/ nvme_encrypted is now active and ready for online encryption.", "mount /dev/mapper/ nvme_encrypted /mnt/nvme_encrypted", "cryptsetup reencrypt --resume-only --header /home/header /dev/ nvme0n1p1 Enter passphrase for /dev/ nvme0n1p1 : Auto-detected active dm device 'nvme_encrypted' for data device /dev/ nvme0n1p1 . Finished, time 00m51s, 10 GiB written, speed 198.2 MiB/s", "cryptsetup luksDump /home/header LUKS header information Version: 2 Epoch: 88 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: c4f5d274-f4c0-41e3-ac36-22a917ab0386 Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 0 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes] [...]", "cryptsetup status nvme_encrypted /dev/mapper/ nvme_encrypted is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/ nvme0n1p1", "cryptsetup luksFormat /dev/ nvme0n1p1 WARNING! ======== This will overwrite data on /dev/nvme0n1p1 irrevocably. Are you sure? 
(Type 'yes' in capital letters): YES Enter passphrase for /dev/ nvme0n1p1 : Verify passphrase:", "cryptsetup open /dev/ nvme0n1p1 nvme0n1p1_encrypted Enter passphrase for /dev/ nvme0n1p1 :", "mkfs -t ext4 /dev/mapper/ nvme0n1p1_encrypted", "mount /dev/mapper/ nvme0n1p1_encrypted mount-point", "cryptsetup luksDump /dev/ nvme0n1p1 LUKS header information Version: 2 Epoch: 3 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: 34ce4870-ffdf-467c-9a9e-345a53ed8a25 Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 16777216 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes] [...]", "cryptsetup status nvme0n1p1_encrypted /dev/mapper/ nvme0n1p1_encrypted is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/ nvme0n1p1 sector size: 512 offset: 32768 sectors size: 20938752 sectors mode: read/write", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "luks_password: <password>", "--- - name: Manage local storage hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create and configure a volume encrypted with LUKS ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs fs_label: <label> mount_point: /mnt/data encryption: true encryption_password: \"{{ luks_password }}\"", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'cryptsetup luksUUID /dev/sdb' 4e4e7970-1822-470e-b55a-e91efe5d0f5c", "ansible managed-node-01.example.com -m command -a 'cryptsetup status luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c' /dev/mapper/luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/sdb", "ansible managed-node-01.example.com -m command -a 'cryptsetup luksDump /dev/sdb' LUKS header information Version: 2 Epoch: 3 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: 4e4e7970-1822-470e-b55a-e91efe5d0f5c Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 16777216 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_storage_devices/encrypting-block-devices-using-luks_managing-storage-devices
Chapter 2. Interacting with the Data Grid REST API
Chapter 2. Interacting with the Data Grid REST API The Data Grid REST API lets you monitor, maintain, and manage Data Grid deployments and provides access to your data. Note By default Data Grid REST API operations return 200 (OK) when successful. However, when some operations are processed successfully, they return an HTTP status code such as 204 or 202 instead of 200 . 2.1. Creating and Managing Caches Create and manage Data Grid caches and perform operations on data. 2.1.1. Creating Caches Create named caches across Data Grid clusters with POST requests that include XML or JSON configuration in the payload. Table 2.1. Headers Header Required or Optional Parameter Content-Type REQUIRED Sets the MediaType for the Data Grid configuration payload; either application/xml or application/json . Flags OPTIONAL Used to set AdminFlags 2.1.1.1. Cache configuration You can create declarative cache configuration in XML, JSON, and YAML format. All declarative caches must conform to the Data Grid schema. Configuration in JSON format must follow the structure of an XML configuration, elements correspond to objects and attributes correspond to fields. Important Data Grid restricts characters to a maximum of 255 for a cache name or a cache template name. If you exceed this character limit, Data Grid throws an exception. Write succinct cache names and cache template names. Important A file system might set a limitation for the length of a file name, so ensure that a cache's name does not exceed this limitation. If a cache name exceeds a file system's naming limitation, general operations or initialing operations towards that cache might fail. Write succinct file names. Distributed caches XML <distributed-cache owners="2" segments="256" capacity-factor="1.0" l1-lifespan="5000" mode="SYNC" statistics="true"> <encoding media-type="application/x-protostream"/> <locking isolation="REPEATABLE_READ"/> <transaction mode="FULL_XA" locking="OPTIMISTIC"/> <expiration lifespan="5000" max-idle="1000" /> <memory max-count="1000000" when-full="REMOVE"/> <indexing enabled="true" storage="local-heap"> <index-reader refresh-interval="1000"/> <indexed-entities> <indexed-entity>org.infinispan.Person</indexed-entity> </indexed-entities> </indexing> <partition-handling when-split="ALLOW_READ_WRITES" merge-policy="PREFERRED_NON_NULL"/> <persistence passivation="false"> <!-- Persistent storage configuration. 
--> </persistence> </distributed-cache> JSON { "distributed-cache": { "mode": "SYNC", "owners": "2", "segments": "256", "capacity-factor": "1.0", "l1-lifespan": "5000", "statistics": "true", "encoding": { "media-type": "application/x-protostream" }, "locking": { "isolation": "REPEATABLE_READ" }, "transaction": { "mode": "FULL_XA", "locking": "OPTIMISTIC" }, "expiration" : { "lifespan" : "5000", "max-idle" : "1000" }, "memory": { "max-count": "1000000", "when-full": "REMOVE" }, "indexing" : { "enabled" : true, "storage" : "local-heap", "index-reader" : { "refresh-interval" : "1000" }, "indexed-entities": [ "org.infinispan.Person" ] }, "partition-handling" : { "when-split" : "ALLOW_READ_WRITES", "merge-policy" : "PREFERRED_NON_NULL" }, "persistence" : { "passivation" : false } } } YAML distributedCache: mode: "SYNC" owners: "2" segments: "256" capacityFactor: "1.0" l1Lifespan: "5000" statistics: "true" encoding: mediaType: "application/x-protostream" locking: isolation: "REPEATABLE_READ" transaction: mode: "FULL_XA" locking: "OPTIMISTIC" expiration: lifespan: "5000" maxIdle: "1000" memory: maxCount: "1000000" whenFull: "REMOVE" indexing: enabled: "true" storage: "local-heap" indexReader: refreshInterval: "1000" indexedEntities: - "org.infinispan.Person" partitionHandling: whenSplit: "ALLOW_READ_WRITES" mergePolicy: "PREFERRED_NON_NULL" persistence: passivation: "false" # Persistent storage configuration. Replicated caches XML <replicated-cache segments="256" mode="SYNC" statistics="true"> <encoding media-type="application/x-protostream"/> <locking isolation="REPEATABLE_READ"/> <transaction mode="FULL_XA" locking="OPTIMISTIC"/> <expiration lifespan="5000" max-idle="1000" /> <memory max-count="1000000" when-full="REMOVE"/> <indexing enabled="true" storage="local-heap"> <index-reader refresh-interval="1000"/> <indexed-entities> <indexed-entity>org.infinispan.Person</indexed-entity> </indexed-entities> </indexing> <partition-handling when-split="ALLOW_READ_WRITES" merge-policy="PREFERRED_NON_NULL"/> <persistence passivation="false"> <!-- Persistent storage configuration. --> </persistence> </replicated-cache> JSON { "replicated-cache": { "mode": "SYNC", "segments": "256", "statistics": "true", "encoding": { "media-type": "application/x-protostream" }, "locking": { "isolation": "REPEATABLE_READ" }, "transaction": { "mode": "FULL_XA", "locking": "OPTIMISTIC" }, "expiration" : { "lifespan" : "5000", "max-idle" : "1000" }, "memory": { "max-count": "1000000", "when-full": "REMOVE" }, "indexing" : { "enabled" : true, "storage" : "local-heap", "index-reader" : { "refresh-interval" : "1000" }, "indexed-entities": [ "org.infinispan.Person" ] }, "partition-handling" : { "when-split" : "ALLOW_READ_WRITES", "merge-policy" : "PREFERRED_NON_NULL" }, "persistence" : { "passivation" : false } } } YAML replicatedCache: mode: "SYNC" segments: "256" statistics: "true" encoding: mediaType: "application/x-protostream" locking: isolation: "REPEATABLE_READ" transaction: mode: "FULL_XA" locking: "OPTIMISTIC" expiration: lifespan: "5000" maxIdle: "1000" memory: maxCount: "1000000" whenFull: "REMOVE" indexing: enabled: "true" storage: "local-heap" indexReader: refreshInterval: "1000" indexedEntities: - "org.infinispan.Person" partitionHandling: whenSplit: "ALLOW_READ_WRITES" mergePolicy: "PREFERRED_NON_NULL" persistence: passivation: "false" # Persistent storage configuration. 
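To illustrate how such a configuration might be submitted to the server, the following sketch assumes the default REST port 11222, a target cache named mycache, the JSON configuration saved as mycache.json, credentials of a user with admin permissions, and the conventional /rest/v2/caches/<cacheName> endpoint; adjust the URL, authentication mechanism, and payload for your deployment:

# Create a cache named "mycache" from a JSON configuration payload
curl -X POST \
     -u admin:password \
     -H "Content-Type: application/json" \
     --data-binary @mycache.json \
     http://localhost:11222/rest/v2/caches/mycache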
Multiple caches XML <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:config:15.0 https://infinispan.org/schemas/infinispan-config-15.0.xsd urn:infinispan:server:15.0 https://infinispan.org/schemas/infinispan-server-15.0.xsd" xmlns="urn:infinispan:config:15.0" xmlns:server="urn:infinispan:server:15.0"> <cache-container name="default" statistics="true"> <distributed-cache name="mycacheone" mode="ASYNC" statistics="true"> <encoding media-type="application/x-protostream"/> <expiration lifespan="300000"/> <memory max-size="400MB" when-full="REMOVE"/> </distributed-cache> <distributed-cache name="mycachetwo" mode="SYNC" statistics="true"> <encoding media-type="application/x-protostream"/> <expiration lifespan="300000"/> <memory max-size="400MB" when-full="REMOVE"/> </distributed-cache> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "name" : "default", "statistics" : "true", "caches" : { "mycacheone" : { "distributed-cache" : { "mode": "ASYNC", "statistics": "true", "encoding": { "media-type": "application/x-protostream" }, "expiration" : { "lifespan" : "300000" }, "memory": { "max-size": "400MB", "when-full": "REMOVE" } } }, "mycachetwo" : { "distributed-cache" : { "mode": "SYNC", "statistics": "true", "encoding": { "media-type": "application/x-protostream" }, "expiration" : { "lifespan" : "300000" }, "memory": { "max-size": "400MB", "when-full": "REMOVE" } } } } } } } YAML infinispan: cacheContainer: name: "default" statistics: "true" caches: mycacheone: distributedCache: mode: "ASYNC" statistics: "true" encoding: mediaType: "application/x-protostream" expiration: lifespan: "300000" memory: maxSize: "400MB" whenFull: "REMOVE" mycachetwo: distributedCache: mode: "SYNC" statistics: "true" encoding: mediaType: "application/x-protostream" expiration: lifespan: "300000" memory: maxSize: "400MB" whenFull: "REMOVE" Additional resources Data Grid configuration schema reference infinispan-config-15.0.xsd 2.1.2. Modifying Caches Make changes to attributes in cache configurations across Data Grid clusters with PUT requests that include XML or JSON configuration in the payload. Note You can modify a cache only if the changes are compatible with the existing configuration. For example you cannot use a replicated cache configuration to modify a distributed cache. Likewise if you create a cache configuration with a specific attribute, you cannot modify the configuration to use a different attribute instead. For example, attempting to modify cache configuration by specifying a value for the max-count attribute results in invalid configuration if the max-size is already set. Table 2.2. Headers Header Required or Optional Parameter Content-Type REQUIRED Sets the MediaType for the Data Grid configuration payload; either application/xml or application/json . Flags OPTIONAL Used to set AdminFlags 2.1.3. Verifying Caches Check if a cache exists in Data Grid clusters with HEAD requests. Retrieve a caches health with GET requests. 2.1.4. Creating Caches with Templates Create caches from Data Grid templates with POST requests and the ?template= parameter. Tip See Listing Available Cache Templates. 2.1.5. Retrieving Cache Configuration Retrieve Data Grid cache configurations with GET requests. Table 2.3. Headers Header Required or Optional Parameter Accept OPTIONAL Sets the required format to return content. Supported formats are application/xml and application/json . The default is application/json . See Accept for more information. 
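A sketch of the retrieval call, again assuming the default port, a cache named mycache, admin credentials, and the conventional /rest/v2/caches/<cacheName>?action=config endpoint; the Accept header selects the returned format as described in the table above:

# Retrieve the configuration of "mycache" as XML
curl -u admin:password \
     -H "Accept: application/xml" \
     "http://localhost:11222/rest/v2/caches/mycache?action=config"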
2.1.6. Converting Cache Configurations between XML, JSON and YAML Invoke a POST request with valid configuration and the ?action=convert parameter. Data Grid responds with the equivalent representation of the configuration in the type specified by the Accept header. To convert cache configuration you must specify the input format for the configuration with the Content-Type header and the desired output format with the Accept header. For example, the following command converts the replicated cache configuration from XML to YAML: 2.1.7. Comparing Cache Configurations Invoke a POST request with a multipart/form-data body containing two cache configurations and the ?action=compare parameter. Tip Add the ignoreMutable=true parameter to ignore mutable attributes in the comparison. Data Grid responds with 204 (No Content) in case the configurations are equal, and 409 (Conflict) in case they are different. 2.1.8. Retrieving All Cache Details Invoke a GET request to retrieve all details for Data Grid caches. Data Grid provides a JSON response such as the following: { "stats": { "time_since_start": -1, "time_since_reset": -1, "hits": -1, "current_number_of_entries": -1, "current_number_of_entries_in_memory": -1, "stores": -1, "off_heap_memory_used": -1, "data_memory_used": -1, "retrievals": -1, "misses": -1, "remove_hits": -1, "remove_misses": -1, "evictions": -1, "average_read_time": -1, "average_read_time_nanos": -1, "average_write_time": -1, "average_write_time_nanos": -1, "average_remove_time": -1, "average_remove_time_nanos": -1, "required_minimum_number_of_nodes": -1 }, "size": 0, "configuration": { "distributed-cache": { "mode": "SYNC", "transaction": { "stop-timeout": 0, "mode": "NONE" } } }, "rehash_in_progress": false, "rebalancing_enabled": true, "bounded": false, "indexed": false, "persistent": false, "transactional": false, "secured": false, "has_remote_backup": false, "indexing_in_progress": false, "statistics": false, "mode" : "DIST_SYNC", "storage_type": "HEAP", "max_size": "", "max_size_bytes" : -1 } stats current stats of the cache. size the estimated size for the cache. configuration the cache configuration. rehash_in_progress true when a rehashing is in progress. indexing_in_progress true when indexing is in progress. rebalancing_enabled is true if rebalancing is enabled. Fetching this property might fail on the server. In that case the property won't be present in the payload. bounded when expiration is enabled. indexed true if the cache is indexed. persistent true if the cache is persisted. transactional true if the cache is transactional. secured true if the cache is secured. has_remote_backup true if the cache has remote backups. key_storage the media type of the cache keys. value_storage the media type of the cache values. Note key_storage and value_storage matches encoding configuration of the cache. For server caches with no encoding, Data Grid assumes application/x-protostream when a cache is indexed and application/unknown otherwise. 2.1.9. Resetting All Cache Statistics Invoke a POST request to reset all statistics for Data Grid caches. 2.1.10. Retrieving Data Distribution of a Cache Invoke a GET request to retrieve all details for data distribution of Data Grid caches. 
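A request of the following shape is typically used; the ?action=distribution parameter and the placeholder server address, credentials, and cache name are assumptions to verify against your version of the REST API:

# Retrieve data distribution details for the cache named mycache (illustrative values)
curl -u admin:password \
  "http://localhost:11222/rest/v2/caches/mycache?action=distribution"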
Data Grid provides a JSON response such as the following: [ { "node_name": "NodeA", "node_addresses": [ "127.0.0.1:44175" ], "memory_entries": 0, "total_entries": 0, "memory_used": 528512 }, { "node_name":"NodeB", "node_addresses": [ "127.0.0.1:44187" ], "memory_entries": 0, "total_entries": 0, "memory_used": 528512 } ] Each element in the list represents a node. The properties are: node_name is the node name node_addresses is a list with all the node's physical addresses. memory_entries the number of entries the node holds in memory belonging to the cache. total_entries the number of entries the node has in memory and disk belonging to the cache. memory_used the value in bytes the eviction algorithm estimates the cache occupies. Returns -1 if eviction is not enabled. 2.1.11. Retrieving all mutable cache configuration attributes Invoke a GET request to retrieve all mutable cache configuration attributes for Data Grid caches. Data Grid provides a JSON response such as the following: [ "jmx-statistics.statistics", "locking.acquire-timeout", "transaction.single-phase-auto-commit", "expiration.max-idle", "transaction.stop-timeout", "clustering.remote-timeout", "expiration.lifespan", "expiration.interval", "memory.max-count", "memory.max-size" ] Add the full parameter to obtain values and type information: Data Grid provides a JSON response such as the following: { "jmx-statistics.statistics": { "value": true, "type": "boolean" }, "locking.acquire-timeout": { "value": 15000, "type": "long" }, "transaction.single-phase-auto-commit": { "value": false, "type": "boolean" }, "expiration.max-idle": { "value": -1, "type": "long" }, "transaction.stop-timeout": { "value": 30000, "type": "long" }, "clustering.remote-timeout": { "value": 17500, "type": "long" }, "expiration.lifespan": { "value": -1, "type": "long" }, "expiration.interval": { "value": 60000, "type": "long" }, "memory.max-count": { "value": -1, "type": "long" }, "memory.max-size": { "value": null, "type": "string" } } For attributes of type enum , an additional universe property will contain the set of possible values. 2.1.12. Updating cache configuration attributes Invoke a POST request to change a mutable cache configuration attribute. 2.1.13. Adding Entries Add entries to caches with POST requests. The preceding request places the payload, or request body, in the cacheName cache with the cacheKey key. The request replaces any data that already exists and updates the Time-To-Live and Last-Modified values, if they apply. If the entry is created successfully, the service returns 204 (No Content) . If a value already exists for the specified key, the POST request returns 409 (Conflict) and does not modify the value. To update values, you should use PUT requests. See Replacing Entries . Table 2.4. Headers Header Required or Optional Parameter Key-Content-Type OPTIONAL Sets the content type for the key in the request. See Key-Content-Type for more information. Content-Type OPTIONAL Sets the MediaType of the value for the key. timeToLiveSeconds OPTIONAL Sets the number of seconds before the entry is automatically deleted. If you do not set this parameter, Data Grid uses the default value from the configuration. If you set a negative value, the entry is never deleted. maxIdleTimeSeconds OPTIONAL Sets the number of seconds that entries can be idle. If a read or write operation does not occur for an entry after the maximum idle time elapses, the entry is automatically deleted. 
If you do not set this parameter, Data Grid uses the default value from the configuration. If you set a negative value, the entry is never deleted. flags OPTIONAL The flags used to add the entry. See Flag for more information. Note The flags header also applies to all other operations involving data manipulation on the cache, Note If both timeToLiveSeconds and maxIdleTimeSeconds have a value of 0 , Data Grid uses the default lifespan and maxIdle values from the configuration. If only maxIdleTimeSeconds has a value of 0 , Data Grid uses: the default maxIdle value from the configuration. the value for timeToLiveSeconds that you pass as a request parameter or a value of -1 if you do not pass a value. If only timeToLiveSeconds has a value of 0 , Data Grid uses: the default lifespan value from the configuration. the value for maxIdle that you pass as a request parameter or a value of -1 if you do not pass a value. 2.1.14. Replacing Entries Replace entries in caches with PUT requests. If a value already exists for the specified key, the PUT request updates the value. If you do not want to modify existing values, use POST requests that return 409 (Conflict) instead of modifying values. See Adding Values . 2.1.15. Retrieving Data By Keys Retrieve data for specific keys with GET requests. The server returns data from the given cache, cacheName , under the given key, cacheKey , in the response body. Responses contain Content-Type headers that correspond to the MediaType negotiation. Note Browsers can also access caches directly, for example as a content delivery network (CDN). Data Grid returns a unique ETag for each entry along with the Last-Modified and Expires header fields. These fields provide information about the state of the data that is returned in your request. ETags allow browsers and other clients to request only data that has changed, which conserves bandwidth. Table 2.5. Headers Header Required or Optional Parameter Key-Content-Type OPTIONAL Sets the content type for the key in the request. The default is application/x-java-object; type=java.lang.String . See Key-Content-Type for more information. Accept OPTIONAL Sets the required format to return content. See Accept for more information. Tip Append the extended parameter to the query string to get additional information: The preceding request returns custom headers: Cluster-Primary-Owner returns the node name that is the primary owner of the key. Cluster-Node-Name returns the JGroups node name of the server that handled the request. Cluster-Physical-Address returns the physical JGroups address of the server that handled the request. 2.1.16. Checking if Entries Exist Verify that specific entries exists with HEAD requests. The preceding request returns only the header fields and the same content that you stored with the entry. For example, if you stored a String, the request returns a String. If you stored binary, base64-encoded, blobs or serialized Java objects, Data Grid does not de-serialize the content in the request. Note HEAD requests also support the extended parameter. Table 2.6. Headers Header Required or Optional Parameter Key-Content-Type OPTIONAL Sets the content type for the key in the request. The default is application/x-java-object; type=java.lang.String . See Key-Content-Type for more information. 2.1.17. Deleting Entries Remove entries from caches with DELETE requests. Table 2.7. Headers Header Required or Optional Parameter Key-Content-Type OPTIONAL Sets the content type for the key in the request. 
The default is application/x-java-object; type=java.lang.String . See Key-Content-Type for more information. 2.1.18. Checking distribution of cache entries Invoke this endpoint to retrieve details for the data distribution of a Data Grid cache entry. Data Grid provides a JSON response such as the following: { "contains_key": true, "owners": [ { "node_name": "NodeA", "primary": true, "node_addresses": [ "127.0.0.1:39492" ] }, { "node_name": "NodeB", "primary": false, "node_addresses": [ "127.0.0.1:38195" ] } ] } contains_key returns true if the cache contains the key owners provides a list of nodes that contain the key List of owners includes the following properties: node_name shows the name of the node primary identifies a node that is the primary owner node_addresses shows the IP addresses and ports where the node can be accessed 2.1.19. Deleting Caches Remove caches from Data Grid clusters with DELETE requests. 2.1.20. Retrieving All Keys from Caches Invoke GET requests to retrieve all the keys in a cache in JSON format. Table 2.8. Request Parameters Parameter Required or Optional Value limit OPTIONAL Specifies the maximum number of keys to retrieve using an InputStream. A negative value retrieves all keys. The default value is -1 . batch OPTIONAL Specifies the internal batch size when retrieving the keys. The default value is 1000 . 2.1.21. Retrieving All Entries from Caches Invoke GET requests to retrieve all the entries in a cache in JSON format. Table 2.9. Request Parameters Parameter Required or Optional Value metadata OPTIONAL Includes metadata for each entry in the response. The default value is false . limit OPTIONAL Specifies the maximum number of keys to include in the response. A negative value retrieves all keys. The default value is -1 . batch OPTIONAL Specifies the internal batch size when retrieving the keys. The default value is 1000 . content-negotiation OPTIONAL If true , converts keys and values to a readable format. For caches with text encodings (e.g., text/plain, xml, json), the server returns keys and values as plain text. For caches with binary encodings, the server returns the entries as JSON if the conversion is supported, otherwise in a text hexadecimal format, e.g., 0xA123CF98 . When content-negotiation is used, the response contains two headers: key-content-type and value-content-type to describe the negotiated format. Data Grid provides a JSON response such as the following: [ { "key": 1, "value": "value1", "timeToLiveSeconds": -1, "maxIdleTimeSeconds": -1, "created": -1, "lastUsed": -1, "expireTime": -1 }, { "key": 2, "value": "value2", "timeToLiveSeconds": 10, "maxIdleTimeSeconds": 45, "created": 1607966017944, "lastUsed": 1607966017944, "expireTime": 1607966027944, "version": 7 }, { "key": 3, "value": "value2", "timeToLiveSeconds": 10, "maxIdleTimeSeconds": 45, "created": 1607966017944, "lastUsed": 1607966017944, "expireTime": 1607966027944, "version": 7, "topologyId": 9 } ] key The key for the entry. value The value of the entry. timeToLiveSeconds Based on the entry lifespan but in seconds, or -1 if the entry never expires. It's not returned unless you set metadata="true". maxIdleTimeSeconds Maximum idle time, in seconds, or -1 if the entry never expires. It's not returned unless you set metadata="true". created Time the entry was created or -1 for immortal entries. It's not returned unless you set metadata="true". lastUsed Last time an operation was performed on the entry or -1 for immortal entries.
It's not returned unless you set metadata="true". expireTime Time when the entry expires or -1 for immortal entries. It's not returned unless you set metadata="true". version The metadata version related to the cache entry. Only if the value is present. topologyId The topology Id of a clustered version metadata. Only if the value is present. 2.1.22. Clearing Caches To delete all data from a cache, invoke a POST request with the ?action=clear parameter. If the operation successfully completes, the service returns 204 (No Content) . 2.1.23. Getting Cache Size Retrieve the size of caches across the entire cluster with GET requests and the ?action=size parameter. 2.1.24. Getting Cache Statistics Obtain runtime statistics for caches with GET requests. 2.1.25. Listing Caches List all available caches in Data Grid clusters with GET requests. 2.1.26. Obtaining Caches Status and Information Retrieve a list of all available caches for the cache manager, along with cache statuses and details, with GET requests. Data Grid responds with JSON arrays that lists and describes each available cache, as in the following example: [ { "status" : "RUNNING", "name" : "cache1", "type" : "local-cache", "simple_cache" : false, "transactional" : false, "persistent" : false, "bounded": false, "secured": false, "indexed": true, "has_remote_backup": true, "health":"HEALTHY", "rebalancing_enabled": true }, { "status" : "RUNNING", "name" : "cache2", "type" : "distributed-cache", "simple_cache" : false, "transactional" : true, "persistent" : false, "bounded": false, "secured": false, "indexed": true, "has_remote_backup": true, "health":"HEALTHY", "rebalancing_enabled": false }] Table 2.10. Request parameters Parameter Required or Optional Description pretty OPTIONAL If true returns formatted content, including additional spacing and line separators which improve readability but increase payload size. The default is false . 2.1.27. Listing accessible caches for a role When security is enabled, retrieve a list of all the accessible caches for a role. This operation requires ADMIN permission. Data Grid responds with JSON as in the following example: { "secured" : ["securedCache1", "securedCache2"], "non-secured" : ["cache1", "cache2", "cache3"] } Table 2.11. Request parameters Parameter Required or Optional Description pretty OPTIONAL If true returns formatted content, including additional spacing and line separators which improve readability but increase payload size. The default is false . 2.1.28. Listening to cache events Receive cache events using Server-Sent Events . The event value will be one of cache-entry-created , cache-entry-removed , cache-entry-updated , cache-entry-expired . The data value will contain the key of the entry that has fired the event in the format set by the Accept header. Table 2.12. Headers Header Required or Optional Parameter Accept OPTIONAL Sets the required format to return content. Supported formats are text/plain and application/json . The default is application/json . See Accept for more information. 2.1.29. Enabling rebalancing Turn on automatic rebalancing for a specific cache. 2.1.30. Disabling rebalancing Turn off automatic rebalancing for a specific cache. 2.1.31. Getting Cache Availability Retrieve the availability of a cache. Note You can get the availability of internal caches but this is subject to change in future Data Grid versions. 2.1.32. 
Setting Cache Availability Change the availability of clustered caches when using either the DENY_READ_WRITES or ALLOW_READS partition handling strategy. Table 2.13. Request Parameters Parameter Required or Optional Value availability REQUIRED AVAILABLE or DEGRADED_MODE AVAILABLE makes caches available to all nodes in a network partition. DEGRADED_MODE prevents read and write operations on caches when network partitions occur. Note You can set the availability of internal caches but this is subject to change in future Data Grid versions. 2.1.33. Set a Stable Topology By default, after a cluster shutdown, Data Grid waits for all nodes to join the cluster and restore the topology. However, you can define the current cluster topology as stable for a specific cache using a REST operation. Table 2.14. Request Parameters Parameter Required or Optional Value force OPTIONAL true or false. force is required when the number of missing nodes in the current topology is greater than or equal to the number of owners. Important Manually installing a topology can lead to data loss. Only perform this operation if the initial topology cannot be recreated. 2.1.34. Indexing and Querying with the REST API Query remote caches with GET requests and the ?action=search&query parameter from any HTTP client. Data Grid response { "hit_count" : 150, "hit_count_exact" : true, "hits" : [ { "hit" : { "name" : "user1", "age" : 35 } }, { "hit" : { "name" : "user2", "age" : 42 } }, { "hit" : { "name" : "user3", "age" : 12 } } ] } hit_count shows the total number of results from the query. hit_count_exact is true , which means the hit_count is exact. When it's false , it implies that the hit count value is a lower bound. hits represents an array of individual matches from the query. hit refers to each object that corresponds to a match in the query. Tip Hits can contain all fields or a subset of fields if you use a Select clause. Table 2.15. Request Parameters Parameter Required or Optional Value query REQUIRED Specifies the query string. offset OPTIONAL Specifies the index of the first result to return. The default is 0 . max_results OPTIONAL Sets the number of results to return. The default is 10 . hit_count_accuracy OPTIONAL Limits the required accuracy of the hit count for the indexed queries to an upper-bound. The default is 10_000 . You can change the default limit by setting the query.hit-count-accuracy cache property. local OPTIONAL When true , the query is restricted to the data present in the node that processes the request. The default is false . To use the body of the request instead of specifying query parameters, invoke POST requests as follows: Query in request body { "query":"from Entity where name:\"user1\"", "max_results":20, "offset":10 } 2.1.34.1. Rebuilding indexes When you delete fields or change index field definitions, you must rebuild the index to ensure the index is consistent with data in the cache. Note Rebuilding the Protobuf schema using REST, CLI, Data Grid Console, or a remote client might lead to inconsistencies. Remote clients might have different versions of the Protostream entity and this might lead to unreliable behavior. Reindex all data in caches with POST requests and the ?action=reindex parameter. Table 2.16. Request Parameters Parameter Required or Optional Value mode OPTIONAL Values for the mode parameter are as follows: * sync returns 204 (No Content) only after the re-indexing operation is complete.
* async returns 204 (No Content) immediately and the re-indexing operation continues running in the cluster. You can check the status with the Index Statistics REST call. local OPTIONAL When true , only the data from node that process the request is re-indexed. The default is false , meaning all data cluster-wide is re-indexed. 2.1.34.2. Updating index schema The update index schema operation lets you add schema changes with a minimal downtime. Instead of removing previously indexed data and recreating the index schema, Data Grid adds new fields to the existing schema. Update the index schema of values in your cache using POST requests and the ?action=updateSchema parameter. 2.1.34.3. Purging indexes Delete all indexes from caches with POST requests and the ?action=clear parameter. If the operation successfully completes, the service returns 204 (No Content) . 2.1.34.4. Get Indexes Metamodel Present the full index schema metamodel of all indexes defined on this cache. Data Grid response [{ "entity-name": "org.infinispan.query.test.Book", "java-class": "org.infinispan.query.test.Book", "index-name": "org.infinispan.query.test.Book", "value-fields": { "description": { "multi-valued": false, "multi-valued-in-root": false, "type": "java.lang.String", "projection-type": "java.lang.String", "argument-type": "java.lang.String", "searchable": true, "sortable": false, "projectable": false, "aggregable": false, "analyzer": "standard" }, "name": { "multi-valued": false, "multi-valued-in-root": true, "type": "java.lang.String", "projection-type": "java.lang.String", "argument-type": "java.lang.String", "searchable": true, "sortable": false, "projectable": false, "aggregable": false, "analyzer": "standard" }, "surname": { "multi-valued": false, "multi-valued-in-root": true, "type": "java.lang.String", "projection-type": "java.lang.String", "argument-type": "java.lang.String", "searchable": true, "sortable": false, "projectable": false, "aggregable": false }, "title": { "multi-valued": false, "multi-valued-in-root": false, "type": "java.lang.String", "projection-type": "java.lang.String", "argument-type": "java.lang.String", "searchable": true, "sortable": false, "projectable": false, "aggregable": false } }, "object-fields": { "authors": { "multi-valued": true, "multi-valued-in-root": true, "nested": true, "value-fields": { "name": { "multi-valued": false, "multi-valued-in-root": true, "type": "java.lang.String", "projection-type": "java.lang.String", "argument-type": "java.lang.String", "searchable": true, "sortable": false, "projectable": false, "aggregable": false, "analyzer": "standard" }, "surname": { "multi-valued": false, "multi-valued-in-root": true, "type": "java.lang.String", "projection-type": "java.lang.String", "argument-type": "java.lang.String", "searchable": true, "sortable": false, "projectable": false, "aggregable": false } } } } }, { "entity-name": "org.infinispan.query.test.Author", "java-class": "org.infinispan.query.test.Author", "index-name": "org.infinispan.query.test.Author", "value-fields": { "surname": { "multi-valued": false, "multi-valued-in-root": false, "type": "java.lang.String", "projection-type": "java.lang.String", "argument-type": "java.lang.String", "searchable": true, "sortable": false, "projectable": false, "aggregable": false }, "name": { "multi-valued": false, "multi-valued-in-root": false, "type": "java.lang.String", "projection-type": "java.lang.String", "argument-type": "java.lang.String", "searchable": true, "sortable": false, "projectable": false, 
"aggregable": false, "analyzer": "standard" } } }] 2.1.34.5. Retrieving Query and Index Statistics Obtain information about queries and indexes in caches with GET requests. Note You must enable statistics in the cache configuration or results are empty. Table 2.17. Request Parameters Parameter Required or Optional Value scope OPTIONAL Use cluster to retrieve consolidated statistics for all members of the cluster. When omitted, Data Grid returns statistics for the local queries and indexes. Data Grid response { "query": { "indexed_local": { "count": 1, "average": 12344.2, "max": 122324, "slowest": "FROM Entity WHERE field > 4" }, "indexed_distributed": { "count": 0, "average": 0.0, "max": -1, "slowest": "FROM Entity WHERE field > 4" }, "hybrid": { "count": 0, "average": 0.0, "max": -1, "slowest": "FROM Entity WHERE field > 4 AND desc = 'value'" }, "non_indexed": { "count": 0, "average": 0.0, "max": -1, "slowest": "FROM Entity WHERE desc = 'value'" }, "entity_load": { "count": 123, "average": 10.0, "max": 120 } }, "index": { "types": { "org.infinispan.same.test.Entity": { "count": 5660001, "size": 0 }, "org.infinispan.same.test.AnotherEntity": { "count": 40, "size": 345560 } }, "reindexing": false } } In the query section: indexed_local Provides details about indexed queries. indexed_distributed Provides details about distributed indexed queries. hybrid Provides details about queries that used the index only partially. non_indexed Provides details about queries that didn't use the index. entity_load Provides details about cache operations to fetch objects after indexed queries execution. Note Time is always measured in nanoseconds. In the index section: types Provide details about each indexed type (class name or protobuf message) that is configured in the cache. count The number of entities indexed for the type. size Usage in bytes of the type. reindexing If the value is true , the Indexer is running in the cache. 2.1.34.6. Clearing Query Statistics Reset runtime statistics with POST requests and the ?action=clear parameter. Data Grid resets only query execution times for the local node only. This operation does not clear index statistics. 2.1.34.7. Retrieving Index Statistics (Deprecated) Obtain information about indexes in caches with GET requests. Data Grid response { "indexed_class_names": ["org.infinispan.sample.User"], "indexed_entities_count": { "org.infinispan.sample.User": 4 }, "index_sizes": { "cacheName_protobuf": 14551 }, "reindexing": false } indexed_class_names Provides the class names of the indexes present in the cache. For Protobuf the value is always org.infinispan.query.remote.impl.indexing.ProtobufValueWrapper . indexed_entities_count Provides the number of entities indexed per class. index_sizes Provides the size, in bytes, for each index in the cache. reindexing Indicates if a re-indexing operation was performed for the cache. If the value is true , the MassIndexer was started in the cache. 2.1.34.8. Retrieving Query Statistics (Deprecated) Get information about the queries that have been run in caches with GET requests. Data Grid response { "search_query_execution_count":20, "search_query_total_time":5, "search_query_execution_max_time":154, "search_query_execution_avg_time":2, "object_loading_total_time":1, "object_loading_execution_max_time":1, "object_loading_execution_avg_time":1, "objects_loaded_count":20, "search_query_execution_max_time_query_string": "FROM entity" } search_query_execution_count Provides the number of queries that have been run. 
search_query_total_time Provides the total time spent on queries. search_query_execution_max_time Provides the maximum time taken for a query. search_query_execution_avg_time Provides the average query time. object_loading_total_time Provides the total time spent loading objects from the cache after query execution. object_loading_execution_max_time Provides the maximum time spent loading objects after query execution. object_loading_execution_avg_time Provides the average time spent loading objects after query execution. objects_loaded_count Provides the count of objects loaded. search_query_execution_max_time_query_string Provides the slowest query executed. 2.1.34.9. Clearing Query Statistics (Deprecated) Reset runtime statistics with POST requests and the ?action=clear parameter. 2.1.35. Cross-Site Operations with Caches Perform cross-site replication operations with the Data Grid REST API. 2.1.35.1. Getting status of all backup locations Retrieve the status of all backup locations with GET requests. Data Grid responds with the status of each backup location in JSON format, as in the following example: { "NYC": { "status": "online" }, "LON": { "status": "mixed", "online": [ "NodeA" ], "offline": [ "NodeB" ] } } Table 2.18. Returned Status Value Description online All nodes in the local cluster have a cross-site view with the backup location. offline No nodes in the local cluster have a cross-site view with the backup location. mixed Some nodes in the local cluster have a cross-site view with the backup location, while other nodes in the local cluster do not. The response indicates status for each node. 2.1.35.2. Getting status of specific backup locations Retrieve the status of a backup location with GET requests. Data Grid responds with the status of each node in the site in JSON format, as in the following example: { "NodeA":"offline", "NodeB":"online" } Table 2.19. Returned Status Value Description online The node is online. offline The node is offline. failed Not possible to retrieve the status. The remote cache could be shutting down or a network error occurred during the request. 2.1.35.3. Taking backup locations offline Take backup locations offline with POST requests and the ?action=take-offline parameter. 2.1.35.4. Bringing backup locations online Bring backup locations online with the ?action=bring-online parameter. 2.1.35.5. Pushing state to backup locations Push cache state to a backup location with the ?action=start-push-state parameter. 2.1.35.6. Canceling state transfer Cancel state transfer operations with the ?action=cancel-push-state parameter. 2.1.35.7. Getting state transfer status Retrieve the status of state transfer operations with the ?action=push-state-status parameter. Data Grid responds with the status of state transfer for each backup location in JSON format, as in the following example: { "NYC":"CANCELED", "LON":"OK" } Table 2.20. Returned status Value Description SENDING State transfer to the backup location is in progress. OK State transfer completed successfully. ERROR An error occurred with state transfer. Check log files. CANCELLING State transfer cancellation is in progress. 2.1.35.8. Clearing state transfer status Clear state transfer status for sending sites with the ?action=clear-push-state-status parameter. 2.1.35.9. Modifying take offline conditions Sites go offline if certain conditions are met. Modify the take offline parameters to control when backup locations automatically go offline.
Procedure Check configured take offline parameters with GET requests and the take-offline-config parameter. The Data Grid response includes after_failures and min_wait fields as follows: { "after_failures": 2, "min_wait": 1000 } Modify take offline parameters in the body of PUT requests. If the operation successfully completes, the service returns 204 (No Content) . 2.1.35.10. Canceling state transfer from receiving sites If the connection between two backup locations breaks, you can cancel state transfer on the site that is receiving the push. Cancel state transfer from a remote site and keep the current state of the local cache with the ?action=cancel-receive-state parameter. 2.1.36. Rolling Upgrades Perform rolling upgrades of cache data between Data Grid clusters. 2.1.36.1. Connecting Source Clusters Connect a target cluster to the source cluster with: You must provide a remote-store definition in JSON format as the body: JSON { "remote-store": { "cache": "my-cache", "shared": true, "raw-values": true, "socket-timeout": 60000, "protocol-version": "2.9", "remote-server": [ { "host": "127.0.0.2", "port": 12222 } ], "connection-pool": { "max-active": 110, "exhausted-action": "CREATE_NEW" }, "async-executor": { "properties": { "name": 4 } }, "security": { "authentication": { "server-name": "servername", "digest": { "username": "username", "password": "password", "realm": "realm", "sasl-mechanism": "DIGEST-MD5" } }, "encryption": { "protocol": "TLSv1.2", "sni-hostname": "snihostname", "keystore": { "filename": "/path/to/keystore_client.jks", "password": "secret", "certificate-password": "secret", "key-alias": "hotrod", "type": "JKS" }, "truststore": { "filename": "/path/to/gca.jks", "password": "secret", "type": "JKS" } } } } } Several elements are optional, such as security , async-executor , and connection-pool . At a minimum, the configuration must contain the cache name, the raw-values attribute set to false , and the host/IP of the single port in the source cluster. For details about the remote-store configuration, consult the XSD Schema . If the operation successfully completes, the service returns 204 (No Content). If the target cluster is already connected to the source cluster, it returns status 304 (Not Modified). 2.1.36.2. Obtaining Source Cluster connection details To obtain the remote-store definition of a cache, use a GET request: If the cache was previously connected, it returns the configuration of the associated remote-store in JSON format and status 200 (OK), otherwise a 404 (Not Found) status. Note This is not a cluster-wide operation, and it only returns the remote-store of the cache in the node where the REST invocation is handled. 2.1.36.3. Checking if a Cache is connected To check if a cache has been connected to a remote cluster, use a HEAD request: Returns status 200 (OK) if cacheName has a single remote store configured on all nodes of the cluster, and 404 (NOT_FOUND) otherwise. 2.1.36.4. Synchronizing Data Synchronize data from a source cluster to a target cluster with POST requests and the ?action=sync-data parameter: When the operation completes, Data Grid responds with the total number of entries copied to the target cluster. 2.1.36.5. Disconnecting Source Clusters After you synchronize data to target clusters, disconnect from the source cluster with DELETE requests: If the operation successfully completes, the service returns 204 (No Content) . If no source was connected, it returns code 304 (Not Modified). 2.2.
Creating and Managing Counters Create, delete, and modify counters via the REST API. 2.2.1. Creating Counters Create counters with POST requests that include configuration in the payload. Example Weak Counter { "weak-counter":{ "initial-value":5, "storage":"PERSISTENT", "concurrency-level":1 } } Example Strong Counter { "strong-counter":{ "initial-value":3, "storage":"PERSISTENT", "upper-bound":5 } } 2.2.2. Deleting Counters Remove specific counters with DELETE requests. 2.2.3. Retrieving Counter Configuration Retrieve configuration for specific counters with GET requests. Data Grid responds with the counter configuration in JSON format. 2.2.4. Getting Counter Values Retrieve counter values with GET requests. Table 2.21. Headers Header Required or Optional Parameter Accept OPTIONAL The required format to return the content. Supported formats are application/json and text/plain . JSON is assumed if no header is provided. 2.2.5. Resetting Counters Restore the initial value of counters with POST requests and the ?action=reset parameter. If the operation successfully completes, the service returns 204 (No Content) . 2.2.6. Incrementing Counters Increment counter values with POST requests and the ?action=increment parameter. Note WEAK counters never respond with values after operations and return 204 (No Content) . STRONG counters return 200 (OK) and the current value after each operation. 2.2.7. Adding Deltas to Counters Add arbitrary values to counters with POST requests that include the ?action=add and delta parameters. Note WEAK counters never respond with values after operations and return 204 (No Content) . STRONG counters return 200 (OK) and the current value after each operation. 2.2.8. Decrementing Counter Values Decrement counter values with POST requests and the ?action=decrement parameter. Note WEAK counters never respond with values after operations and return 204 (No Content) . STRONG counters return 200 (OK) and the current value after each operation. 2.2.9. Performing getAndSet atomic operations on Strong Counters Atomically set values for strong counters with POST requests and the getAndSet parameter. If the operation is successful, Data Grid returns the value in the payload. 2.2.10. Performing compareAndSet Operations on Strong Counters Atomically set values for strong counters with GET requests and the compareAndSet parameter. Data Grid atomically sets the value to {update} if the current value is {expect} . If the operation is successful, Data Grid returns true . 2.2.11. Performing compareAndSwap Operations on Strong Counters Atomically set values for strong counters with GET requests and the compareAndSwap parameter. Data Grid atomically sets the value to {update} if the current value is {expect} . If the operation is successful, Data Grid returns the value in the payload. 2.2.12. Listing Counters Retrieve a list of counters in Data Grid clusters with GET requests. 2.3. Working with Protobuf Schemas Create and manage Protobuf schemas, .proto files, via the Data Grid REST API. 2.3.1. Creating Protobuf Schemas Create Protobuf schemas across Data Grid clusters with POST requests that include the content of a protobuf file in the payload. If the schema already exists, Data Grid returns HTTP 409 (Conflict) . If the schema is not valid, either because of syntax errors, or because some of its dependencies are missing, Data Grid stores the schema and returns the error in the response body. Data Grid responds with the schema name and any errors.
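As a sketch, a schema file can be registered with a request along the following lines, where the /rest/v2/schemas path, server address, and credentials are assumptions to confirm against your Data Grid version; the JSON shown below is an example of the response when validation fails:

# Register the users.proto schema from a local file (illustrative values)
curl -X POST -u admin:password \
  --data-binary @users.proto \
  http://localhost:11222/rest/v2/schemas/users.proto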
{ "name" : "users.proto", "error" : { "message": "Schema users.proto has errors", "cause": "java.lang.IllegalStateException:Syntax error in error.proto at 3:8: unexpected label: messoge" } } name is the name of the Protobuf schema. error is null for valid Protobuf schemas. If Data Grid cannot successfully validate the schema, it returns errors. If the operation successfully completes, the service returns 201 (Created) . 2.3.2. Reading Protobuf Schemas Retrieve Protobuf schema from Data Grid with GET requests. 2.3.3. Updating Protobuf Schemas Modify Protobuf schemas with PUT requests that include the content of a protobuf file in the payload. Important When you make changes to the existing Protobuf schema definition, you must either update or rebuild the index schema. If the changes involve modifying the existing fields, then you must rebuild the index. When you add new fields without touching existing schema, you can update the index schema instead of rebuilding it. If the schema is not valid, either because of syntax errors, or because some of its dependencies are missing, Data Grid updates the schema and returns the error in the response body. { "name" : "users.proto", "error" : { "message": "Schema users.proto has errors", "cause": "java.lang.IllegalStateException:Syntax error in error.proto at 3:8: unexpected label: messoge" } } name is the name of the Protobuf schema. error is null for valid Protobuf schemas. If Data Grid cannot successfully validate the schema, it returns errors. 2.3.4. Deleting Protobuf Schemas Remove Protobuf schemas from Data Grid clusters with DELETE requests. If the operation successfully completes, the service returns 204 (No Content) . 2.3.5. Listing Protobuf Schemas List all available Protobuf schemas with GET requests. Data Grid responds with a list of all schemas available on the cluster. [ { "name" : "users.proto", "error" : { "message": "Schema users.proto has errors", "cause": "java.lang.IllegalStateException:Syntax error in error.proto at 3:8: unexpected label: messoge" } }, { "name" : "people.proto", "error" : null }] name is the name of the Protobuf schema. error is null for valid Protobuf schemas. If Data Grid cannot successfully validate the schema, it returns errors. 2.3.6. Listing Protobuf Types List all available Protobuf types with GET requests. Data Grid responds with a list of all types available on the cluster. ["org.infinispan.Person", "org.infinispan.Phone"] 2.4. Working with Cache Managers Interact with Data Grid Cache Managers to get cluster and usage statistics. 2.4.1. Getting Basic Container Information Retrieving information about the cache manager with GET requests. Data Grid responds with information in JSON format, as in the following example: Note Information about caches with security authorization is available only to users with the specific roles and permissions assigned to them. 
{ "version":"xx.x.x-FINAL", "name":"default", "coordinator":true, "cache_configuration_names":[ "___protobuf_metadata", "cache2", "CacheManagerResourceTest", "cache1" ], "cluster_name":"ISPN", "physical_addresses":"[127.0.0.1:35770]", "coordinator_address":"CacheManagerResourceTest-NodeA-49696", "cache_manager_status":"RUNNING", "created_cache_count":"3", "running_cache_count":"3", "node_address":"CacheManagerResourceTest-NodeA-49696", "cluster_members":[ "CacheManagerResourceTest-NodeA-49696", "CacheManagerResourceTest-NodeB-28120" ], "cluster_members_physical_addresses":[ "127.0.0.1:35770", "127.0.0.1:60031" ], "cluster_size":2, "defined_caches":[ { "name":"CacheManagerResourceTest", "started":true }, { "name":"cache1", "started":true }, { "name":"___protobuf_metadata", "started":true }, { "name":"cache2", "started":true } ], "local_site": "LON", "relay_node": true, "relay_nodes_address": [ "CacheManagerResourceTest-NodeA-49696" ], "sites_view": [ "LON", "NYC" ], "rebalancing_enabled": true } version contains the Data Grid version name contains the name of the Cache Manager as defined in the configuration coordinator is true if the Cache Manager is the coordinator of the cluster cache_configuration_names contains an array of all caches configurations defined in the Cache Manager that are accessible to the current user cluster_name contains the name of the cluster as defined in the configuration physical_addresses contains the physical network addresses associated with the Cache Manager coordinator_address contains the physical network addresses of the coordinator of the cluster cache_manager_status the lifecycle status of the Cache Manager. For possible values, check the org.infinispan.lifecycle.ComponentStatus documentation created_cache_count number of created caches, excludes all internal and private caches running_cache_count number of created caches that are running node_address contains the logical address of the Cache Manager cluster_members and cluster_members_physical_addresses an array of logical and physical addresses of the members of the cluster cluster_size number of members in the cluster defined_caches A list of all caches defined in the Cache Manager, excluding private caches but including internal caches that are accessible local_site The name of the local site. If cross-site replication is not configured, Data Grid returns "local". relay_node is true if the node handles RELAY messages between clusters. relay_nodes_address is an array of logical addresses for relay nodes. sites_view The list of sites that participate in cross-site replication. If cross-site replication is not configured, Data Grid returns an empty list. rebalancing_enabled is true if rebalancing is enabled. Fetching this property might fail on the server. In that case the property won't be present in the payload. 2.4.2. Getting Cluster Health Retrieve health information for Data Grid clusters with GET requests. Data Grid responds with cluster health information in JSON format, as in the following example: { "cluster_health":{ "cluster_name":"ISPN", "health_status":"HEALTHY", "number_of_nodes":2, "node_names":[ "NodeA-36229", "NodeB-28703" ] }, "cache_health":[ { "status":"HEALTHY", "cache_name":"___protobuf_metadata" }, { "status":"HEALTHY", "cache_name":"cache2" }, { "status":"HEALTHY", "cache_name":"mycache" }, { "status":"HEALTHY", "cache_name":"cache1" } ] } cluster_health contains the health of the cluster cluster_name specifies the name of the cluster as defined in the configuration. 
health_status provides one of the following: DEGRADED indicates at least one of the caches is in degraded mode. HEALTHY_REBALANCING indicates at least one cache is in the rebalancing state. HEALTHY indicates all cache instances in the cluster are operating as expected. FAILED indicates the cache failed to start with the provided configuration. number_of_nodes displays the total number of cluster members. Returns a value of 0 for non-clustered (standalone) servers. node_names is an array of all cluster members. Empty for standalone servers. cache_health contains health information per-cache status HEALTHY, DEGRADED, HEALTHY_REBALANCING or FAILED cache_name the name of the cache as defined in the configuration. 2.4.3. Getting Container Health Status Retrieve the health status of the Data Grid container with GET requests that do not require authentication. Data Grid responds with one of the following in text/plain format: HEALTHY HEALTHY_REBALANCING DEGRADED FAILED 2.4.4. Checking REST Endpoint Availability Verify Data Grid server REST endpoint availability with HEAD requests. If you receive a successful response code then the Data Grid REST server is running and serving requests. 2.4.5. Obtaining Global Configuration Retrieve global configuration for the data container with GET requests. Table 2.22. Headers Header Required or Optional Parameter Accept OPTIONAL The required format to return the content. Supported formats are application/json and application/xml . JSON is assumed if no header is provided. Table 2.23. Request parameters Parameter Required or Optional Description pretty OPTIONAL If true returns formatted content, including additional spacing and line separators which improve readability but increase payload size. The default is false . Reference GlobalConfiguration 2.4.6. Obtaining Configuration for All Caches Retrieve the configuration for all caches with GET requests. Data Grid responds with JSON arrays that contain each cache and cache configuration, as in the following example: [ { "name":"cache1", "configuration":{ "distributed-cache":{ "mode":"SYNC", "partition-handling":{ "when-split":"DENY_READ_WRITES" }, "statistics":true } } }, { "name":"cache2", "configuration":{ "distributed-cache":{ "mode":"SYNC", "transaction":{ "mode":"NONE" } } } } ] Table 2.24. Request parameters Parameter Required or Optional Description pretty OPTIONAL If true returns formatted content, including additional spacing and line separators which improve readability but increase payload size. The default is false . 2.4.7. Listing Available Cache Templates Retrieve all available Data Grid cache templates with GET requests. Tip See Creating Caches with Templates . Table 2.25. Request parameters Parameter Required or Optional Description pretty OPTIONAL If true returns formatted content, including additional spacing and line separators which improve readability but increase payload size. The default is false . 2.4.8. Getting Container Statistics Retrieve the statistics of the container with GET requests. 
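A typical request looks like the following; the cache manager name, server address, and credentials are placeholders, and the path should be checked against the REST reference for your Data Grid version:

# Retrieve container statistics for the default cache manager (illustrative values)
curl -u admin:password \
  http://localhost:11222/rest/v2/cache-managers/default/stats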
Data Grid responds with Cache Manager statistics in JSON format, as in the following example: { "statistics_enabled":true, "read_write_ratio":0.0, "time_since_start":1, "time_since_reset":1, "number_of_entries":0, "off_heap_memory_used":0, "data_memory_used":0, "misses":0, "remove_hits":0, "remove_misses":0, "evictions":0, "average_read_time":0, "average_read_time_nanos":0, "average_write_time":0, "average_write_time_nanos":0, "average_remove_time":0, "average_remove_time_nanos":0, "required_minimum_number_of_nodes":1, "hits":0, "stores":0, "current_number_of_entries_in_memory":0, "hit_ratio":0.0, "retrievals":0 } statistics_enabled is true if statistics collection is enabled for the Cache Manager. read_write_ratio displays the read/write ratio across all caches. time_since_start shows the time, in seconds, since the Cache Manager started. time_since_reset shows the number of seconds since the Cache Manager statistics were last reset. number_of_entries shows the total number of entries currently in all caches from the Cache Manager. This statistic returns entries in the local cache instances only. off_heap_memory_used shows the amount, in bytes, of off-heap memory used by this cache container. data_memory_used shows the amount, in bytes, that the current eviction algorithm estimates is in use for data across all caches. Returns 0 if eviction is not enabled. misses shows the number of get() misses across all caches. remove_hits shows the number of removal hits across all caches. remove_misses shows the number of removal misses across all caches. evictions shows the number of evictions across all caches. average_read_time shows the average number of milliseconds taken for get() operations across all caches. average_read_time_nanos same as average_read_time but in nanoseconds. average_write_time shows the average number of milliseconds taken for put() operations across all caches. average_write_time_nanos same as average_write_time but in nanoseconds. average_remove_time shows the average number of milliseconds for remove() operations across all caches. average_remove_time_nanos same as average_remove_time but in nanoseconds. required_minimum_number_of_nodes shows the required minimum number of nodes to guarantee data consistency. hits provides the number of get() hits across all caches. stores provides the number of put() operations across all caches. current_number_of_entries_in_memory shows the total number of entries currently in all caches, excluding passivated entries. hit_ratio provides the total percentage hit/(hit+miss) ratio for all caches. retrievals shows the total number of get() operations. 2.4.9. Resetting Container Statistics Reset the statistics with POST requests. 2.4.10. Shutdown all container caches Shut down the Data Grid container on the server with POST requests. Data Grid responds with 204 (No Content) and then shuts down all caches in the container. The servers remain running with active endpoints and clustering; however, REST calls to container resources will result in a 503 Service Unavailable response. Note This method is primarily intended for use by the Data Grid Operator. The expectation is that the Server processes will be manually terminated shortly after this endpoint is invoked. Once this method has been called, it is not possible to restart the container. 2.4.11. Enabling rebalancing for all caches Turn on automatic rebalancing for all caches. 2.4.12. Disabling rebalancing for all caches Turn off automatic rebalancing for all caches. 2.4.13.
Backing Up Data Grid Create backup archives, application/zip , that contain resources (caches, cache templates, counters, Protobuf schemas, server tasks, and so on) currently stored in Data Grid. If a backup with the same name already exists, the service responds with 409 (Conflict) . If the directory parameter is not valid, the service returns 400 (Bad Request) . A 202 response indicates that the backup request is accepted for processing. Optionally include a JSON payload with your request that contains parameters for the backup operation, as follows: Table 2.26. JSON Parameters Key Required or Optional Value directory OPTIONAL Specifies a location on the server to create and store the backup archive. resources OPTIONAL Specifies the resources to back up, in JSON format. The default is to back up all resources. If you specify one or more resources, then Data Grid backs up only those resources. See the Resource Parameters table for more information. Table 2.27. Resource Parameters Key Required or Optional Value caches OPTIONAL Specifies either an array of cache names to back up or * for all caches. cache-configs OPTIONAL Specifies either an array of cache templates to back up or * for all templates. counters OPTIONAL Defines either an array of counter names to back up or * for all counters. proto-schemas OPTIONAL Defines either an array of Protobuf schema names to back up or * for all schemas. process OPTIONAL Specifies either an array of server tasks to back up or * for all tasks. The following example creates a backup archive with all counters and the caches named cache1 and cache2 in a specified directory: { "directory": "/path/accessible/to/the/server", "resources": { "caches": ["cache1", "cache2"], "counters": ["*"] } } 2.4.14. Listing Backups Retrieve the names of all backup operations that are in progress, completed, or failed. Data Grid responds with an array of all backup names as in the following example: ["backup1", "backup2"] 2.4.15. Checking Backup Availability Verify that a backup operation is complete. A 200 response indicates the backup archive is available. A 202 response indicates the backup operation is in progress. 2.4.16. Downloading Backup Archives Download backup archives from the server. A 200 response indicates the backup archive is available. A 202 response indicates the backup operation is in progress. 2.4.17. Deleting Backup Archives Remove backup archives from the server. A 204 response indicates that the backup archive is deleted. A 202 response indicates that the backup operation is in progress but will be deleted when the operation completes. 2.4.18. Restoring Data Grid Resources from Backup Archives Restore Data Grid resources from backup archives. The provided {restoreName} is for tracking restore progress, and is independent of the name of the backup file being restored. A 202 response indicates that the restore request has been accepted for processing. 2.4.18.1. Restoring from Backup Archives on Data Grid Server Use the application/json content type with your POST request to restore from an archive that is available on the server. Table 2.28. JSON Parameters Key Required or Optional Value location REQUIRED Specifies the path of the backup archive to restore. resources OPTIONAL Specifies the resources to restore, in JSON format. The default is to restore all resources. If you specify one or more resources, then Data Grid restores only those resources. See the Resource Parameters table for more information. Table 2.29.
Resource Parameters Key Required or Optional Value caches OPTIONAL Specifies either an array of cache names to back up or * for all caches. cache-configs OPTIONAL Specifies either an array of cache templates to back up or * for all templates. counters OPTIONAL Defines either an array of counter names to back up or * for all counters. proto-schemas OPTIONAL Defines either an array of Protobuf schema names to back up or * for all schemas. process OPTIONAL Specifies either an array of server tasks to back up or * for all tasks. The following example restores all counters from a backup archive on the server: { "location": "/path/accessible/to/the/server/backup-to-restore.zip", "resources": { "counters": ["*"] } } 2.4.18.2. Restoring from Local Backup Archives Use the multipart/form-data content type with your POST request to upload a local backup archive to the server. Table 2.30. Form Data Parameter Content-Type Required or Optional Value backup application/zip REQUIRED Specifies the bytes of the backup archive to restore. resources application/json , text/plain OPTIONAL Defines a JSON object of request parameters. Example Request 2.4.19. Listing Restores Retrieve the names of all restore requests that are in progress, completed, or failed. Data Grid responds with an Array of all restore names as in the following example: ["restore1", "restore2"] 2.4.20. Checking Restore Progress Verify that a restore operation is complete. A 201 (Created) response indicates the restore operation is completed. A 202 (Accepted) response indicates the backup operation is in progress. 2.4.21. Deleting Restore Metadata Remove metadata for restore requests from the server. This action removes all metadata associated with restore requests but does not delete any restored content. If you delete the request metadata, you can use the request name to perform subsequent restore operations. A 204 (No Content) response indicates that the restore metadata is deleted. A 202 (Accepted) response indicates that the restore operation is in progress and will be deleted when the operation completes. 2.4.22. Listening to container configuration events Receive events about configuration changes using Server-Sent Events . The event value will be one of create-cache , remove-cache , update-cache , create-template , remove-template or update-template . The data value will contain the declarative configuration of the entity that has been created. Remove events will only contain the name of the removed entity. Table 2.31. Headers Header Required or Optional Parameter Accept OPTIONAL Sets the required format to return content. Supported formats are application/yaml , application/json and application/xml . The default is application/yaml . See Accept for more information. Table 2.32. Request parameters Parameter Required or Optional Description includeCurrentState OPTIONAL If true , the results include the state of the existing configuration in addition to the changes. If set to false , the request returns only the changes. The default value is false . pretty OPTIONAL If true returns formatted content, including additional spacing and line separators which improve readability but increase payload size. The default is false . 2.4.23. Listening to container events Receive events from the container using Server-Sent Events . The emitted events come from logged information, so each event contains an identifier associated with the message. The event value will be lifecycle-event . 
The data has the logged information, which includes the message , category , level , timestamp , owner , context , and scope , some of which may be empty. Currently, we expose only LIFECYCLE events. Table 2.33. Headers Header Required or Optional Parameter Accept OPTIONAL Sets the required format to return content. Supported formats are application/yaml , application/json and application/xml . The default is application/yaml . See Accept for more information. Table 2.34. Request parameters Parameter Required or Optional Description includeCurrentState OPTIONAL If true , the results include the state of the existing configuration in addition to the changes. If set to false , the request returns only the changes. The default value is false . pretty OPTIONAL If true returns formatted content, including additional spacing and line separators which improve readability but increase payload size. The default is false . 2.4.24. Cross-Site Operations with Cache Managers Perform cross-site operations with Cache Managers to apply the operations to all caches. 2.4.24.1. Getting status of backup locations Retrieve the status of all backup locations with GET requests. Data Grid responds with status in JSON format, as in the following example: { "SFO-3":{ "status":"online" }, "NYC-2":{ "status":"mixed", "online":[ "CACHE_1" ], "offline":[ "CACHE_2" ], "mixed": [ "CACHE_3" ] } } Table 2.35. Returned status Value Description online All nodes in the local cluster have a cross-site view with the backup location. offline No nodes in the local cluster have a cross-site view with the backup location. mixed Some nodes in the local cluster have a cross-site view with the backup location, other nodes in the local cluster do not have a cross-site view. The response indicates status for each node. Returns the status for a single backup location. 2.4.24.2. Taking backup locations offline Take backup locations offline with the ?action=take-offline parameter. 2.4.24.3. Bringing backup locations online Bring backup locations online with the ?action=bring-online parameter. 2.4.24.4. Retrieving the state transfer mode Check the state transfer mode with GET requests. 2.4.24.5. Setting the state transfer mode Configure the state transfer mode with the ?action=set parameter. 2.4.24.6. Starting state transfer Push state of all caches to remote sites with the ?action=start-push-state parameter. 2.4.24.7. Canceling state transfer Cancel ongoing state transfer operations with the ?action=cancel-push-state parameter. 2.5. Working with Data Grid Servers Monitor and manage Data Grid server instances. 2.5.1. Retrieving Basic Server Information View basic information about Data Grid Servers with GET requests. Data Grid responds with the server name, codename, and version in JSON format as in the following example: { "version":"Infinispan 'Codename' xx.x.x.Final" } 2.5.2. Getting Cache Managers Retrieve lists of Cache Managers for Data Grid Servers with GET requests. Data Grid responds with an array of the Cache Manager names configured for the server. Note Data Grid currently supports one Cache Manager per server only. 2.5.3. Adding Caches to Ignore Lists Configure Data Grid to temporarily exclude specific caches from client requests. Send empty POST requests that include the names of the Cache Manager name and the cache. Data Grid responds with 204 (No Content) if the cache is successfully added to the ignore list or 404 (Not Found) if the cache or Cache Manager are not found. 
Note Data Grid currently supports one Cache Manager per server only. For future compatibility you must provide the Cache Manager name in the requests. 2.5.4. Removing Caches from Ignore Lists Remove caches from the ignore list with DELETE requests. Data Grid responds with 204 (No Content) if the cache is successfully removed from ignore list or 404 (Not Found) if the cache or Cache Manager are not found. 2.5.5. Confirming Ignored Caches Confirm that caches are ignored with GET requests. 2.5.6. Obtaining Server Configuration Retrieve Data Grid Server configurations with GET requests. Data Grid responds with the configuration in JSON format, as follows: { "server":{ "interfaces":{ "interface":{ "name":"public", "inet-address":{ "value":"127.0.0.1" } } }, "socket-bindings":{ "port-offset":0, "default-interface":"public", "socket-binding":[ { "name":"memcached", "port":11221, "interface":"memcached" } ] }, "security":{ "security-realms":{ "security-realm":{ "name":"default" } } }, "endpoints":{ "socket-binding":"default", "security-realm":"default", "hotrod-connector":{ "name":"hotrod" }, "rest-connector":{ "name":"rest" } } } } 2.5.7. Getting Environment Variables Retrieve all environment variables for Data Grid Servers with GET requests. 2.5.8. Getting JVM Memory Details Retrieve JVM memory usage information for Data Grid Servers with GET requests. Data Grid responds with heap and non-heap memory statistics, direct memory usage, and information about memory pools and garbage collection in JSON format. 2.5.9. Getting JVM Heap Dumps Generate JVM heap dumps for Data Grid Servers with POST requests. Data Grid generates a heap dump file in HPROF format in the server data directory and responds with the full path of the file in JSON format. 2.5.10. Getting JVM Thread Dumps Retrieve the current thread dump for the JVM with GET requests. Data Grid responds with the current thread dump in text/plain format. 2.5.11. Getting Diagnostic Reports for Data Grid Servers Retrieve aggregated reports for Data Grid Servers with GET requests. To retrieve the report for the requested server: To retrieve the report of another server in the cluster, reference the node by name: Data Grid responds with a tar.gz archive that contains an aggregated report with diagnostic information about both the Data Grid Server and the host. The report provides details about CPU, memory, open files, network sockets and routing, threads, in addition to configuration and log files. 2.5.12. Stopping Data Grid Servers Stop Data Grid Servers with POST requests. Data Grid responds with 204 (No Content) and then stops running. 2.5.13. Retrieving Client Connection Information List information about clients connected to Data Grid Servers with GET requests. 
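For example, a minimal curl sketch of this request, where the host, port, and credentials are placeholders for your own deployment:
curl --digest -u username:password "http://localhost:11222/rest/v2/server/connections"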
Data Grid responds with details about all active client connections in JSON format as in the following example: [ { "id": 2, "name": "flower", "created": "2023-05-18T14:54:37.882566188Z", "principal": "admin", "local-address": "/127.0.0.1:11222", "remote-address": "/127.0.0.1:58230", "protocol-version": "RESP3", "client-library": null, "client-version": null, "ssl-application-protocol": "http/1.1", "ssl-cipher-suite": "TLS_AES_256_GCM_SHA384", "ssl-protocol": "TLSv1.3" }, { "id": 0, "name": null, "created": "2023-05-18T14:54:07.727775875Z", "principal": "admin", "local-address": "/127.0.0.1:11222", "remote-address": "/127.0.0.1:35716", "protocol-version": "HTTP/1.1", "client-library": "Infinispan CLI 15.0.0-SNAPSHOT", "client-version": null, "ssl-application-protocol": "http/1.1", "ssl-cipher-suite": "TLS_AES_256_GCM_SHA384", "ssl-protocol": "TLSv1.3" } ] Table 2.36. Request Parameters Parameter Required or Optional Value global OPTIONAL true : will collect connections from all servers in the cluster 2.5.14. Retrieving Default values for Cache Configuration Retrieve default values for cache configuration with GET requests. Data Grid responds with the default values for cache configuration in JSON format. 2.6. Working with Data Grid Clusters Monitor and perform administrative tasks on Data Grid clusters. 2.6.1. Stopping Data Grid Clusters Shut down entire Data Grid clusters with POST requests. Data Grid responds with 204 (No Content) and then performs an orderly shutdown of the entire cluster. 2.6.2. Stopping Specific Data Grid Servers in Clusters Shut down one or more specific servers in Data Grid clusters with GET requests and the ?action=stop&server parameter. Data Grid responds with 204 (No Content) . 2.6.3. Backing Up Data Grid Clusters Create backup archives, application/zip , that contain resources (caches, templates, counters, Protobuf schemas, server tasks, and so on) currently stored in the cache container for the cluster. Optionally include a JSON payload with your request that contains parameters for the backup operation, as follows: Table 2.37. JSON Parameters Key Required or Optional Value directory OPTIONAL Specifies a location on the server to create and store the backup archive. If the backup operation successfully completes, the service returns 202 (Accepted) . If a backup with the same name already exists, the service returns 409 (Conflict) . If the directory parameter is not valid, the service returns 400 (Bad Request) . 2.6.4. Listing Backups Retrieve the names of all backup operations that are in progress, completed, or failed. Data Grid responds with an Array of all backup names as in the following example: ["backup1", "backup2"] 2.6.5. Checking Backup Availability Verify that a backup operation is complete. A 200 response indicates the backup archive is available. A 202 response indicates the backup operation is in progress. 2.6.6. Downloading Backup Archives Download backup archives from the server. A 200 response indicates the backup archive is available. A 202 response indicates the backup operation is in progress. 2.6.7. Deleting Backup Archives Remove backup archives from the server. A 204 response indicates that the backup archive is deleted. A 202 response indicates that the backup operation is in progress but will be deleted when the operation completes. 2.6.8. Restoring Data Grid Cluster Resources Apply resources in a backup archive to restore Data Grid clusters. 
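For example, a minimal curl sketch of a restore request, assuming the backup archive is already available on the server; the host, credentials, and restore name are placeholders:
curl --digest -u username:password -X POST -H "Content-Type: application/json" -d '{"location":"/path/accessible/to/the/server/backup-to-restore.zip"}' "http://localhost:11222/rest/v2/cluster/restores/myRestore"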
The provided {restoreName} is for tracking restore progress, and is independent of the name of the backup file being restored. Important You can restore resources only if the container name in the backup archive matches the container name for the cluster. A 202 response indicates that the restore request is accepted for processing. 2.6.8.1. Restoring from Backup Archives on Data Grid Server Use the application/json content type with your POST request to restore from an archive that is available on the server. Table 2.38. JSON Parameters Key Required or Optional Value location REQUIRED Specifies the path of the backup archive to restore. resources OPTIONAL Specifies the resources to restore, in JSON format. The default is to restore all resources. If you specify one or more resources, then Data Grid restores only those resources. See the Resource Parameters table for more information. Table 2.39. Resource Parameters Key Required or Optional Value caches OPTIONAL Specifies either an array of cache names to restore or * for all caches. cache-configs OPTIONAL Specifies either an array of cache templates to restore or * for all templates. counters OPTIONAL Defines either an array of counter names to restore or * for all counters. proto-schemas OPTIONAL Defines either an array of Protobuf schema names to restore or * for all schemas. process OPTIONAL Specifies either an array of server tasks to restore or * for all tasks. The following example restores all counters from a backup archive on the server: { "location": "/path/accessible/to/the/server/backup-to-restore.zip", "resources": { "counters": ["*"] } } 2.6.8.2. Restoring from Local Backup Archives Use the multipart/form-data content type with your POST request to upload a local backup archive to the server. Table 2.40. Form Data Parameter Content-Type Required or Optional Value backup application/zip REQUIRED Specifies the bytes of the backup archive to restore. Example Request 2.6.9. Listing Restores Retrieve the names of all restore requests that are in progress, completed, or failed. Data Grid responds with an Array of all restore names as in the following example: ["restore1", "restore2"] 2.6.10. Checking Restore Progress Verify that a restore operation is complete. A 201 (Created) response indicates the restore operation is completed. A 202 response indicates the restore operation is in progress. 2.6.11. Deleting Restore Metadata Remove metadata for restore requests from the server. This action removes all metadata associated with restore requests but does not delete any restored content. If you delete the request metadata, you can use the request name to perform subsequent restore operations. A 204 response indicates that the restore metadata is deleted. A 202 response indicates that the restore operation is in progress and will be deleted when the operation completes. 2.6.12. Checking Cluster Distribution Retrieve the distribution details about all servers in the Data Grid cluster. Returns a JSON array of statistics for each Data Grid server in the cluster with the following format: [ { "node_name": "NodeA", "node_addresses": [ "127.0.0.1:39313" ], "memory_available": 466180016, "memory_used": 56010832 }, { "node_name": "NodeB", "node_addresses": [ "127.0.0.1:47477" ], "memory_available": 467548568, "memory_used": 54642280 } ] Each element in the array represents a Data Grid node. If statistics collection is disabled, the memory usage values are -1. The properties are: node_name is the node name.
node_addresses is a list with all the node's physical addresses. memory_available the node available memory in bytes. memory_used the node used memory in bytes. 2.7. Data Grid Server logging configuration View and modify the logging configuration on Data Grid clusters at runtime. 2.7.1. Listing the logging appenders View a list of all configured appenders with GET requests. Data Grid responds with a list of appenders in JSON format as in the following example: { "STDOUT" : { "name" : "STDOUT" }, "JSON-FILE" : { "name" : "JSON-FILE" }, "HR-ACCESS-FILE" : { "name" : "HR-ACCESS-FILE" }, "FILE" : { "name" : "FILE" }, "REST-ACCESS-FILE" : { "name" : "REST-ACCESS-FILE" } } 2.7.2. Listing the loggers View a list of all configured loggers with GET requests. Data Grid responds with a list of loggers in JSON format as in the following example: [ { "name" : "", "level" : "INFO", "appenders" : [ "STDOUT", "FILE" ] }, { "name" : "org.infinispan.HOTROD_ACCESS_LOG", "level" : "INFO", "appenders" : [ "HR-ACCESS-FILE" ] }, { "name" : "com.arjuna", "level" : "WARN", "appenders" : [ ] }, { "name" : "org.infinispan.REST_ACCESS_LOG", "level" : "INFO", "appenders" : [ "REST-ACCESS-FILE" ] } ] 2.7.3. Creating/modifying a logger Create a new logger or modify an existing one with PUT requests. Data Grid sets the level of the logger identified by {loggerName} to {level} . Optionally, it is possible to set one or more appenders for the logger. If no appenders are specified, those specified in the root logger will be used. If the operation successfully completes, the service returns 204 (No Content) . 2.7.4. Removing a logger Remove an existing logger with DELETE requests. Data Grid removes the logger identified by {loggerName} , effectively reverting to the use of the root logger configuration. If operation processed successfully, the service returns a response code 204 (No Content) . 2.8. Using Server Tasks Retrieve, execute, and upload Data Grid server tasks. 2.8.1. Retrieving Server Tasks Information View information about available server tasks with GET requests. Table 2.41. Request Parameters Parameter Required or Optional Value type OPTIONAL user : will exclude internal (admin) tasks from the results Data Grid responds with a list of available tasks. The list includes the names of tasks, the engines that handle tasks, the named parameters for tasks, the execution modes of tasks, either ONE_NODE or ALL_NODES , and the allowed security role in JSON format, as in the following example: [ { "name": "SimpleTask", "type": "TaskEngine", "parameters": [ "p1", "p2" ], "execution_mode": "ONE_NODE", "allowed_role": null }, { "name": "RunOnAllNodesTask", "type": "TaskEngine", "parameters": [ "p1" ], "execution_mode": "ALL_NODES", "allowed_role": null }, { "name": "SecurityAwareTask", "type": "TaskEngine", "parameters": [], "execution_mode": "ONE_NODE", "allowed_role": "MyRole" } ] 2.8.2. Executing Tasks Execute tasks with POST requests that include the task name, an optional cache name and required parameters prefixed with param . Data Grid responds with the task result. 2.8.3. Uploading Script Tasks Upload script tasks with PUT or POST requests. Supply the script as the content payload of the request. After Data Grid uploads the script, you can execute it with GET requests. 2.8.4. Downloading Script Tasks Download script tasks with GET requests. 2.9. Working with Data Grid Security View and modify security information. 2.9.1. 
Retrieving the ACL of a user View information about the user's principals and access-control list. Data Grid responds with information about the user who has performed the request. The list includes the principals of the user, and a list of resources and the permissions that user has when accessing them. { "subject": [ { "name": "deployer", "type": "NamePrincipal" } ], "global": [ "READ", "WRITE", "EXEC", "LISTEN", "BULK_READ", "BULK_WRITE", "CREATE", "MONITOR", "ALL_READ", "ALL_WRITE" ], "caches": { "___protobuf_metadata": [ "READ", "WRITE", "EXEC", "LISTEN", "BULK_READ", "BULK_WRITE", "CREATE", "MONITOR", "ALL_READ", "ALL_WRITE" ], "mycache": [ "LIFECYCLE", "READ", "WRITE", "EXEC", "LISTEN", "BULK_READ", "BULK_WRITE", "ADMIN", "CREATE", "MONITOR", "ALL_READ", "ALL_WRITE" ], "___script_cache": [ "READ", "WRITE", "EXEC", "LISTEN", "BULK_READ", "BULK_WRITE", "CREATE", "MONITOR", "ALL_READ", "ALL_WRITE" ] } } 2.9.2. Flushing the security caches Flush the security caches across the cluster. 2.9.3. Retrieving the available roles View all the available roles defined in the server. Data Grid responds with a list of available roles. If authorization is enabled, only a user with the ADMIN permission can call this API. ["observer","application","admin","monitor","deployer"] 2.9.4. Retrieving the available roles detailed View all the available roles defined in the server with their full detail. Data Grid responds with a list of available roles and their detail. If authorization is enabled, only a user with the ADMIN permission can call this API. { "observer": { "inheritable": true, "permissions": [ "MONITOR", "ALL_READ" ], "implicit": true, "description": "..." }, "application": { "inheritable": true, "permissions": [ "MONITOR", "ALL_WRITE", "EXEC", "LISTEN", "ALL_READ" ], "implicit": true, "description": "..." }, "admin": { "inheritable": true, "permissions": [ "ALL" ], "implicit": true, "description": "..." }, "monitor": { "inheritable": true, "permissions": [ "MONITOR" ], "implicit": true, "description": "..." }, "deployer": { "inheritable": true, "permissions": [ "CREATE", "MONITOR", "ALL_WRITE", "EXEC", "LISTEN", "ALL_READ" ], "implicit": true, "description": "..." } } 2.9.5. Retrieving the roles for a principal View all the roles which map to a principal. Data Grid responds with a list of available roles for the specified principal. The principal need not exist in the realm in use. ["observer"] 2.9.6. Granting roles to a principal Grant one or more new roles to a principal. Table 2.42. Request Parameters Parameter Required or Optional Value role REQUIRED The name of a role 2.9.7. Denying roles to a principal Remove one or more roles that were previously granted to a principal. Table 2.43. Request Parameters Parameter Required or Optional Value role REQUIRED The name of a role 2.9.8. Listing principals List the principal names for all security realms that can enumerate users ( properties , ldap ) Data Grid responds with a list of principals keyed to each realm {"default:properties":["admin","user1","user2"]} 2.9.9. Creating roles Create a role by defining a name, its permissions and an optional description in the request body. Table 2.44. Request Parameters Parameter Required or Optional Value permission REQUIRED The name of a permission 2.9.10. Updating roles Update an existing role permissions and/or description. Table 2.45. Request Parameters Parameter Required or Optional Value permission REQUIRED The name of a permission 2.9.11. Deleting roles Delete an existing role. 2.9.12. 
Retrieving the permissions for a role View all the permissions of a role. Data Grid responds with the details of the specified role, including its permissions, as in the following example: { "name" : "application", "permissions" : [ "LISTEN","ALL_READ","MONITOR","ALL_WRITE","EXEC" ], "inheritable": true, "implicit": true, "description": "..." } 2.10. Enabling Tracing Propagation Tracing with Data Grid Server and REST API lets you monitor and analyze the flow of requests and track the execution path across different components. 2.10.1. Enabling tracing propagation between Data Grid Server and REST API When you enable tracing propagation between the Data Grid Server and REST API, you must configure tracing on both the client side and the server side. To propagate the OpenTelemetry tracing spans to the Data Grid spans, you must set the trace context on each REST invocation. Prerequisite Have tracing enabled on both the Data Grid Server and the remote client side. Procedure Extract the current tracing context using the io.opentelemetry.api.trace.propagation.W3CTraceContextPropagator . The extraction produces a context map that stores trace context information. Pass the context map in the header of the REST call to ensure that the trace context is preserved. HashMap<String, String> contextMap = new HashMap<>(); // Inject the request with the *current* Context, which contains our current Span. W3CTraceContextPropagator.getInstance().inject(Context.current(), contextMap, (carrier, key, value) -> carrier.put(key, value)); // Pass the context map in the header RestCacheClient client = restClient.cache(CACHE_NAME); client.put("aaa", MediaType.TEXT_PLAIN.toString(),RestEntity.create(MediaType.TEXT_PLAIN, "bbb"), contextMap); The tracing spans that the client application generates are correlated with the dependent spans generated by the Data Grid Server. Additional resources Enabling Data Grid tracing Hot Rod client tracing propagation
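For clients that call the REST API directly rather than through the Java client, the same idea can be sketched with curl by forwarding the W3C trace context header that the propagator writes into the context map. This is a sketch only: the traceparent value, cache name, key, and credentials are placeholders, and it assumes your tracer supplies a valid trace context.
curl --digest -u username:password -X PUT -H "Content-Type: text/plain" -H "traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01" --data "bbb" "http://localhost:11222/rest/v2/caches/mycache/aaa"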
[ "POST /rest/v2/caches/{cacheName}", "<distributed-cache owners=\"2\" segments=\"256\" capacity-factor=\"1.0\" l1-lifespan=\"5000\" mode=\"SYNC\" statistics=\"true\"> <encoding media-type=\"application/x-protostream\"/> <locking isolation=\"REPEATABLE_READ\"/> <transaction mode=\"FULL_XA\" locking=\"OPTIMISTIC\"/> <expiration lifespan=\"5000\" max-idle=\"1000\" /> <memory max-count=\"1000000\" when-full=\"REMOVE\"/> <indexing enabled=\"true\" storage=\"local-heap\"> <index-reader refresh-interval=\"1000\"/> <indexed-entities> <indexed-entity>org.infinispan.Person</indexed-entity> </indexed-entities> </indexing> <partition-handling when-split=\"ALLOW_READ_WRITES\" merge-policy=\"PREFERRED_NON_NULL\"/> <persistence passivation=\"false\"> <!-- Persistent storage configuration. --> </persistence> </distributed-cache>", "{ \"distributed-cache\": { \"mode\": \"SYNC\", \"owners\": \"2\", \"segments\": \"256\", \"capacity-factor\": \"1.0\", \"l1-lifespan\": \"5000\", \"statistics\": \"true\", \"encoding\": { \"media-type\": \"application/x-protostream\" }, \"locking\": { \"isolation\": \"REPEATABLE_READ\" }, \"transaction\": { \"mode\": \"FULL_XA\", \"locking\": \"OPTIMISTIC\" }, \"expiration\" : { \"lifespan\" : \"5000\", \"max-idle\" : \"1000\" }, \"memory\": { \"max-count\": \"1000000\", \"when-full\": \"REMOVE\" }, \"indexing\" : { \"enabled\" : true, \"storage\" : \"local-heap\", \"index-reader\" : { \"refresh-interval\" : \"1000\" }, \"indexed-entities\": [ \"org.infinispan.Person\" ] }, \"partition-handling\" : { \"when-split\" : \"ALLOW_READ_WRITES\", \"merge-policy\" : \"PREFERRED_NON_NULL\" }, \"persistence\" : { \"passivation\" : false } } }", "distributedCache: mode: \"SYNC\" owners: \"2\" segments: \"256\" capacityFactor: \"1.0\" l1Lifespan: \"5000\" statistics: \"true\" encoding: mediaType: \"application/x-protostream\" locking: isolation: \"REPEATABLE_READ\" transaction: mode: \"FULL_XA\" locking: \"OPTIMISTIC\" expiration: lifespan: \"5000\" maxIdle: \"1000\" memory: maxCount: \"1000000\" whenFull: \"REMOVE\" indexing: enabled: \"true\" storage: \"local-heap\" indexReader: refreshInterval: \"1000\" indexedEntities: - \"org.infinispan.Person\" partitionHandling: whenSplit: \"ALLOW_READ_WRITES\" mergePolicy: \"PREFERRED_NON_NULL\" persistence: passivation: \"false\" # Persistent storage configuration.", "<replicated-cache segments=\"256\" mode=\"SYNC\" statistics=\"true\"> <encoding media-type=\"application/x-protostream\"/> <locking isolation=\"REPEATABLE_READ\"/> <transaction mode=\"FULL_XA\" locking=\"OPTIMISTIC\"/> <expiration lifespan=\"5000\" max-idle=\"1000\" /> <memory max-count=\"1000000\" when-full=\"REMOVE\"/> <indexing enabled=\"true\" storage=\"local-heap\"> <index-reader refresh-interval=\"1000\"/> <indexed-entities> <indexed-entity>org.infinispan.Person</indexed-entity> </indexed-entities> </indexing> <partition-handling when-split=\"ALLOW_READ_WRITES\" merge-policy=\"PREFERRED_NON_NULL\"/> <persistence passivation=\"false\"> <!-- Persistent storage configuration. 
--> </persistence> </replicated-cache>", "{ \"replicated-cache\": { \"mode\": \"SYNC\", \"segments\": \"256\", \"statistics\": \"true\", \"encoding\": { \"media-type\": \"application/x-protostream\" }, \"locking\": { \"isolation\": \"REPEATABLE_READ\" }, \"transaction\": { \"mode\": \"FULL_XA\", \"locking\": \"OPTIMISTIC\" }, \"expiration\" : { \"lifespan\" : \"5000\", \"max-idle\" : \"1000\" }, \"memory\": { \"max-count\": \"1000000\", \"when-full\": \"REMOVE\" }, \"indexing\" : { \"enabled\" : true, \"storage\" : \"local-heap\", \"index-reader\" : { \"refresh-interval\" : \"1000\" }, \"indexed-entities\": [ \"org.infinispan.Person\" ] }, \"partition-handling\" : { \"when-split\" : \"ALLOW_READ_WRITES\", \"merge-policy\" : \"PREFERRED_NON_NULL\" }, \"persistence\" : { \"passivation\" : false } } }", "replicatedCache: mode: \"SYNC\" segments: \"256\" statistics: \"true\" encoding: mediaType: \"application/x-protostream\" locking: isolation: \"REPEATABLE_READ\" transaction: mode: \"FULL_XA\" locking: \"OPTIMISTIC\" expiration: lifespan: \"5000\" maxIdle: \"1000\" memory: maxCount: \"1000000\" whenFull: \"REMOVE\" indexing: enabled: \"true\" storage: \"local-heap\" indexReader: refreshInterval: \"1000\" indexedEntities: - \"org.infinispan.Person\" partitionHandling: whenSplit: \"ALLOW_READ_WRITES\" mergePolicy: \"PREFERRED_NON_NULL\" persistence: passivation: \"false\" # Persistent storage configuration.", "<infinispan xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:infinispan:config:15.0 https://infinispan.org/schemas/infinispan-config-15.0.xsd urn:infinispan:server:15.0 https://infinispan.org/schemas/infinispan-server-15.0.xsd\" xmlns=\"urn:infinispan:config:15.0\" xmlns:server=\"urn:infinispan:server:15.0\"> <cache-container name=\"default\" statistics=\"true\"> <distributed-cache name=\"mycacheone\" mode=\"ASYNC\" statistics=\"true\"> <encoding media-type=\"application/x-protostream\"/> <expiration lifespan=\"300000\"/> <memory max-size=\"400MB\" when-full=\"REMOVE\"/> </distributed-cache> <distributed-cache name=\"mycachetwo\" mode=\"SYNC\" statistics=\"true\"> <encoding media-type=\"application/x-protostream\"/> <expiration lifespan=\"300000\"/> <memory max-size=\"400MB\" when-full=\"REMOVE\"/> </distributed-cache> </cache-container> </infinispan>", "{ \"infinispan\" : { \"cache-container\" : { \"name\" : \"default\", \"statistics\" : \"true\", \"caches\" : { \"mycacheone\" : { \"distributed-cache\" : { \"mode\": \"ASYNC\", \"statistics\": \"true\", \"encoding\": { \"media-type\": \"application/x-protostream\" }, \"expiration\" : { \"lifespan\" : \"300000\" }, \"memory\": { \"max-size\": \"400MB\", \"when-full\": \"REMOVE\" } } }, \"mycachetwo\" : { \"distributed-cache\" : { \"mode\": \"SYNC\", \"statistics\": \"true\", \"encoding\": { \"media-type\": \"application/x-protostream\" }, \"expiration\" : { \"lifespan\" : \"300000\" }, \"memory\": { \"max-size\": \"400MB\", \"when-full\": \"REMOVE\" } } } } } } }", "infinispan: cacheContainer: name: \"default\" statistics: \"true\" caches: mycacheone: distributedCache: mode: \"ASYNC\" statistics: \"true\" encoding: mediaType: \"application/x-protostream\" expiration: lifespan: \"300000\" memory: maxSize: \"400MB\" whenFull: \"REMOVE\" mycachetwo: distributedCache: mode: \"SYNC\" statistics: \"true\" encoding: mediaType: \"application/x-protostream\" expiration: lifespan: \"300000\" memory: maxSize: \"400MB\" whenFull: \"REMOVE\"", "PUT /rest/v2/caches/{cacheName}", "HEAD /rest/v2/caches/{cacheName}", "GET 
/rest/v2/caches/{cacheName}?action=health", "POST /rest/v2/caches/{cacheName}?template={templateName}", "GET /rest/v2/caches/{name}?action=config", "POST /rest/v2/caches?action=convert", "curl localhost:11222/rest/v2/caches?action=convert --digest -u username:password -X POST -H \"Accept: application/yaml\" -H \"Content-Type: application/xml\" -d '<replicated-cache mode=\"SYNC\" statistics=\"false\"><encoding media-type=\"application/x-protostream\"/><expiration lifespan=\"300000\" /><memory max-size=\"400MB\" when-full=\"REMOVE\"/></replicated-cache>'", "POST /rest/v2/caches?action=compare", "GET /rest/v2/caches/{name}?action=stats", "{ \"stats\": { \"time_since_start\": -1, \"time_since_reset\": -1, \"hits\": -1, \"current_number_of_entries\": -1, \"current_number_of_entries_in_memory\": -1, \"stores\": -1, \"off_heap_memory_used\": -1, \"data_memory_used\": -1, \"retrievals\": -1, \"misses\": -1, \"remove_hits\": -1, \"remove_misses\": -1, \"evictions\": -1, \"average_read_time\": -1, \"average_read_time_nanos\": -1, \"average_write_time\": -1, \"average_write_time_nanos\": -1, \"average_remove_time\": -1, \"average_remove_time_nanos\": -1, \"required_minimum_number_of_nodes\": -1 }, \"size\": 0, \"configuration\": { \"distributed-cache\": { \"mode\": \"SYNC\", \"transaction\": { \"stop-timeout\": 0, \"mode\": \"NONE\" } } }, \"rehash_in_progress\": false, \"rebalancing_enabled\": true, \"bounded\": false, \"indexed\": false, \"persistent\": false, \"transactional\": false, \"secured\": false, \"has_remote_backup\": false, \"indexing_in_progress\": false, \"statistics\": false, \"mode\" : \"DIST_SYNC\", \"storage_type\": \"HEAP\", \"max_size\": \"\", \"max_size_bytes\" : -1 }", "POST /rest/v2/caches/{name}?action=stats-reset", "GET /rest/v2/caches/{name}?action=distribution", "[ { \"node_name\": \"NodeA\", \"node_addresses\": [ \"127.0.0.1:44175\" ], \"memory_entries\": 0, \"total_entries\": 0, \"memory_used\": 528512 }, { \"node_name\":\"NodeB\", \"node_addresses\": [ \"127.0.0.1:44187\" ], \"memory_entries\": 0, \"total_entries\": 0, \"memory_used\": 528512 } ]", "GET /rest/v2/caches/{name}?action=get-mutable-attributes", "[ \"jmx-statistics.statistics\", \"locking.acquire-timeout\", \"transaction.single-phase-auto-commit\", \"expiration.max-idle\", \"transaction.stop-timeout\", \"clustering.remote-timeout\", \"expiration.lifespan\", \"expiration.interval\", \"memory.max-count\", \"memory.max-size\" ]", "GET /rest/v2/caches/mycache?action=get-mutable-attributes&full=true", "{ \"jmx-statistics.statistics\": { \"value\": true, \"type\": \"boolean\" }, \"locking.acquire-timeout\": { \"value\": 15000, \"type\": \"long\" }, \"transaction.single-phase-auto-commit\": { \"value\": false, \"type\": \"boolean\" }, \"expiration.max-idle\": { \"value\": -1, \"type\": \"long\" }, \"transaction.stop-timeout\": { \"value\": 30000, \"type\": \"long\" }, \"clustering.remote-timeout\": { \"value\": 17500, \"type\": \"long\" }, \"expiration.lifespan\": { \"value\": -1, \"type\": \"long\" }, \"expiration.interval\": { \"value\": 60000, \"type\": \"long\" }, \"memory.max-count\": { \"value\": -1, \"type\": \"long\" }, \"memory.max-size\": { \"value\": null, \"type\": \"string\" } }", "POST /rest/v2/caches/{name}?action=set-mutable-attributes&attribute-name={attributeName}&attribute-value={attributeValue}", "POST /rest/v2/caches/{cacheName}/{cacheKey}", "PUT /rest/v2/caches/{cacheName}/{cacheKey}", "GET /rest/v2/caches/{cacheName}/{cacheKey}", "GET /rest/v2/caches/{cacheName}/{cacheKey}?extended", "HEAD 
/rest/v2/caches/{cacheName}/{cacheKey}", "DELETE /rest/v2/caches/{cacheName}/{cacheKey}", "GET /rest/v2/caches/{cacheName}/{cacheKey}?action=distribution", "{ \"contains_key\": true, \"owners\": [ { \"node_name\": \"NodeA\", \"primary\": true, \"node_addresses\": [ \"127.0.0.1:39492\" ] }, { \"node_name\": \"NodeB\", \"primary\": false, \"node_addresses\": [ \"127.0.0.1:38195\" ] } ] }", "DELETE /rest/v2/caches/{cacheName}", "GET /rest/v2/caches/{cacheName}?action=keys", "GET /rest/v2/caches/{cacheName}?action=entries", "[ { \"key\": 1, \"value\": \"value1\", \"timeToLiveSeconds\": -1, \"maxIdleTimeSeconds\": -1, \"created\": -1, \"lastUsed\": -1, \"expireTime\": -1 }, { \"key\": 2, \"value\": \"value2\", \"timeToLiveSeconds\": 10, \"maxIdleTimeSeconds\": 45, \"created\": 1607966017944, \"lastUsed\": 1607966017944, \"expireTime\": 1607966027944, \"version\": 7 }, { \"key\": 3, \"value\": \"value2\", \"timeToLiveSeconds\": 10, \"maxIdleTimeSeconds\": 45, \"created\": 1607966017944, \"lastUsed\": 1607966017944, \"expireTime\": 1607966027944, \"version\": 7, \"topologyId\": 9 } ]", "POST /rest/v2/caches/{cacheName}?action=clear", "GET /rest/v2/caches/{cacheName}?action=size", "GET /rest/v2/caches/{cacheName}?action=stats", "GET /rest/v2/caches/", "GET /rest/v2/caches?action=detailed", "[ { \"status\" : \"RUNNING\", \"name\" : \"cache1\", \"type\" : \"local-cache\", \"simple_cache\" : false, \"transactional\" : false, \"persistent\" : false, \"bounded\": false, \"secured\": false, \"indexed\": true, \"has_remote_backup\": true, \"health\":\"HEALTHY\", \"rebalancing_enabled\": true }, { \"status\" : \"RUNNING\", \"name\" : \"cache2\", \"type\" : \"distributed-cache\", \"simple_cache\" : false, \"transactional\" : true, \"persistent\" : false, \"bounded\": false, \"secured\": false, \"indexed\": true, \"has_remote_backup\": true, \"health\":\"HEALTHY\", \"rebalancing_enabled\": false }]", "GET /rest/v2/caches?action=role-accessible&role=observer", "{ \"secured\" : [\"securedCache1\", \"securedCache2\"], \"non-secured\" : [\"cache1\", \"cache2\", \"cache3\"] }", "GET /rest/v2/caches/{name}?action=listen", "POST /rest/v2/caches/{cacheName}?action=enable-rebalancing", "POST /rest/v2/caches/{cacheName}?action=disable-rebalancing", "GET /rest/v2/caches/{cacheName}?action=get-availability", "POST /rest/v2/caches/{cacheName}?action=set-availability&availability={AVAILABILITY}", "POST /rest/v2/caches/{cacheName}?action=initialize&force={FORCE}", "GET /rest/v2/caches/{cacheName}?action=search&query={ickle query}", "{ \"hit_count\" : 150, \"hit_count_exact\" : true, \"hits\" : [ { \"hit\" : { \"name\" : \"user1\", \"age\" : 35 } }, { \"hit\" : { \"name\" : \"user2\", \"age\" : 42 } }, { \"hit\" : { \"name\" : \"user3\", \"age\" : 12 } } ] }", "POST /rest/v2/caches/{cacheName}?action=search", "{ \"query\":\"from Entity where name:\\\"user1\\\"\", \"max_results\":20, \"offset\":10 }", "POST /rest/v2/caches/{cacheName}/search/indexes?action=reindex", "POST /rest/v2/caches/{cacheName}/search/indexes?action=updateSchema", "POST /rest/v2/caches/{cacheName}/search/indexes?action=clear", "GET /rest/v2/caches/{cacheName}/search/indexes/metamodel", "[{ \"entity-name\": \"org.infinispan.query.test.Book\", \"java-class\": \"org.infinispan.query.test.Book\", \"index-name\": \"org.infinispan.query.test.Book\", \"value-fields\": { \"description\": { \"multi-valued\": false, \"multi-valued-in-root\": false, \"type\": \"java.lang.String\", \"projection-type\": \"java.lang.String\", \"argument-type\": \"java.lang.String\", 
\"searchable\": true, \"sortable\": false, \"projectable\": false, \"aggregable\": false, \"analyzer\": \"standard\" }, \"name\": { \"multi-valued\": false, \"multi-valued-in-root\": true, \"type\": \"java.lang.String\", \"projection-type\": \"java.lang.String\", \"argument-type\": \"java.lang.String\", \"searchable\": true, \"sortable\": false, \"projectable\": false, \"aggregable\": false, \"analyzer\": \"standard\" }, \"surname\": { \"multi-valued\": false, \"multi-valued-in-root\": true, \"type\": \"java.lang.String\", \"projection-type\": \"java.lang.String\", \"argument-type\": \"java.lang.String\", \"searchable\": true, \"sortable\": false, \"projectable\": false, \"aggregable\": false }, \"title\": { \"multi-valued\": false, \"multi-valued-in-root\": false, \"type\": \"java.lang.String\", \"projection-type\": \"java.lang.String\", \"argument-type\": \"java.lang.String\", \"searchable\": true, \"sortable\": false, \"projectable\": false, \"aggregable\": false } }, \"object-fields\": { \"authors\": { \"multi-valued\": true, \"multi-valued-in-root\": true, \"nested\": true, \"value-fields\": { \"name\": { \"multi-valued\": false, \"multi-valued-in-root\": true, \"type\": \"java.lang.String\", \"projection-type\": \"java.lang.String\", \"argument-type\": \"java.lang.String\", \"searchable\": true, \"sortable\": false, \"projectable\": false, \"aggregable\": false, \"analyzer\": \"standard\" }, \"surname\": { \"multi-valued\": false, \"multi-valued-in-root\": true, \"type\": \"java.lang.String\", \"projection-type\": \"java.lang.String\", \"argument-type\": \"java.lang.String\", \"searchable\": true, \"sortable\": false, \"projectable\": false, \"aggregable\": false } } } } }, { \"entity-name\": \"org.infinispan.query.test.Author\", \"java-class\": \"org.infinispan.query.test.Author\", \"index-name\": \"org.infinispan.query.test.Author\", \"value-fields\": { \"surname\": { \"multi-valued\": false, \"multi-valued-in-root\": false, \"type\": \"java.lang.String\", \"projection-type\": \"java.lang.String\", \"argument-type\": \"java.lang.String\", \"searchable\": true, \"sortable\": false, \"projectable\": false, \"aggregable\": false }, \"name\": { \"multi-valued\": false, \"multi-valued-in-root\": false, \"type\": \"java.lang.String\", \"projection-type\": \"java.lang.String\", \"argument-type\": \"java.lang.String\", \"searchable\": true, \"sortable\": false, \"projectable\": false, \"aggregable\": false, \"analyzer\": \"standard\" } } }]", "GET /rest/v2/caches/{cacheName}/search/stats", "{ \"query\": { \"indexed_local\": { \"count\": 1, \"average\": 12344.2, \"max\": 122324, \"slowest\": \"FROM Entity WHERE field > 4\" }, \"indexed_distributed\": { \"count\": 0, \"average\": 0.0, \"max\": -1, \"slowest\": \"FROM Entity WHERE field > 4\" }, \"hybrid\": { \"count\": 0, \"average\": 0.0, \"max\": -1, \"slowest\": \"FROM Entity WHERE field > 4 AND desc = 'value'\" }, \"non_indexed\": { \"count\": 0, \"average\": 0.0, \"max\": -1, \"slowest\": \"FROM Entity WHERE desc = 'value'\" }, \"entity_load\": { \"count\": 123, \"average\": 10.0, \"max\": 120 } }, \"index\": { \"types\": { \"org.infinispan.same.test.Entity\": { \"count\": 5660001, \"size\": 0 }, \"org.infinispan.same.test.AnotherEntity\": { \"count\": 40, \"size\": 345560 } }, \"reindexing\": false } }", "POST /rest/v2/caches/{cacheName}/search/stats?action=clear", "GET /rest/v2/caches/{cacheName}/search/indexes/stats", "{ \"indexed_class_names\": [\"org.infinispan.sample.User\"], \"indexed_entities_count\": { 
\"org.infinispan.sample.User\": 4 }, \"index_sizes\": { \"cacheName_protobuf\": 14551 }, \"reindexing\": false }", "GET /rest/v2/caches/{cacheName}/search/query/stats", "{ \"search_query_execution_count\":20, \"search_query_total_time\":5, \"search_query_execution_max_time\":154, \"search_query_execution_avg_time\":2, \"object_loading_total_time\":1, \"object_loading_execution_max_time\":1, \"object_loading_execution_avg_time\":1, \"objects_loaded_count\":20, \"search_query_execution_max_time_query_string\": \"FROM entity\" }", "POST /rest/v2/caches/{cacheName}/search/query/stats?action=clear", "GET /rest/v2/caches/{cacheName}/x-site/backups/", "{ \"NYC\": { \"status\": \"online\" }, \"LON\": { \"status\": \"mixed\", \"online\": [ \"NodeA\" ], \"offline\": [ \"NodeB\" ] } }", "GET /rest/v2/caches/{cacheName}/x-site/backups/{siteName}", "{ \"NodeA\":\"offline\", \"NodeB\":\"online\" }", "POST /rest/v2/caches/{cacheName}/x-site/backups/{siteName}?action=take-offline", "POST /rest/v2/caches/{cacheName}/x-site/backups/{siteName}?action=bring-online", "POST /rest/v2/caches/{cacheName}/x-site/backups/{siteName}?action=start-push-state", "POST /rest/v2/caches/{cacheName}/x-site/backups/{siteName}?action=cancel-push-state", "GET /rest/v2/caches/{cacheName}/x-site/backups?action=push-state-status", "{ \"NYC\":\"CANCELED\", \"LON\":\"OK\" }", "POST /rest/v2/caches/{cacheName}/x-site/local?action=clear-push-state-status", "GET /rest/v2/caches/{cacheName}/x-site/backups/{siteName}/take-offline-config", "{ \"after_failures\": 2, \"min_wait\": 1000 }", "PUT /rest/v2/caches/{cacheName}/x-site/backups/{siteName}/take-offline-config", "POST /rest/v2/caches/{cacheName}/x-site/backups/{siteName}?action=cancel-receive-state", "POST /rest/v2/caches/{cacheName}/rolling-upgrade/source-connection", "{ \"remote-store\": { \"cache\": \"my-cache\", \"shared\": true, \"raw-values\": true, \"socket-timeout\": 60000, \"protocol-version\": \"2.9\", \"remote-server\": [ { \"host\": \"127.0.0.2\", \"port\": 12222 } ], \"connection-pool\": { \"max-active\": 110, \"exhausted-action\": \"CREATE_NEW\" }, \"async-executor\": { \"properties\": { \"name\": 4 } }, \"security\": { \"authentication\": { \"server-name\": \"servername\", \"digest\": { \"username\": \"username\", \"password\": \"password\", \"realm\": \"realm\", \"sasl-mechanism\": \"DIGEST-MD5\" } }, \"encryption\": { \"protocol\": \"TLSv1.2\", \"sni-hostname\": \"snihostname\", \"keystore\": { \"filename\": \"/path/to/keystore_client.jks\", \"password\": \"secret\", \"certificate-password\": \"secret\", \"key-alias\": \"hotrod\", \"type\": \"JKS\" }, \"truststore\": { \"filename\": \"/path/to/gca.jks\", \"password\": \"secret\", \"type\": \"JKS\" } } } } }", "GET /rest/v2/caches/{cacheName}/rolling-upgrade/source-connection", "HEAD /rest/v2/caches/{cacheName}/rolling-upgrade/source-connection", "POST /rest/v2/caches/{cacheName}?action=sync-data", "DELETE /rest/v2/caches/{cacheName}/rolling-upgrade/source-connection", "POST /rest/v2/counters/{counterName}", "{ \"weak-counter\":{ \"initial-value\":5, \"storage\":\"PERSISTENT\", \"concurrency-level\":1 } }", "{ \"strong-counter\":{ \"initial-value\":3, \"storage\":\"PERSISTENT\", \"upper-bound\":5 } }", "DELETE /rest/v2/counters/{counterName}", "GET /rest/v2/counters/{counterName}/config", "GET /rest/v2/counters/{counterName}", "POST /rest/v2/counters/{counterName}?action=reset", "POST /rest/v2/counters/{counterName}?action=increment", "POST /rest/v2/counters/{counterName}?action=add&delta={delta}", "POST 
/rest/v2/counters/{counterName}?action=decrement", "POST /rest/v2/counters/{counterName}?action=getAndSet&value={value}", "POST /rest/v2/counters/{counterName}?action=compareAndSet&expect={expect}&update={update}", "POST /rest/v2/counters/{counterName}?action=compareAndSwap&expect={expect}&update={update}", "GET /rest/v2/counters/", "POST /rest/v2/schemas/{schemaName}", "{ \"name\" : \"users.proto\", \"error\" : { \"message\": \"Schema users.proto has errors\", \"cause\": \"java.lang.IllegalStateException:Syntax error in error.proto at 3:8: unexpected label: messoge\" } }", "GET /rest/v2/schemas/{schemaName}", "PUT /rest/v2/schemas/{schemaName}", "{ \"name\" : \"users.proto\", \"error\" : { \"message\": \"Schema users.proto has errors\", \"cause\": \"java.lang.IllegalStateException:Syntax error in error.proto at 3:8: unexpected label: messoge\" } }", "DELETE /rest/v2/schemas/{schemaName}", "GET /rest/v2/schemas/", "[ { \"name\" : \"users.proto\", \"error\" : { \"message\": \"Schema users.proto has errors\", \"cause\": \"java.lang.IllegalStateException:Syntax error in error.proto at 3:8: unexpected label: messoge\" } }, { \"name\" : \"people.proto\", \"error\" : null }]", "GET /rest/v2/schemas?action=types", "[\"org.infinispan.Person\", \"org.infinispan.Phone\"]", "GET /rest/v2/container", "{ \"version\":\"xx.x.x-FINAL\", \"name\":\"default\", \"coordinator\":true, \"cache_configuration_names\":[ \"___protobuf_metadata\", \"cache2\", \"CacheManagerResourceTest\", \"cache1\" ], \"cluster_name\":\"ISPN\", \"physical_addresses\":\"[127.0.0.1:35770]\", \"coordinator_address\":\"CacheManagerResourceTest-NodeA-49696\", \"cache_manager_status\":\"RUNNING\", \"created_cache_count\":\"3\", \"running_cache_count\":\"3\", \"node_address\":\"CacheManagerResourceTest-NodeA-49696\", \"cluster_members\":[ \"CacheManagerResourceTest-NodeA-49696\", \"CacheManagerResourceTest-NodeB-28120\" ], \"cluster_members_physical_addresses\":[ \"127.0.0.1:35770\", \"127.0.0.1:60031\" ], \"cluster_size\":2, \"defined_caches\":[ { \"name\":\"CacheManagerResourceTest\", \"started\":true }, { \"name\":\"cache1\", \"started\":true }, { \"name\":\"___protobuf_metadata\", \"started\":true }, { \"name\":\"cache2\", \"started\":true } ], \"local_site\": \"LON\", \"relay_node\": true, \"relay_nodes_address\": [ \"CacheManagerResourceTest-NodeA-49696\" ], \"sites_view\": [ \"LON\", \"NYC\" ], \"rebalancing_enabled\": true }", "GET /rest/v2/container/health", "{ \"cluster_health\":{ \"cluster_name\":\"ISPN\", \"health_status\":\"HEALTHY\", \"number_of_nodes\":2, \"node_names\":[ \"NodeA-36229\", \"NodeB-28703\" ] }, \"cache_health\":[ { \"status\":\"HEALTHY\", \"cache_name\":\"___protobuf_metadata\" }, { \"status\":\"HEALTHY\", \"cache_name\":\"cache2\" }, { \"status\":\"HEALTHY\", \"cache_name\":\"mycache\" }, { \"status\":\"HEALTHY\", \"cache_name\":\"cache1\" } ] }", "GET /rest/v2/container/health/status", "HEAD /rest/v2/container/health", "GET /rest/v2/container/config", "GET /rest/v2/container/cache-configs", "[ { \"name\":\"cache1\", \"configuration\":{ \"distributed-cache\":{ \"mode\":\"SYNC\", \"partition-handling\":{ \"when-split\":\"DENY_READ_WRITES\" }, \"statistics\":true } } }, { \"name\":\"cache2\", \"configuration\":{ \"distributed-cache\":{ \"mode\":\"SYNC\", \"transaction\":{ \"mode\":\"NONE\" } } } } ]", "GET /rest/v2/cache-configs/templates", "GET /rest/v2/container/stats", "{ \"statistics_enabled\":true, \"read_write_ratio\":0.0, \"time_since_start\":1, \"time_since_reset\":1, \"number_of_entries\":0, 
\"off_heap_memory_used\":0, \"data_memory_used\":0, \"misses\":0, \"remove_hits\":0, \"remove_misses\":0, \"evictions\":0, \"average_read_time\":0, \"average_read_time_nanos\":0, \"average_write_time\":0, \"average_write_time_nanos\":0, \"average_remove_time\":0, \"average_remove_time_nanos\":0, \"required_minimum_number_of_nodes\":1, \"hits\":0, \"stores\":0, \"current_number_of_entries_in_memory\":0, \"hit_ratio\":0.0, \"retrievals\":0 }", "POST /rest/v2/container/stats?action=reset", "POST /rest/v2/container?action=shutdown", "POST /rest/v2/container?action=enable-rebalancing", "POST /rest/v2/container?action=disable-rebalancing", "POST /rest/v2/container/backups/{backupName}", "{ \"directory\": \"/path/accessible/to/the/server\", \"resources\": { \"caches\": [\"cache1\", \"cache2\"], \"counters\": [\"*\"] } }", "GET /rest/v2/container/backups", "[\"backup1\", \"backup2\"]", "HEAD /rest/v2/container/backups/{backupName}", "GET /rest/v2/container/backups/{backupName}", "DELETE /rest/v2/container/backups/{backupName}", "POST /rest/v2/container/restores/{restoreName}", "{ \"location\": \"/path/accessible/to/the/server/backup-to-restore.zip\", \"resources\": { \"counters\": [\"*\"] } }", "Content-Type: multipart/form-data; boundary=5ec9bc07-f069-4662-a535-46069afeda32 Content-Length: 7721 --5ec9bc07-f069-4662-a535-46069afeda32 Content-Disposition: form-data; name=\"resources\" Content-Length: 23 {\"scripts\":[\"test.js\"]} --5ec9bc07-f069-4662-a535-46069afeda32 Content-Disposition: form-data; name=\"backup\"; filename=\"testManagerRestoreParameters.zip\" Content-Type: application/zip Content-Length: 7353 <zip-bytes> --5ec9bc07-f069-4662-a535-46069afeda32--", "GET /rest/v2/container/restores", "[\"restore1\", \"restore2\"]", "HEAD /rest/v2/container/restores/{restoreName}", "DELETE /rest/v2/container/restores/{restoreName}", "GET /rest/v2/container/config?action=listen", "GET /rest/v2/container?action=listen", "GET /rest/v2/container/x-site/backups/", "{ \"SFO-3\":{ \"status\":\"online\" }, \"NYC-2\":{ \"status\":\"mixed\", \"online\":[ \"CACHE_1\" ], \"offline\":[ \"CACHE_2\" ], \"mixed\": [ \"CACHE_3\" ] } }", "GET /rest/v2/container/x-site/backups/{site}", "POST /rest/v2/container/x-site/backups/{siteName}?action=take-offline", "POST /rest/v2/container/x-site/backups/{siteName}?action=bring-online", "GET /rest/v2/caches/{cacheName}/x-site/backups/{site}/state-transfer-mode", "POST /rest/v2/caches/{cacheName}/x-site/backups/{site}/state-transfer-mode?action=set&mode={mode}", "POST /rest/v2/container/x-site/backups/{siteName}?action=start-push-state", "POST /rest/v2/container/x-site/backups/{siteName}?action=cancel-push-state", "GET /rest/v2/server", "{ \"version\":\"Infinispan 'Codename' xx.x.x.Final\" }", "GET /rest/v2/server/cache-managers", "POST /rest/v2/server/ignored-caches/{cache}", "DELETE /rest/v2/server/ignored-caches/{cache}", "GET /rest/v2/server/ignored-caches/", "GET /rest/v2/server/config", "{ \"server\":{ \"interfaces\":{ \"interface\":{ \"name\":\"public\", \"inet-address\":{ \"value\":\"127.0.0.1\" } } }, \"socket-bindings\":{ \"port-offset\":0, \"default-interface\":\"public\", \"socket-binding\":[ { \"name\":\"memcached\", \"port\":11221, \"interface\":\"memcached\" } ] }, \"security\":{ \"security-realms\":{ \"security-realm\":{ \"name\":\"default\" } } }, \"endpoints\":{ \"socket-binding\":\"default\", \"security-realm\":\"default\", \"hotrod-connector\":{ \"name\":\"hotrod\" }, \"rest-connector\":{ \"name\":\"rest\" } } } }", "GET /rest/v2/server/env", "GET 
/rest/v2/server/memory", "POST /rest/v2/server/memory?action=heap-dump[&live=true|false]", "GET /rest/v2/server/threads", "GET /rest/v2/server/report", "GET /rest/v2/server/report/{nodeName}", "POST /rest/v2/server?action=stop", "GET /rest/v2/server/connections", "[ { \"id\": 2, \"name\": \"flower\", \"created\": \"2023-05-18T14:54:37.882566188Z\", \"principal\": \"admin\", \"local-address\": \"/127.0.0.1:11222\", \"remote-address\": \"/127.0.0.1:58230\", \"protocol-version\": \"RESP3\", \"client-library\": null, \"client-version\": null, \"ssl-application-protocol\": \"http/1.1\", \"ssl-cipher-suite\": \"TLS_AES_256_GCM_SHA384\", \"ssl-protocol\": \"TLSv1.3\" }, { \"id\": 0, \"name\": null, \"created\": \"2023-05-18T14:54:07.727775875Z\", \"principal\": \"admin\", \"local-address\": \"/127.0.0.1:11222\", \"remote-address\": \"/127.0.0.1:35716\", \"protocol-version\": \"HTTP/1.1\", \"client-library\": \"Infinispan CLI 15.0.0-SNAPSHOT\", \"client-version\": null, \"ssl-application-protocol\": \"http/1.1\", \"ssl-cipher-suite\": \"TLS_AES_256_GCM_SHA384\", \"ssl-protocol\": \"TLSv1.3\" } ]", "POST /rest/v2/server/caches/defaults", "POST /rest/v2/cluster?action=stop", "POST /rest/v2/cluster?action=stop&server={server1_host}&server={server2_host}", "POST /rest/v2/cluster/backups/{backupName}", "GET /rest/v2/cluster/backups", "[\"backup1\", \"backup2\"]", "HEAD /rest/v2/cluster/backups/{backupName}", "GET /rest/v2/cluster/backups/{backupName}", "DELETE /rest/v2/cluster/backups/{backupName}", "POST /rest/v2/cluster/restores/{restoreName}", "{ \"location\": \"/path/accessible/to/the/server/backup-to-restore.zip\", \"resources\": { \"counters\": [\"*\"] } }", "Content-Type: multipart/form-data; boundary=5ec9bc07-f069-4662-a535-46069afeda32 Content-Length: 7798 --5ec9bc07-f069-4662-a535-46069afeda32 Content-Disposition: form-data; name=\"backup\"; filename=\"testManagerRestoreParameters.zip\" Content-Type: application/zip Content-Length: 7353 <zip-bytes> --5ec9bc07-f069-4662-a535-46069afeda32--", "GET /rest/v2/cluster/restores", "[\"restore1\", \"restore2\"]", "HEAD /rest/v2/cluster/restores/{restoreName}", "DELETE /rest/v2/cluster/restores/{restoreName}", "GET /rest/v2/cluster?action=distribution", "[ { \"node_name\": \"NodeA\", \"node_addresses\": [ \"127.0.0.1:39313\" ], \"memory_available\": 466180016, \"memory_used\": 56010832 }, { \"node_name\": \"NodeB\", \"node_addresses\": [ \"127.0.0.1:47477\" ], \"memory_available\": 467548568, \"memory_used\": 54642280 } ]", "GET /rest/v2/logging/appenders", "{ \"STDOUT\" : { \"name\" : \"STDOUT\" }, \"JSON-FILE\" : { \"name\" : \"JSON-FILE\" }, \"HR-ACCESS-FILE\" : { \"name\" : \"HR-ACCESS-FILE\" }, \"FILE\" : { \"name\" : \"FILE\" }, \"REST-ACCESS-FILE\" : { \"name\" : \"REST-ACCESS-FILE\" } }", "GET /rest/v2/logging/loggers", "[ { \"name\" : \"\", \"level\" : \"INFO\", \"appenders\" : [ \"STDOUT\", \"FILE\" ] }, { \"name\" : \"org.infinispan.HOTROD_ACCESS_LOG\", \"level\" : \"INFO\", \"appenders\" : [ \"HR-ACCESS-FILE\" ] }, { \"name\" : \"com.arjuna\", \"level\" : \"WARN\", \"appenders\" : [ ] }, { \"name\" : \"org.infinispan.REST_ACCESS_LOG\", \"level\" : \"INFO\", \"appenders\" : [ \"REST-ACCESS-FILE\" ] } ]", "PUT /rest/v2/logging/loggers/{loggerName}?level={level}&appender={appender}&appender={appender}", "DELETE /rest/v2/logging/loggers/{loggerName}", "GET /rest/v2/tasks", "[ { \"name\": \"SimpleTask\", \"type\": \"TaskEngine\", \"parameters\": [ \"p1\", \"p2\" ], \"execution_mode\": \"ONE_NODE\", \"allowed_role\": null }, { \"name\": 
\"RunOnAllNodesTask\", \"type\": \"TaskEngine\", \"parameters\": [ \"p1\" ], \"execution_mode\": \"ALL_NODES\", \"allowed_role\": null }, { \"name\": \"SecurityAwareTask\", \"type\": \"TaskEngine\", \"parameters\": [], \"execution_mode\": \"ONE_NODE\", \"allowed_role\": \"MyRole\" } ]", "POST /rest/v2/tasks/SimpleTask?action=exec&cache=mycache&param.p1=v1&param.p2=v2", "POST /rest/v2/tasks/taskName", "GET /rest/v2/tasks/taskName?action=script", "GET /rest/v2/security/user/acl", "{ \"subject\": [ { \"name\": \"deployer\", \"type\": \"NamePrincipal\" } ], \"global\": [ \"READ\", \"WRITE\", \"EXEC\", \"LISTEN\", \"BULK_READ\", \"BULK_WRITE\", \"CREATE\", \"MONITOR\", \"ALL_READ\", \"ALL_WRITE\" ], \"caches\": { \"___protobuf_metadata\": [ \"READ\", \"WRITE\", \"EXEC\", \"LISTEN\", \"BULK_READ\", \"BULK_WRITE\", \"CREATE\", \"MONITOR\", \"ALL_READ\", \"ALL_WRITE\" ], \"mycache\": [ \"LIFECYCLE\", \"READ\", \"WRITE\", \"EXEC\", \"LISTEN\", \"BULK_READ\", \"BULK_WRITE\", \"ADMIN\", \"CREATE\", \"MONITOR\", \"ALL_READ\", \"ALL_WRITE\" ], \"___script_cache\": [ \"READ\", \"WRITE\", \"EXEC\", \"LISTEN\", \"BULK_READ\", \"BULK_WRITE\", \"CREATE\", \"MONITOR\", \"ALL_READ\", \"ALL_WRITE\" ] } }", "POST /rest/v2/security/cache?action=flush", "GET /rest/v2/security/roles", "[\"observer\",\"application\",\"admin\",\"monitor\",\"deployer\"]", "GET /rest/v2/security/roles?action=detailed", "{ \"observer\": { \"inheritable\": true, \"permissions\": [ \"MONITOR\", \"ALL_READ\" ], \"implicit\": true, \"description\": \"...\" }, \"application\": { \"inheritable\": true, \"permissions\": [ \"MONITOR\", \"ALL_WRITE\", \"EXEC\", \"LISTEN\", \"ALL_READ\" ], \"implicit\": true, \"description\": \"...\" }, \"admin\": { \"inheritable\": true, \"permissions\": [ \"ALL\" ], \"implicit\": true, \"description\": \"...\" }, \"monitor\": { \"inheritable\": true, \"permissions\": [ \"MONITOR\" ], \"implicit\": true, \"description\": \"...\" }, \"deployer\": { \"inheritable\": true, \"permissions\": [ \"CREATE\", \"MONITOR\", \"ALL_WRITE\", \"EXEC\", \"LISTEN\", \"ALL_READ\" ], \"implicit\": true, \"description\": \"...\" } }", "GET /rest/v2/security/roles/some_principal", "[\"observer\"]", "PUT /rest/v2/security/roles/some_principal?action=grant&role=role1&role=role2", "PUT /rest/v2/security/roles/some_principal?action=deny&role=role1&role=role2", "GET /rest/v2/security/principals", "{\"default:properties\":[\"admin\",\"user1\",\"user2\"]}", "POST /rest/v2/security/permissions/somerole?permission=permission1&permission=permission2", "POST /rest/v2/security/permissions/somerole?permission=permission1&permission=permission2", "DELETE /rest/v2/security/permissions/somerole", "GET /rest/v2/security/permissions/somerole", "{ \"name\" : \"application\", \"permissions\" : [ \"LISTEN\",\"ALL_READ\",\"MONITOR\",\"ALL_WRITE\",\"EXEC\" ], \"inheritable\": true, \"implicit\": true, \"description\": \"...\" }", "HashMap<String, String> contextMap = new HashMap<>(); // Inject the request with the *current* Context, which contains our current Span. W3CTraceContextPropagator.getInstance().inject(Context.current(), contextMap, (carrier, key, value) -> carrier.put(key, value)); // Pass the context map in the header RestCacheClient client = restClient.cache(CACHE_NAME); client.put(\"aaa\", MediaType.TEXT_PLAIN.toString(),RestEntity.create(MediaType.TEXT_PLAIN, \"bbb\"), contextMap);" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_rest_api/rest_v2_api
Preface
Preface To enable your Jenkins pipeline to perform essential tasks, such as vulnerability scanning, image signing, and attestation, follow these steps. The table outlines the actions you need to take and when you need to complete them. Action When to complete Adding secrets to Jenkins for secure integration with external tools Before you use secure software templates to create an application, add secrets to Jenkins. This ensures seamless integration with ACS, Quay, and GitOps. Add your application to Jenkins After creating the application and source repositories, add them to Jenkins. This enables you to review various aspects of the Jenkins pipeline on the Red Hat Developer Hub platform. By completing these steps, you enable Jenkins to integrate seamlessly with ACS (Advanced Cluster Security), Quay, and GitOps, and utilize Cosign for signing and verifying container images.
null
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/configuring_jenkins/pr01
Chapter 2. Working with ML2/OVN
Chapter 2. Working with ML2/OVN Red Hat OpenStack Platform (RHOSP) networks are managed by the Networking service (neutron). The core of the Networking service is the Modular Layer 2 (ML2) plug-in, and the default mechanism driver for the RHOSP ML2 plug-in is the Open Virtual Networking (OVN) mechanism driver. Earlier RHOSP versions used the Open vSwitch (OVS) mechanism driver by default, but Red Hat recommends the ML2/OVN mechanism driver for most deployments. 2.1. List of components in the RHOSP OVN architecture The RHOSP OVN architecture replaces the OVS Modular Layer 2 (ML2) mechanism driver with the OVN ML2 mechanism driver to support the Networking API. OVN provides networking services for the Red Hat OpenStack Platform. As illustrated in Figure 2.1, the OVN architecture consists of the following components and services: ML2 plug-in with OVN mechanism driver The ML2 plug-in translates the OpenStack-specific networking configuration into the platform-neutral OVN logical networking configuration. It typically runs on the Controller node. OVN northbound (NB) database ( ovn-nb ) This database stores the logical OVN networking configuration from the OVN ML2 plug-in. It typically runs on the Controller node and listens on TCP port 6641 . OVN northbound service ( ovn-northd ) This service converts the logical networking configuration from the OVN NB database to the logical data path flows and populates these on the OVN Southbound database. It typically runs on the Controller node. OVN southbound (SB) database ( ovn-sb ) This database stores the converted logical data path flows. It typically runs on the Controller node and listens on TCP port 6642 . OVN controller ( ovn-controller ) This controller connects to the OVN SB database and acts as the Open vSwitch controller to control and monitor network traffic. It runs on all Compute and gateway nodes where OS::TripleO::Services::OVNController is defined. OVN metadata agent ( ovn-metadata-agent ) This agent creates the haproxy instances for managing the OVS interfaces, network namespaces, and HAProxy processes used to proxy metadata API requests. The agent runs on all Compute and gateway nodes where OS::TripleO::Services::OVNMetadataAgent is defined. OVS database server (OVSDB) Hosts the OVN Northbound and Southbound databases. Also interacts with ovs-vswitchd to host the OVS database conf.db . Note The schema file for the NB database is located in /usr/share/ovn/ovn-nb.ovsschema , and the SB database schema file is in /usr/share/ovn/ovn-sb.ovsschema . Figure 2.1. OVN architecture in a RHOSP environment 2.2. ML2/OVN databases In Red Hat OpenStack Platform ML2/OVN deployments, network configuration information passes between processes through shared distributed databases. You can inspect these databases to verify the status of the network and identify issues. OVN northbound database The northbound database ( OVN_Northbound ) serves as the interface between OVN and a cloud management system such as Red Hat OpenStack Platform (RHOSP). RHOSP produces the contents of the northbound database. The northbound database contains the current desired state of the network, presented as a collection of logical ports, logical switches, logical routers, and more. Every RHOSP Networking service (neutron) object is represented in a table in the northbound database. OVN southbound database The southbound database ( OVN_Southbound ) holds the logical and physical configuration state for the OVN system to support virtual network abstraction. 
The ovn-controller uses the information in this database to configure OVS to satisfy Networking service (neutron) requirements. 2.3. The ovn-controller service on Compute nodes The ovn-controller service runs on each Compute node and connects to the OVN southbound (SB) database server to retrieve the logical flows. The ovn-controller translates these logical flows into physical OpenFlow flows and adds the flows to the OVS bridge ( br-int ). To communicate with ovs-vswitchd and install the OpenFlow flows, the ovn-controller connects to one of the active ovsdb-server servers (which host conf.db ) using the UNIX socket path that was passed when ovn-controller was started (for example unix:/var/run/openvswitch/db.sock ). The ovn-controller service expects certain key-value pairs in the external_ids column of the Open_vSwitch table; puppet-ovn uses puppet-vswitch to populate these fields. The following example shows the key-value pairs that puppet-vswitch configures in the external_ids column: 2.4. OVN metadata agent on Compute nodes The OVN metadata agent is configured in the tripleo-heat-templates/deployment/ovn/ovn-metadata-container-puppet.yaml file and included in the default Compute role through OS::TripleO::Services::OVNMetadataAgent . As such, the OVN metadata agent with default parameters is deployed as part of the OVN deployment. OpenStack guest instances access the Networking metadata service available at the link-local IP address: 169.254.169.254. The neutron-ovn-metadata-agent has access to the host networks where the Compute metadata API exists. Each HAProxy is in a network namespace that is not able to reach the appropriate host network. HAProxy adds the necessary headers to the metadata API request and then forwards the request to the neutron-ovn-metadata-agent over a UNIX domain socket. The OVN Networking service creates a unique network namespace for each virtual network that enables the metadata service. Each network accessed by the instances on the Compute node has a corresponding metadata namespace (ovnmeta-<network_uuid>). 2.5. The OVN composable service Red Hat OpenStack Platform usually consists of nodes in pre-defined roles, such as nodes in Controller roles, Compute roles, and different storage role types. Each of these default roles contains a set of services that are defined in the core heat template collection. In a default Red Hat OpenStack Platform (RHOSP) deployment, the ML2/OVN composable service, ovn-dbs, runs on Controller nodes. Because the service is composable, you can assign it to another role, such as a Networker role. By choosing to assign the ML2/OVN service to another role, you can reduce the load on the Controller node, and implement a high-availability strategy by isolating the Networking service on Networker nodes. Related information Deploying a custom role with ML2/OVN SR-IOV with ML2/OVN and native OVN DHCP 2.6. Layer 3 high availability with OVN OVN supports Layer 3 high availability (L3 HA) without any special configuration. OVN automatically schedules the router port to all available gateway nodes that can act as an L3 gateway on the specified external network. OVN L3 HA uses the gateway_chassis column in the OVN Logical_Router_Port table. Most functionality is managed by OpenFlow rules with bundled active_passive outputs. The ovn-controller handles the Address Resolution Protocol (ARP) responder and router enablement and disablement. Gratuitous ARPs for FIPs and router external addresses are also periodically sent by the ovn-controller . 
Note L3 HA uses OVN to balance the routers back to the original gateway nodes to avoid any nodes becoming a bottleneck. BFD monitoring OVN uses the Bidirectional Forwarding Detection (BFD) protocol to monitor the availability of the gateway nodes. This protocol is encapsulated on top of the Geneve tunnels established from node to node. Each gateway node monitors all the other gateway nodes in a star topology in the deployment. Gateway nodes also monitor the Compute nodes to let the gateways enable and disable routing of packets and ARP responses and announcements. Each Compute node uses BFD to monitor each gateway node and automatically steers external traffic, such as source and destination Network Address Translation (SNAT and DNAT), through the active gateway node for a given router. Compute nodes do not need to monitor other Compute nodes. Note External network failures are not detected as would happen with an ML2-OVS configuration. L3 HA for OVN supports the following failure modes: The gateway node becomes disconnected from the network (tunneling interface). ovs-vswitchd stops ( ovs-vswitchd is responsible for BFD signaling) ovn-controller stops ( ovn-controller removes itself as a registered node). Note This BFD monitoring mechanism only works for link failures, not for routing failures. 2.7. Deploying a custom role with ML2/OVN In a default Red Hat OpenStack Platform (RHOSP) deployment, the ML2/OVN composable service runs on Controller nodes. You can optionally use supported custom roles like those described in the following examples. Networker Run the OVN composable services on dedicated networker nodes. Networker with SR-IOV Run the OVN composable services on dedicated networker nodes with SR-IOV. Controller with SR-IOV Run the OVN composable services on SR-IOV capable controller nodes. You can also generate your own custom roles. Limitations The following limitations apply to the use of SR-IOV with ML2/OVN and native OVN DHCP in this release. All external ports are scheduled on a single gateway node because there is only one HA Chassis Group for all of the ports. North/south routing on VF(direct) ports on VLAN tenant networks does not work with SR-IOV because the external ports are not colocated with the logical router's gateway ports. See https://bugs.launchpad.net/neutron/+bug/1875852 . Prerequisites You know how to deploy custom roles. For more information, see Composable services and custom roles in the Director Installation and Usage guide. Procedure Log in to the undercloud host as the stack user and source the stackrc file. Choose the custom roles file that is appropriate for your deployment. Use it directly in the deploy command if it suits your needs as-is. Or you can generate your own custom roles file that combines other custom roles files. Deployment Role Role File Networker role Networker Networker.yaml Networker role with SR-IOV NetworkerSriov NetworkerSriov.yaml Co-located control and networker with SR-IOV ControllerSriov ControllerSriov.yaml (Optional) Generate a new custom roles data file that combines one of the custom roles files listed earlier with other custom roles files. Follow the instructions in Creating a roles_data file in the Director Installation and Usage guide. Include the appropriate source role files depending on your deployment. (Optional) To identify specific nodes for the role, you can create a specific hardware flavor and assign the flavor to specific nodes. Then use an environment file to define the flavor for the role, and to specify a node count. 
For more information, see the example in Creating a new role in the Director Installation and Usage guide. Create an environment file as appropriate for your deployment. Deployment Sample Environment File Networker role neutron-ovn-dvr-ha.yaml Networker role with SR-IOV ovn-sriov.yaml Include the following settings as appropriate for your deployment. Deployment Settings Networker role Networker role with SR-IOV Co-located control and networker with SR-IOV Run the deployment command and include the core heat templates, other environment files, and the custom roles data file in your deployment command with the -r option. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Verification steps Log in to the Controller or Networker node as the tripleo-admin user: Example Ensure that ovn_metadata_agent is running. Sample output Ensure that Controller nodes with OVN services or dedicated Networker nodes have been configured as gateways for OVS. Sample output Additional verification steps for SR-IOV deployments Log in to a Compute node as the tripleo-admin user: Example Ensure that neutron_sriov_agent is running on Compute nodes. Sample output Ensure that network-available SR-IOV NICs have been successfully detected. Sample output Additional resources Composable services and custom roles in the Director Installation and Usage guide overcloud deploy in the Command Line Interface Reference 2.8. SR-IOV with ML2/OVN and native OVN DHCP You can deploy a custom role to use SR-IOV in an ML2/OVN deployment with native OVN DHCP. See Section 2.7, "Deploying a custom role with ML2/OVN" . Limitations The following limitations apply to the use of SR-IOV with ML2/OVN and native OVN DHCP in this release. All external ports are scheduled on a single gateway node because there is only one HA Chassis Group for all of the ports. North/south routing on VF(direct) ports on VLAN tenant networks does not work with SR-IOV because the external ports are not colocated with the logical router's gateway ports. See https://bugs.launchpad.net/neutron/+bug/1875852 . Additional resources Composable services and custom roles in the Director Installation and Usage guide.
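As a supplement to the verification steps above, the following is a hedged sketch of how you might also confirm the other external_ids keys described in Section 2.3 on a Compute or Networker node; the values shown in the comments are illustrative only and depend on your deployment:

# Inspect the OVN-related keys that puppet-vswitch populates in the external_ids
# column of the Open_vSwitch table (values shown are examples only).
sudo ovs-vsctl get Open_vSwitch . external_ids:ovn-remote       # for example "tcp:OVN_DBS_VIP:6642"
sudo ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-type   # typically geneve
sudo ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options  # "enable-chassis-as-gw" on gateway nodes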
[ "hostname=<HOST NAME> ovn-encap-ip=<IP OF THE NODE> ovn-encap-type=geneve ovn-remote=tcp:OVN_DBS_VIP:6642", "source stackrc", "ControllerParameters: OVNCMSOptions: \"\" ControllerSriovParameters: OVNCMSOptions: \"\" NetworkerParameters: OVNCMSOptions: \"enable-chassis-as-gw\" NetworkerSriovParameters: OVNCMSOptions: \"\"", "OS::TripleO::Services::NeutronDhcpAgent: OS::Heat::None ControllerParameters: OVNCMSOptions: \"\" ControllerSriovParameters: OVNCMSOptions: \"\" NetworkerParameters: OVNCMSOptions: \"\" NetworkerSriovParameters: OVNCMSOptions: \"enable-chassis-as-gw\"", "OS::TripleO::Services::NeutronDhcpAgent: OS::Heat::None ControllerParameters: OVNCMSOptions: \"\" ControllerSriovParameters: OVNCMSOptions: \"enable-chassis-as-gw\" NetworkerParameters: OVNCMSOptions: \"\" NetworkerSriovParameters: OVNCMSOptions: \"\"", "openstack overcloud deploy --templates <core_heat_templates> -e <other_environment_files> -e /home/stack/templates/my-neutron-environment.yaml -r mycustom_roles_file.yaml", "ssh tripleo-admin@controller-0", "sudo podman ps | grep ovn_metadata", "a65125d9588d undercloud-0.ctlplane.localdomain:8787/rh-osbs openstack-neutron-metadata-agent-ovn kolla_start 23 hours ago Up 21 hours ago ovn_metadata_agent", "sudo ovs-vsctl get Open_Vswitch . external_ids:ovn-cms-options", "enable-chassis-as-gw", "ssh tripleo-admin@compute-0", "sudo podman ps | grep neutron_sriov_agent", "f54cbbf4523a undercloud-0.ctlplane.localdomain:8787 openstack-neutron-sriov-agent kolla_start 23 hours ago Up 21 hours ago neutron_sriov_agent", "sudo podman exec -uroot galera-bundle-podman-0 mysql nova -e 'select hypervisor_hostname,pci_stats from compute_nodes;'", "computesriov-1.localdomain {... {\"dev_type\": \"type-PF\", \"physical_network\" : \"datacentre\", \"trusted\": \"true\"}, \"count\": 1}, ... {\"dev_type\": \"type-VF\", \"physical_network\": \"datacentre\", \"trusted\": \"true\", \"parent_ifname\": \"enp7s0f3\"}, \"count\": 5}, ...} computesriov-0.localdomain {... {\"dev_type\": \"type-PF\", \"physical_network\": \"datacentre\", \"trusted\": \"true\"}, \"count\": 1}, ... {\"dev_type\": \"type-VF\", \"physical_network\": \"datacentre\", \"trusted\": \"true\", \"parent_ifname\": \"enp7s0f3\"}, \"count\": 5}, ...}" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/networking_guide/assembly_work-with-ovn_rhosp-network
Chapter 5. View OpenShift Data Foundation Topology
Chapter 5. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information, such as status and health, or an indication of alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_microsoft_azure/viewing-odf-topology_mcg-verify
5.88. gnome-keyring
5.88. gnome-keyring 5.88.1. RHBA-2012:1334 - gnome-keyring bug fix update Updated gnome-keyring packages that fix one bug are now available for Red Hat Enterprise Linux 6. The gnome-keyring session daemon manages passwords and other types of secrets for the user, storing them encrypted with a main password. Applications can use the gnome-keyring library to integrate with the key ring. Bug Fix BZ# 860644 Due to a bug in the thread-locking mechanism, the gnome-keyring daemon could sporadically become unresponsive while reading data. This update fixes the thread-locking mechanism and no more deadlocks occur in gnome-keyring in the described scenario. All gnome-keyring users are advised to upgrade to these updated packages, which fix this bug. 5.88.2. RHBA-2012:0878 - gnome-keyring bug fix update Updated gnome-keyring packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The gnome-keyring session daemon manages passwords and other types of secrets for the user, storing them encrypted with a main password. Applications can use the gnome-keyring library to integrate with the keyring. Bug Fixes BZ# 708919 , BZ# 745695 Previously, the mechanism for locking threads was missing. Due to this, gnome-keyring could have, under certain circumstances, terminated unexpectedly on multiple key requests from the integrated ssh-agent. With this update, the missing mechanism has been integrated into gnome-keyring so that gnome-keyring now works as expected. All users of gnome-keyring are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/gnome-keyring
Chapter 9. Working with certificates
Chapter 9. Working with certificates Certificates are used by various components in OpenShift to validate access to the cluster. For clusters that rely on self-signed certificates, you can add those self-signed certificates to a cluster-wide Certificate Authority (CA) bundle and use the CA bundle in Red Hat OpenShift AI. You can also use self-signed certificates in a custom CA bundle that is separate from the cluster-wide bundle. Administrators can add a CA bundle, remove a CA bundle from all namespaces, remove a CA bundle from individual namespaces, or manually manage certificate changes instead of the system. 9.1. Understanding certificates in OpenShift AI For OpenShift clusters that rely on self-signed certificates, you can add those self-signed certificates to a cluster-wide Certificate Authority (CA) bundle ( ca-bundle.crt ) and use the CA bundle in Red Hat OpenShift AI. You can also use self-signed certificates in a custom CA bundle ( odh-ca-bundle.crt ) that is separate from the cluster-wide bundle. 9.1.1. How CA bundles are injected After installing OpenShift AI, the Red Hat OpenShift AI Operator automatically creates an empty odh-trusted-ca-bundle configuration file (ConfigMap), and the Cluster Network Operator (CNO) injects the cluster-wide CA bundle into the odh-trusted-ca-bundle configMap with the label "config.openshift.io/inject-trusted-cabundle". The components deployed in the affected namespaces are responsible for mounting this configMap as a volume in the deployment pods. After the CNO operator injects the bundle, it updates the ConfigMap with the ca-bundle.crt file containing the certificates. 9.1.2. How the ConfigMap is managed By default, the Red Hat OpenShift AI Operator manages the odh-trusted-ca-bundle ConfigMap. If you want to manage or remove the odh-trusted-ca-bundle ConfigMap, or add a custom CA bundle ( odh-ca-bundle.crt ) separate from the cluster-wide CA bundle ( ca-bundle.crt ), you can use the trustedCABundle property in the Operator's DSC Initialization (DSCI) object. In the Operator's DSCI object, you can set the spec.trustedCABundle.managementState field to the following values: Managed : The Red Hat OpenShift AI Operator manages the odh-trusted-ca-bundle ConfigMap and adds it to all non-reserved existing and new namespaces (the ConfigMap is not added to any reserved or system namespaces, such as default , openshift-\* or kube-* ). The ConfigMap is automatically updated to reflect any changes made to the customCABundle field. This is the default value after installing Red Hat OpenShift AI. Removed : The Red Hat OpenShift AI Operator removes the odh-trusted-ca-bundle ConfigMap (if present) and disables the creation of the ConfigMap in new namespaces. If you change this field from Managed to Removed , the odh-trusted-ca-bundle ConfigMap is also deleted from namespaces. This is the default value after upgrading Red Hat OpenShift AI from 2.7 or earlier versions to 2.18. Unmanaged : The Red Hat OpenShift AI Operator does not manage the odh-trusted-ca-bundle ConfigMap, allowing for an administrator to manage it instead. Changing the managementState from Managed to Unmanaged does not remove the odh-trusted-ca-bundle ConfigMap, but the ConfigMap is not updated if you make changes to the customCABundle field. In the Operator's DSCI object, you can add a custom certificate to the spec.trustedCABundle.customCABundle field. This adds the odh-ca-bundle.crt file containing the certificates to the odh-trusted-ca-bundle ConfigMap, as shown in the following example: 9.2. 
Adding a CA bundle There are two ways to add a Certificate Authority (CA) bundle to OpenShift AI. You can use one or both of these methods: For OpenShift clusters that rely on self-signed certificates, you can add those self-signed certificates to a cluster-wide Certificate Authority (CA) bundle ( ca-bundle.crt ) and use the CA bundle in Red Hat OpenShift AI. To use this method, log in to the OpenShift as a cluster administrator and follow the steps as described in Configuring the cluster-wide proxy during installation . You can use self-signed certificates in a custom CA bundle ( odh-ca-bundle.crt ) that is separate from the cluster-wide bundle. To use this method, follow the steps in this section. Prerequisites You have admin access to the DSCInitialization resources in the OpenShift cluster. You installed the OpenShift command line interface ( oc ) as described in Installing the OpenShift CLI . You are working in a new installation of Red Hat OpenShift AI. If you upgraded Red Hat OpenShift AI, see Adding a CA bundle after upgrading . Procedure Log in to the OpenShift. Click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Click the DSC Initialization tab. Click the default-dsci object. Click the YAML tab. In the spec section, add the custom certificate to the customCABundle field for trustedCABundle , as shown in the following example: Click Save . Verification If you are using a cluster-wide CA bundle, run the following command to verify that all non-reserved namespaces contain the odh-trusted-ca-bundle ConfigMap: If you are using a custom CA bundle, run the following command to verify that a non-reserved namespace contains the odh-trusted-ca-bundle ConfigMap and that the ConfigMap contains your customCABundle value. In the following command, example-namespace is the non-reserved namespace and examplebundle123 is the customCABundle value. 9.3. Removing a CA bundle You can remove a Certificate Authority (CA) bundle from all non-reserved namespaces in OpenShift AI. This process changes the default configuration and disables the creation of the odh-trusted-ca-bundle configuration file (ConfigMap), as described in Understanding certificates in OpenShift AI . Note The odh-trusted-ca-bundle ConfigMaps are only deleted from namespaces when you set the managementState of trustedCABundle to Removed ; deleting the DSC Initialization does not delete the ConfigMaps. To remove a CA bundle from a single namespace only, see Removing a CA bundle from a namespace . Prerequisites You have cluster administrator privileges for your OpenShift cluster. You installed the OpenShift command line interface ( oc ) as described in Installing the OpenShift CLI . Procedure In the OpenShift web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Click the DSC Initialization tab. Click the default-dsci object. Click the YAML tab. In the spec section, change the value of the managementState field for trustedCABundle to Removed : Click Save . Verification Run the following command to verify that the odh-trusted-ca-bundle ConfigMap has been removed from all namespaces: The command should not return any ConfigMaps. 9.4. Removing a CA bundle from a namespace You can remove a custom Certificate Authority (CA) bundle from individual namespaces in OpenShift AI. This process disables the creation of the odh-trusted-ca-bundle configuration file (ConfigMap) for the specified namespace only. 
To remove a certificate bundle from all namespaces, see Removing a CA bundle . Prerequisites You have cluster administrator privileges for your OpenShift cluster. You installed the OpenShift command line interface ( oc ) as described in Installing the OpenShift CLI . Procedure Run the following command to remove a CA bundle from a namespace. In the following command, example-namespace is the non-reserved namespace. Verification Run the following command to verify that the CA bundle has been removed from the namespace. In the following command, example-namespace is the non-reserved namespace. The command should return configmaps "odh-trusted-ca-bundle" not found . 9.5. Managing certificates After installing OpenShift AI, the Red Hat OpenShift AI Operator creates the odh-trusted-ca-bundle configuration file (ConfigMap) that contains the trusted CA bundle and adds it to all new and existing non-reserved namespaces in the cluster. By default, the Red Hat OpenShift AI Operator manages the odh-trusted-ca-bundle ConfigMap and automatically updates it if any changes are made to the CA bundle. You can choose to manage the odh-trusted-ca-bundle ConfigMap instead of allowing the Red Hat OpenShift AI Operator to manage it. Prerequisites You have cluster administrator privileges for your OpenShift cluster. Procedure In the OpenShift web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator . Click the DSC Initialization tab. Click the default-dsci object. Click the YAML tab. In the spec section, change the value of the managementState field for trustedCABundle to Unmanaged , as shown: Click Save . Note that changing the managementState from Managed to Unmanaged does not remove the odh-trusted-ca-bundle ConfigMap, but the ConfigMap is not updated if you make changes to the customCABundle field. Verification In the spec section, set or change the value of the customCABundle field for trustedCABundle , for example: Click Save . Click Workloads ConfigMaps . Select a project from the project list. Click the odh-trusted-ca-bundle ConfigMap. Click the YAML tab and verify that the value of the customCABundle field did not update. 9.6. Accessing S3-compatible object storage with self-signed certificates To use object storage solutions or databases that are deployed in an OpenShift cluster that uses self-signed certificates, you must configure OpenShift AI to trust the cluster's certificate authority (CA). Each namespace has a ConfigMap called kube-root-ca.crt that contains the CA certificates of the internal API Server. Use the following steps to configure OpenShift AI to trust the certificates issued by kube-root-ca.crt . Alternatively, you can add a custom CA bundle by using the OpenShift console, as described in Adding a CA bundle . Prerequisites You have cluster administrator privileges for your OpenShift cluster. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . You have an object storage solution or database deployed in your OpenShift cluster. 
Procedure In a terminal window, log in to the OpenShift CLI as shown in the following example: Run the following command to fetch the current OpenShift AI trusted CA configuration and store it in a new file: Add the cluster's kube-root-ca.crt ConfigMap to the OpenShift AI trusted CA configuration: Update the OpenShift AI trusted CA configuration to trust certificates issued by the certificate authorities in kube-root-ca.crt : Verification You can successfully deploy components that are configured to use object storage solutions or databases that are deployed in the OpenShift cluster. For example, a pipeline server that is configured to use a database deployed in the cluster starts successfully. Note You can verify your new certificate configuration by following the steps in the OpenShift AI fraud detection tutorial. Run the script to install local object storage buckets and create connections, and then enable data science pipelines. For more information about running the script to install local object storage buckets, see Running a script to install local object storage buckets and create connections . For more information about enabling data science pipelines, see Enabling data science pipelines . 9.7. Using self-signed certificates with OpenShift AI components Some OpenShift AI components have additional options or required configuration for self-signed certificates. 9.7.1. Using certificates with data science pipelines If you want to use self-signed certificates, you have added them to a central Certificate Authority (CA) bundle as described in Working with certificates (for disconnected environments, see Working with certificates ). No additional configuration is necessary to use those certificates with data science pipelines. 9.7.1.1. Providing a CA bundle only for data science pipelines Perform the following steps to provide a Certificate Authority (CA) bundle just for data science pipelines. Procedure Log in to OpenShift. From Workloads ConfigMaps , create a ConfigMap with the required bundle in the same data science project or namespace as the target data science pipeline: Add the following snippet to the .spec.apiserver.caBundle field of the underlying Data Science Pipelines Application (DSPA): The pipeline server pod redeploys with the updated bundle and uses it in the newly created pipeline pods. Verification Perform the following steps to confirm that your CA bundle was successfully mounted. Log in to the OpenShift console. Go to the OpenShift project that corresponds to the data science project. Click the Pods tab. Click the pipeline server pod with the ds-pipeline-dspa-<hash> prefix. Click Terminal . Enter cat /dsp-custom-certs/dsp-ca.crt . Verify that your CA bundle is present within this file. You can also confirm that your CA bundle was successfully mounted by using the CLI: In a terminal window, log in to the OpenShift cluster where OpenShift AI is deployed. Set the dspa value: Set the dsProject value, replacing USDYOUR_DS_PROJECT with the name of your data science project: Set the pod value: Display the contents of the /dsp-custom-certs/dsp-ca.crt file: Verify that your CA bundle is present within this file. 9.7.2. Using certificates with workbenches Important Self-signed certificates apply by default to workbenches that you create after configuring the certificates centrally as described in Working with certificates (for disconnected environments, see Working with certificates ). 
To apply centrally configured certificates to an existing workbench, stop and then restart the workbench. Self-signed certificates are stored in /etc/pki/tls/custom-certs/ca-bundle.crt . Workbenches are preset with an environment variable that points packages to this path, and that covers many popular HTTP client packages. For packages that are not included by default, you can provide this certificate path. For example, for the kfp package to connect to the data science pipeline server:
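A minimal sketch follows; sa_token_file_path, the host URL, and the project route are placeholders for values from your own environment, and the call at the end simply lists experiments as a connectivity check:

from kfp.client import Client

# Read the bearer token from the service account token file (path is a placeholder).
with open(sa_token_file_path, 'r') as token_file:
    bearer_token = token_file.read()

# Point the client at the centrally managed CA bundle mounted in the workbench.
client = Client(
    host='https://<GO_TO_ROUTER_OF_DS_PROJECT>/',
    existing_token=bearer_token,
    ssl_ca_cert='/etc/pki/tls/custom-certs/ca-bundle.crt'
)

# List experiments as a simple check that the TLS connection is trusted.
print(client.list_experiments())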
[ "apiVersion: v1 kind: ConfigMap metadata: labels: app.kubernetes.io/part-of: opendatahub-operator config.openshift.io/inject-trusted-cabundle: 'true' name: odh-trusted-ca-bundle", "apiVersion: v1 kind: ConfigMap metadata: labels: app.kubernetes.io/part-of: opendatahub-operator config.openshift.io/inject-trusted-cabundle: 'true' name: odh-trusted-ca-bundle data: ca-bundle.crt: | <BUNDLE OF CLUSTER-WIDE CERTIFICATES>", "spec: trustedCABundle: managementState: Managed customCABundle: \"\"", "apiVersion: v1 kind: ConfigMap metadata: labels: app.kubernetes.io/part-of: opendatahub-operator config.openshift.io/inject-trusted-cabundle: 'true' name: odh-trusted-ca-bundle data: ca-bundle.crt: | <BUNDLE OF CLUSTER-WIDE CERTIFICATES> odh-ca-bundle.crt: | <BUNDLE OF CUSTOM CERTIFICATES>", "spec: trustedCABundle: managementState: Managed customCABundle: | -----BEGIN CERTIFICATE----- examplebundle123 -----END CERTIFICATE-----", "oc get configmaps --all-namespaces -l app.kubernetes.io/part-of=opendatahub-operator | grep odh-trusted-ca-bundle", "oc get configmap odh-trusted-ca-bundle -n example-namespace -o yaml | grep examplebundle123", "spec: trustedCABundle: managementState: Removed", "oc get configmaps --all-namespaces | grep odh-trusted-ca-bundle", "oc annotate ns example-namespace security.opendatahub.io/inject-trusted-ca-bundle=false", "oc get configmap odh-trusted-ca-bundle -n example-namespace", "spec: trustedCABundle: managementState: Unmanaged", "spec: trustedCABundle: managementState: Unmanaged customCABundle: example123", "login api.<cluster_name>.<cluster_domain>:6443 --web", "get dscinitializations.dscinitialization.opendatahub.io default-dsci -o json | jq -r '.spec.trustedCABundle.customCABundle' > /tmp/my-custom-ca-bundles.crt", "get configmap kube-root-ca.crt -o jsonpath=\"{['data']['ca\\.crt']}\" >> /tmp/my-custom-ca-bundles.crt", "patch dscinitialization default-dsci --type='json' -p='[{\"op\":\"replace\",\"path\":\"/spec/trustedCABundle/customCABundle\",\"value\":\"'\"USD(awk '{printf \"%s\\\\n\", USD0}' /tmp/my-custom-ca-bundles.crt)\"'\"}]'", "kind: ConfigMap apiVersion: v1 metadata: name: custom-ca-bundle data: ca-bundle.crt: | # contents of ca-bundle.crt", "apiVersion: datasciencepipelinesapplications.opendatahub.io/v1 kind: DataSciencePipelinesApplication metadata: name: data-science-dspa spec: apiServer: cABundle: configMapName: custom-ca-bundle configMapKey: ca-bundle.crt", "login", "dspa=dspa", "dsProject=USDYOUR_DS_PROJECT", "pod=USD(oc get pod -n USD{dsProject} -l app=ds-pipeline-USD{dspa} --no-headers | awk '{print USD1}')", "-n USD{dsProject} exec USDpod -- cat /dsp-custom-certs/dsp-ca.crt", "from kfp.client import Client with open(sa_token_file_path, 'r') as token_file: bearer_token = token_file.read() client = Client( host='https://<GO_TO_ROUTER_OF_DS_PROJECT>/', existing_token=bearer_token, ssl_ca_cert='/etc/pki/tls/custom-certs/ca-bundle.crt' ) print(client.list_experiments())" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/installing_and_uninstalling_openshift_ai_self-managed/working-with-certificates_certs
4.8. Using stunnel
4.8. Using stunnel The stunnel program is an encryption wrapper between a client and a server. It listens on the port specified in its configuration file, encrypts the communication with the client, and forwards the data to the original daemon listening on its usual port. This way, you can secure any service that itself does not support any type of encryption, or improve the security of a service that uses a type of encryption that you want to avoid for security reasons, such as SSL versions 2 and 3, affected by the POODLE SSL vulnerability (CVE-2014-3566). See https://access.redhat.com/solutions/1234773 for details. CUPS is an example of a component that does not provide a way to disable SSL in its own configuration. 4.8.1. Installing stunnel Install the stunnel package by entering the following command as root : 4.8.2. Configuring stunnel as a TLS Wrapper To configure stunnel , follow these steps: You need a valid certificate for stunnel regardless of what service you use it with. If you do not have a suitable certificate, you can apply to a Certificate Authority to obtain one, or you can create a self-signed certificate. Warning Always use certificates signed by a Certificate Authority for servers running in a production environment. Self-signed certificates are only appropriate for testing purposes or private networks. See Section 4.7.2.1, "Creating a Certificate Signing Request" for more information about certificates granted by a Certificate Authority. On the other hand, to create a self-signed certificate for stunnel , enter the /etc/pki/tls/certs/ directory and type the following command as root : Answer all of the questions to complete the process. When you have a certificate, create a configuration file for stunnel . It is a text file in which every line specifies an option or the beginning of a service definition. You can also keep comments and empty lines in the file to improve its legibility; comments start with a semicolon. The stunnel RPM package contains the /etc/stunnel/ directory, in which you can store the configuration file. Although stunnel does not require any special format of the file name or its extension, use /etc/stunnel/stunnel.conf . The following content configures stunnel as a TLS wrapper: Alternatively, you can avoid SSL by replacing the line containing sslVersion = TLSv1 with the following lines: The purpose of the options is as follows: cert - the path to your certificate sslVersion - the version of SSL; note that you can use TLS here even though SSL and TLS are two independent cryptographic protocols chroot - the changed root directory in which the stunnel process runs, for greater security setuid , setgid - the user and group that the stunnel process runs as; nobody is a restricted system account pid - the file in which stunnel saves its process ID, relative to chroot socket - local and remote socket options; in this case, disable Nagle's algorithm to improve network latency [ service_name ] - the beginning of the service definition; the options used below this line apply to the given service only, whereas the options above affect stunnel globally accept - the port to listen on connect - the port to connect to; this must be the port that the service you are securing uses TIMEOUTclose - how many seconds to wait for the close_notify alert from the client; 0 instructs stunnel not to wait at all options - OpenSSL library options Example 4.3. 
Securing CUPS To configure stunnel as a TLS wrapper for CUPS , use the following values: Instead of 632 , you can use any free port that you prefer. 631 is the port that CUPS normally uses. Create the chroot directory and give the user specified by the setuid option write access to it. To do so, enter the following commands as root : This allows stunnel to create the PID file. If your system is using firewall settings that disallow access to the new port, change them accordingly. See Section 5.6.7, "Opening Ports using GUI" for details. When you have created the configuration file and the chroot directory, and when you are sure that the specified port is accessible, you are ready to start using stunnel . 4.8.3. Starting, Stopping, and Restarting stunnel To start stunnel , enter the following command as root : By default, stunnel uses /var/log/secure to log its output. To terminate stunnel , kill the process by running the following command as root : If you edit the configuration file while stunnel is running, terminate stunnel and start it again for your changes to take effect.
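To confirm that the wrapper accepts TLS connections, a quick check (a sketch, assuming the CUPS example above with stunnel listening on port 632 of the local host) is to connect with the OpenSSL client and review the reported certificate and protocol:

~]# openssl s_client -connect localhost:632

Close the test connection with Ctrl+C. If the handshake fails, recheck the certificate path and the accept port in /etc/stunnel/stunnel.conf .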
[ "~]# yum install stunnel", "certs]# make stunnel.pem", "cert = /etc/pki/tls/certs/stunnel.pem ; Allow only TLS, thus avoiding SSL sslVersion = TLSv1 chroot = /var/run/stunnel setuid = nobody setgid = nobody pid = /stunnel.pid socket = l:TCP_NODELAY=1 socket = r:TCP_NODELAY=1 [ service_name ] accept = port connect = port TIMEOUTclose = 0", "options = NO_SSLv2 options = NO_SSLv3", "[cups] accept = 632 connect = 631", "~]# mkdir /var/run/stunnel ~]# chown nobody:nobody /var/run/stunnel", "~]# stunnel /etc/stunnel/stunnel.conf", "~]# kill `cat /var/run/stunnel/stunnel.pid`" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-using_stunnel
7.165. openldap
7.165. openldap 7.165.1. RHBA-2013:0364 - openldap bug fix and enhancement update Updated openldap packages that fix multiple bugs and add an enhancement are now available for Red Hat Enterprise Linux 6. OpenLDAP is an open source suite of LDAP (Lightweight Directory Access Protocol) applications and development tools. LDAP is a set of protocols for accessing directory services (usually phone book style information, but other information is possible) over the Internet, similar to the way DNS (Domain Name System) information is propagated over the Internet. The openldap package contains configuration files, libraries, and documentation for OpenLDAP. Bug Fixes BZ# 820278 When the smbk5pwd overlay was enabled in an OpenLDAP server and a user changed their password, the Microsoft NT LAN Manager (NTLM) and Microsoft LAN Manager (LM) hashes were not computed correctly. Consequently, the sambaLMPassword and sambaNTPassword attributes were updated with incorrect values, preventing the user from logging in using a Windows-based client or a Samba client. With this update, the smbk5pwd overlay is linked against OpenSSL. As such, the NTLM and LM hashes are computed correctly and password changes work as expected when using smbk5pwd . BZ# 857390 If the TLS_CACERTDIR configuration option used a prefix that specified a Mozilla NSS database type, such as sql: , and a TLS operation was requested, the certificate database failed to open. This update provides a patch, which removes the database type prefix when checking the existence of a directory with a certificate database, and the certificate database is now successfully opened even if the database type prefix is used. BZ# 829319 When a file containing a password was provided to open a database without user interaction, a piece of unallocated memory could be read and be mistaken to contain a password, causing the connection to become unresponsive. A patch has been applied to correctly allocate the memory for the password file and the connection no longer hangs in the described scenario. BZ# 818572 When a TLS connection to an LDAP server was established, used, and then correctly terminated, the order of the internal TLS shutdown operations was incorrect. Consequently, unexpected terminations and other issues could occur in the underlying cryptographic library (Mozilla NSS). A patch has been provided to reorder the operations performed when closing the connection. Now, the order of TLS shutdown operations matches the Mozilla NSS documentation, thus fixing this bug. BZ# 859858 When TLS was configured to use a certificate from a PEM file while TLS_CACERTDIR was set to use a Mozilla NSS certificate database, the PEM certificate failed to load. With this update, the certificate is first looked up in the Mozilla NSS certificate database and if not found, the PEM file is used as a fallback. As a result, PEM certificates are now properly loaded in the described scenario. BZ# 707599 The OpenLDAP server could be configured for replication with TLS enabled for both accepting connections from remote peers and for TLS client authentication to the other replicas. When a different TLS configuration was used for the server and for connecting to replicas, a connection to a replica could fail due to TLS certificate lookup errors or due to unknown PKCS#11 TLS errors. This update provides a set of patches, which makes multiple TLS LDAP contexts within one process possible without affecting the others. 
As a result, OpenLDAP replication works properly in the described scenario. BZ# 811468 When the CA (Certificate Authority) certificate directory hashed via OpenSSL was configured to be used as a source of trusted CA certificates, the libldap library incorrectly expected that filenames of all hashed certificates end with the .0 suffix. Consequently, even though any numeric suffix is allowed, only certificates with the .0 suffix were loaded. This update provides a patch that properly checks filenames in the OpenSSL CA certificate directory and now all certificates that are allowed to be in that directory are loaded by libldap as expected. BZ#843056 When multiple LDAP servers were specified with TLS enabled and a connection to a server failed because the host name did not match the name in the certificate, fallback to another server was performed. However, the fallback connection became unresponsive during the TLS handshake. This update provides a patch that re-creates internal structures, which handle the connection state, and the fallback connection no longer hangs in the described scenario. BZ#864913 When the OpenLDAP server was configured to use the rwm overlay and a client sent the modrdn operation, which included the newsuperior attribute matching the current superior attribute of the entry being modified, the slapd server terminated unexpectedly with a segmentation fault. With this update, slapd is prevented from accessing uninitialized memory in the described scenario, the crashes no longer occur, and the client operation now finishes successfully. BZ# 828787 When a self-signed certificate without Basic Constraint Extension (BCE) was used as a server TLS certificate and the TLS client was configured to ignore any TLS certificate validation errors, the client could not connect to the server and an incorrect message about missing BCE was returned. This update provides a patch to preserve the original TLS certificate validation error if BCE is not found in the certificate. As a result, clients can connect to the server, proper error messages about the untrusted certification authority that signed the server certificate are returned, and the connection continues as expected. BZ# 821848 When the slapd server configuration database ( cn=config ) was configured with replication in mirror mode and the replication configuration ( olcSyncrepl ) was changed, the cn=config database was silently removed from mirror mode and could not be further modified without restarting the slapd daemon. With this update, changes in replication configuration are properly handled so that the state of mirror mode is now properly preserved and the cn=config database can be modified in the described scenario. BZ# 835012 Previously, the OpenLDAP library looked up an AAAA (IPv6) DNS record while resolving the server IP address even if IPv6 was disabled on the host, which could cause extra delays when connecting. With this update, the AI_ADDRCONFIG flag is set when resolving the remote host address. As a result, the OpenLDAP library no longer looks up the AAAA DNS record when resolving the server IP address and IPv6 is disabled on the local system. Enhancements BZ# 852339 When libldap was configured to use TLS, not all TLS ciphers supported by the Mozilla NSS library could be used. This update provides all missing ciphers supported by Mozilla NSS to the internal list of ciphers in libldap , thus improving libldap security capabilities. 
Users of openldap are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/openldap
Chapter 2. Configuring the OpenShift Pipelines tkn CLI
Chapter 2. Configuring the OpenShift Pipelines tkn CLI Configure the Red Hat OpenShift Pipelines tkn CLI to enable tab completion. 2.1. Enabling tab completion After you install the tkn CLI, you can enable tab completion to automatically complete tkn commands or suggest options when you press Tab. Prerequisites You must have the tkn CLI tool installed. You must have bash-completion installed on your local system. Procedure The following procedure enables tab completion for Bash. Save the Bash completion code to a file: $ tkn completion bash > tkn_bash_completion Copy the file to /etc/bash_completion.d/ : $ sudo cp tkn_bash_completion /etc/bash_completion.d/ Alternatively, you can save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal.
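For example, a minimal sketch of the .bashrc alternative mentioned above; the file name and location ( ~/.tkn_completion ) are arbitrary choices:

$ tkn completion bash > ~/.tkn_completion
$ echo 'source ~/.tkn_completion' >> ~/.bashrc

Open a new terminal, or run source ~/.bashrc , for the change to take effect.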
[ "tkn completion bash > tkn_bash_completion", "sudo cp tkn_bash_completion /etc/bash_completion.d/" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/pipelines_cli_tkn_reference/op-configuring-tkn
Chapter 16. Bean Validator
Chapter 16. Bean Validator Only producer is supported The Validator component performs bean validation of the message body using the Java Bean Validation API. Camel uses the reference implementation, which is Hibernate Validator . 16.1. Dependencies When using bean-validator with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-bean-validator-starter</artifactId> </dependency> 16.2. URI format Where label is an arbitrary text value describing the endpoint. You can append query options to the URI in the following format, ?option=value&option=value&... 16.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 16.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 16.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 16.4. Component Options The Bean Validator component supports 8 options, which are listed below. Name Description Default Type ignoreXmlConfiguration (producer) Whether to ignore data from the META-INF/validation.xml file. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean constraintValidatorFactory (advanced) To use a custom ConstraintValidatorFactory. ConstraintValidatorFactory messageInterpolator (advanced) To use a custom MessageInterpolator. 
MessageInterpolator traversableResolver (advanced) To use a custom TraversableResolver. TraversableResolver validationProviderResolver (advanced) To use a custom ValidationProviderResolver. ValidationProviderResolver validatorFactory (advanced) Autowired To use a custom ValidatorFactory. ValidatorFactory 16.5. Endpoint Options The Bean Validator endpoint is configured using URI syntax: with the following path and query parameters: 16.5.1. Path Parameters (1 parameter) Name Description Default Type label (producer) Required Where label is an arbitrary text value describing the endpoint. String 16.5.2. Query Parameters (8 parameters) Name Description Default Type group (producer) To use a custom validation group. javax.validation.groups.Default String ignoreXmlConfiguration (producer) Whether to ignore data from the META-INF/validation.xml file. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean constraintValidatorFactory (advanced) To use a custom ConstraintValidatorFactory. ConstraintValidatorFactory messageInterpolator (advanced) To use a custom MessageInterpolator. MessageInterpolator traversableResolver (advanced) To use a custom TraversableResolver. TraversableResolver validationProviderResolver (advanced) To use a custom ValidationProviderResolver. ValidationProviderResolver validatorFactory (advanced) To use a custom ValidatorFactory. ValidatorFactory 16.6. OSGi deployment To use Hibernate Validator in the OSGi environment, use a dedicated ValidationProviderResolver implementation, such as org.apache.camel.component.bean.validator.HibernateValidationProviderResolver . The snippet below demonstrates this approach. You can also use HibernateValidationProviderResolver . Using HibernateValidationProviderResolver from("direct:test"). to("bean-validator://ValidationProviderResolverTest?validationProviderResolver=#myValidationProviderResolver"); <bean id="myValidationProviderResolver" class="org.apache.camel.component.bean.validator.HibernateValidationProviderResolver"/> If no custom ValidationProviderResolver is defined and the validator component has been deployed into the OSGi environment, the HibernateValidationProviderResolver will be automatically used. 16.7. Example Assume we have a Java bean with the following annotations Car.java public class Car { @NotNull private String manufacturer; @NotNull @Size(min = 5, max = 14, groups = OptionalChecks.class) private String licensePlate; // getter and setter } and an interface definition for our custom validation group OptionalChecks.java public interface OptionalChecks { } With the following Camel route, only the @NotNull constraints on the attributes manufacturer and licensePlate will be validated (Camel uses the default group javax.validation.groups.Default ). 
from("direct:start") .to("bean-validator://x") .to("mock:end") If you want to check the constraints from the group OptionalChecks , you have to define the route like this from("direct:start") .to("bean-validator://x?group=OptionalChecks") .to("mock:end") If you want to check the constraints from both groups, you have to define a new interface first AllChecks.java @GroupSequence({Default.class, OptionalChecks.class}) public interface AllChecks { } and then your route definition should look like this from("direct:start") .to("bean-validator://x?group=AllChecks") .to("mock:end") And if you want to provide your own message interpolator, traversable resolver and constraint validator factory, you have to write a route like this <bean id="myMessageInterpolator" class="my.MessageInterpolator" /> <bean id="myTraversableResolver" class="my.TraversableResolver" /> <bean id="myConstraintValidatorFactory" class="my.ConstraintValidatorFactory" /> from("direct:start") .to("bean-validator://x?group=AllChecks&messageInterpolator=#myMessageInterpolator &traversableResolver=#myTraversableResolver&constraintValidatorFactory=#myConstraintValidatorFactory") .to("mock:end") It's also possible to describe your constraints as XML instead of Java annotations. In this case, you have to provide the file META-INF/validation.xml , which could look like this validation.xml <validation-config xmlns="http://jboss.org/xml/ns/javax/validation/configuration" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://jboss.org/xml/ns/javax/validation/configuration"> <default-provider>org.hibernate.validator.HibernateValidator</default-provider> <message-interpolator>org.hibernate.validator.engine.ResourceBundleMessageInterpolator</message-interpolator> <traversable-resolver>org.hibernate.validator.engine.resolver.DefaultTraversableResolver</traversable-resolver> <constraint-validator-factory>org.hibernate.validator.engine.ConstraintValidatorFactoryImpl</constraint-validator-factory> <constraint-mapping>/constraints-car.xml</constraint-mapping> </validation-config> and the constraints-car.xml file constraints-car.xml <constraint-mappings xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://jboss.org/xml/ns/javax/validation/mapping validation-mapping-1.0.xsd" xmlns="http://jboss.org/xml/ns/javax/validation/mapping"> <default-package>org.apache.camel.component.bean.validator</default-package> <bean class="CarWithoutAnnotations" ignore-annotations="true"> <field name="manufacturer"> <constraint annotation="javax.validation.constraints.NotNull" /> </field> <field name="licensePlate"> <constraint annotation="javax.validation.constraints.NotNull" /> <constraint annotation="javax.validation.constraints.Size"> <groups> <value>org.apache.camel.component.bean.validator.OptionalChecks</value> </groups> <element name="min">5</element> <element name="max">14</element> </constraint> </field> </bean> </constraint-mappings> Here is the XML syntax for the example route definition for OrderedChecks . Note that the body should include an instance of a class to validate.
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start"/> <to uri="bean-validator://x?group=org.apache.camel.component.bean.validator.OrderedChecks"/> </route> </camelContext> </beans> 16.8. Spring Boot Auto-Configuration The component supports 9 options, which are listed below. Name Description Default Type camel.component.bean-validator.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.bean-validator.constraint-validator-factory To use a custom ConstraintValidatorFactory. The option is a javax.validation.ConstraintValidatorFactory type. ConstraintValidatorFactory camel.component.bean-validator.enabled Whether to enable auto configuration of the bean-validator component. This is enabled by default. Boolean camel.component.bean-validator.ignore-xml-configuration Whether to ignore data from the META-INF/validation.xml file. false Boolean camel.component.bean-validator.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.bean-validator.message-interpolator To use a custom MessageInterpolator. The option is a javax.validation.MessageInterpolator type. MessageInterpolator camel.component.bean-validator.traversable-resolver To use a custom TraversableResolver. The option is a javax.validation.TraversableResolver type. TraversableResolver camel.component.bean-validator.validation-provider-resolver To use a custom ValidationProviderResolver. The option is a javax.validation.ValidationProviderResolver type. ValidationProviderResolver camel.component.bean-validator.validator-factory To use a custom ValidatorFactory. The option is a javax.validation.ValidatorFactory type. ValidatorFactory
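If the validation of the message body fails, the component throws an org.apache.camel.component.bean.validator.BeanValidationException, which you can handle with Camel's regular error handling. The following sketch shows one way to do this in a Spring Boot application; it is illustrative only, and the endpoint names (direct:checkCar, mock:valid, mock:invalid), the label carCheck, and the group class org.apache.camel.example.AllChecks are hypothetical placeholders rather than part of the component reference.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.bean.validator.BeanValidationException;
import org.springframework.stereotype.Component;

@Component
public class CarValidationRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Mark constraint violations as handled and divert the exchange,
        // instead of letting the route fail with an unhandled exception.
        onException(BeanValidationException.class)
            .handled(true)
            .log("Validation failed: ${exception.message}")
            .to("mock:invalid");

        from("direct:checkCar")
            // Validate the Car instance in the message body against the
            // constraints of the (hypothetical) AllChecks group sequence.
            .to("bean-validator://carCheck?group=org.apache.camel.example.AllChecks")
            .to("mock:valid");
    }
}

Because camel-bean-validator-starter auto-configures the component, registering the RouteBuilder as a Spring bean is all that is required; sending a Car with, for example, producerTemplate.sendBody("direct:checkCar", car) then reaches mock:valid when the bean is valid and is diverted to mock:invalid otherwise.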
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-bean-validator-starter</artifactId> </dependency>", "bean-validator:label[?options]", "bean-validator:label", "from(\"direct:test\"). to(\"bean-validator://ValidationProviderResolverTest?validationProviderResolver=#myValidationProviderResolver\");", "<bean id=\"myValidationProviderResolver\" class=\"org.apache.camel.component.bean.validator.HibernateValidationProviderResolver\"/>", "public class Car { @NotNull private String manufacturer; @NotNull @Size(min = 5, max = 14, groups = OptionalChecks.class) private String licensePlate; // getter and setter }", "public interface OptionalChecks { }", "from(\"direct:start\") .to(\"bean-validator://x\") .to(\"mock:end\")", "from(\"direct:start\") .to(\"bean-validator://x?group=OptionalChecks\") .to(\"mock:end\")", "@GroupSequence({Default.class, OptionalChecks.class}) public interface AllChecks { }", "from(\"direct:start\") .to(\"bean-validator://x?group=AllChecks\") .to(\"mock:end\")", "<bean id=\"myMessageInterpolator\" class=\"my.MessageInterpolator\" /> <bean id=\"myTraversableResolver\" class=\"my.TraversableResolver\" /> <bean id=\"myConstraintValidatorFactory\" class=\"my.ConstraintValidatorFactory\" />", "from(\"direct:start\") .to(\"bean-validator://x?group=AllChecks&messageInterpolator=#myMessageInterpolator &traversableResolver=#myTraversableResolver&constraintValidatorFactory=#myConstraintValidatorFactory\") .to(\"mock:end\")", "<validation-config xmlns=\"http://jboss.org/xml/ns/javax/validation/configuration\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://jboss.org/xml/ns/javax/validation/configuration\"> <default-provider>org.hibernate.validator.HibernateValidator</default-provider> <message-interpolator>org.hibernate.validator.engine.ResourceBundleMessageInterpolator</message-interpolator> <traversable-resolver>org.hibernate.validator.engine.resolver.DefaultTraversableResolver</traversable-resolver> <constraint-validator-factory>org.hibernate.validator.engine.ConstraintValidatorFactoryImpl</constraint-validator-factory> <constraint-mapping>/constraints-car.xml</constraint-mapping> </validation-config>", "<constraint-mappings xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://jboss.org/xml/ns/javax/validation/mapping validation-mapping-1.0.xsd\" xmlns=\"http://jboss.org/xml/ns/javax/validation/mapping\"> <default-package>org.apache.camel.component.bean.validator</default-package> <bean class=\"CarWithoutAnnotations\" ignore-annotations=\"true\"> <field name=\"manufacturer\"> <constraint annotation=\"javax.validation.constraints.NotNull\" /> </field> <field name=\"licensePlate\"> <constraint annotation=\"javax.validation.constraints.NotNull\" /> <constraint annotation=\"javax.validation.constraints.Size\"> <groups> <value>org.apache.camel.component.bean.validator.OptionalChecks</value> </groups> <element name=\"min\">5</element> <element name=\"max\">14</element> </constraint> </field> </bean> </constraint-mappings>", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <to 
uri=\"bean-validator://x?group=org.apache.camel.component.bean.validator.OrderedChecks\"/> </route> </camelContext> </beans>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-bean-validator-component-starter
5.6. Configuring PPP (Point-to-Point) Settings
5.6. Configuring PPP (Point-to-Point) Settings Authentication Methods In most cases, the provider's PPP servers support all the allowed authentication methods. If a connection fails, the user should disable support for some methods, depending on the PPP server configuration. Use point-to-point encryption (MPPE) Microsoft Point-To-Point Encryption protocol ( RFC 3078 ). Allow BSD data compression PPP BSD Compression Protocol ( RFC 1977 ). Allow Deflate data compression PPP Deflate Protocol ( RFC 1979 ). Use TCP header compression Compressing TCP/IP Headers for Low-Speed Serial Links ( RFC 1144 ). Send PPP echo packets LCP Echo-Request and Echo-Reply Codes for loopback tests ( RFC 1661 ). Note Since the PPP support in NetworkManager is optional, to configure PPP settings, make sure that the NetworkManager-ppp package is already installed.
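The dialog options above also have command-line equivalents: NetworkManager stores them as ppp.* properties that can be changed with nmcli. The commands below are a minimal sketch, assuming an existing broadband or PPPoE connection profile named my-ppp (a hypothetical name); check the available properties with nmcli connection show my-ppp before applying changes.

# Install the optional PPP plugin if it is not already present
~]# yum install NetworkManager-ppp

# Require MPPE encryption and send LCP echo requests every 30 seconds,
# dropping the link after 5 missed replies
~]# nmcli connection modify my-ppp ppp.require-mppe yes ppp.lcp-echo-interval 30 ppp.lcp-echo-failure 5

# Disable BSD compression, Deflate compression, and TCP (VJ) header compression
~]# nmcli connection modify my-ppp ppp.nobsdcomp yes ppp.nodeflate yes ppp.no-vj-comp yes

# Re-activate the connection so that the new PPP settings take effect
~]# nmcli connection up my-ppp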
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Configuring_PPP_Point-to-Point_Settings