title | content | commands | url
---|---|---|---|
Chapter 3. Ceph Monitor configuration
|
Chapter 3. Ceph Monitor configuration As a storage administrator, you can use the default configuration values for the Ceph Monitor or customize them according to the intended workload. Prerequisites Installation of the Red Hat Ceph Storage software. 3.1. Ceph Monitor configuration Understanding how to configure a Ceph Monitor is an important part of building a reliable Red Hat Ceph Storage cluster. All storage clusters have at least one monitor. A Ceph Monitor configuration usually remains fairly consistent, but you can add, remove, or replace a Ceph Monitor in a storage cluster. Ceph Monitors maintain a "master copy" of the cluster map, which means a Ceph client can determine the location of all Ceph Monitors and Ceph OSDs just by connecting to one Ceph Monitor and retrieving a current cluster map. Before Ceph clients can read from or write to Ceph OSDs, they must connect to a Ceph Monitor first. With a current copy of the cluster map and the CRUSH algorithm, a Ceph client can compute the location of any object. The ability to compute object locations allows a Ceph client to talk directly to Ceph OSDs, which is a very important aspect of Ceph's high scalability and performance. The primary role of the Ceph Monitor is to maintain a master copy of the cluster map. Ceph Monitors also provide authentication and logging services. Ceph Monitors write all changes in the monitor services to a single Paxos instance, and Paxos writes the changes to a key-value store for strong consistency. Ceph Monitors can query the most recent version of the cluster map during synchronization operations. Ceph Monitors leverage the key-value store's snapshots and iterators, using the rocksdb database, to perform store-wide synchronization. 3.2. Viewing the Ceph Monitor configuration database You can view Ceph Monitor configuration in the configuration database. Note Earlier releases of Red Hat Ceph Storage centralized Ceph Monitor configuration in /etc/ceph/ceph.conf . This configuration file has been deprecated as of Red Hat Ceph Storage 5. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to a Ceph Monitor host. Procedure Log into the cephadm shell. Use the ceph config command to view the configuration database: Example (a sketch of these commands appears after Section 3.3). Additional Resources For more information about the options available for the ceph config command, use ceph config -h . 3.3. Ceph cluster maps The cluster map is a composite of maps, including the monitor map, the OSD map, and the placement group map. The cluster map tracks a number of important events: Which processes are in the Red Hat Ceph Storage cluster. Which processes that are in the Red Hat Ceph Storage cluster are up and running or down . Whether the placement groups are active or inactive , and clean or in some other state. Other details that reflect the current state of the cluster, such as the total amount of storage space or the amount of storage used. When there is a significant change in the state of the cluster, for example, a Ceph OSD goes down or a placement group falls into a degraded state, the cluster map is updated to reflect the current state of the cluster. Additionally, the Ceph Monitor also maintains a history of the prior states of the cluster. The monitor map, OSD map, and placement group map each maintain a history of their map versions. Each version is called an epoch . When operating the Red Hat Ceph Storage cluster, keeping track of these states is an important part of cluster administration.
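The Example referenced in the procedure in Section 3.2 corresponds to the entries in the commands column for this row. A minimal sketch of that workflow follows; any output you see depends on your cluster's configuration.

```bash
# Enter the containerized Ceph environment
cephadm shell

# View the options stored for the monitors in the configuration database
ceph config get mon

# List the available ceph config subcommands and options
ceph config -h
```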
3.4. Ceph Monitor quorum A cluster will run sufficiently with a single monitor. However, a single monitor is a single point of failure. To ensure high availability in a production Ceph storage cluster, run Ceph with multiple monitors so that the failure of a single monitor will not cause a failure of the entire storage cluster. When a Ceph storage cluster runs multiple Ceph Monitors for high availability, Ceph Monitors use the Paxos algorithm to establish consensus about the master cluster map. A consensus requires a majority of monitors running to establish a quorum for consensus about the cluster map. For example, 1 out of 1; 2 out of 3; 3 out of 5; 4 out of 6; and so on. Red Hat recommends running a production Red Hat Ceph Storage cluster with at least three Ceph Monitors to ensure high availability. When you run multiple monitors, you can specify the initial monitors that must be members of the storage cluster to establish a quorum. This may reduce the time it takes for the storage cluster to come online. Note A majority of the monitors in the storage cluster must be able to reach each other in order to establish a quorum. You can decrease the initial number of monitors required to establish a quorum with the mon_initial_members option (see the example after Section 3.5). 3.5. Ceph Monitor consistency When you add monitor settings to the Ceph configuration file, you need to be aware of some of the architectural aspects of Ceph Monitors. Ceph imposes strict consistency requirements for a Ceph Monitor when discovering another Ceph Monitor within the cluster. Whereas Ceph clients and other Ceph daemons use the Ceph configuration file to discover monitors, monitors discover each other using the monitor map ( monmap ), not the Ceph configuration file. A Ceph Monitor always refers to the local copy of the monitor map when discovering other Ceph Monitors in the Red Hat Ceph Storage cluster. Using the monitor map instead of the Ceph configuration file avoids errors that could break the cluster, for example, typos in the Ceph configuration file when specifying a monitor address or port. Since monitors use monitor maps for discovery and they share monitor maps with clients and other Ceph daemons, the monitor map provides monitors with a strict guarantee that their consensus is valid. Strict consistency when applying updates to the monitor maps As with any other updates on the Ceph Monitor, changes to the monitor map always run through a distributed consensus algorithm called Paxos. The Ceph Monitors must agree on each update to the monitor map, such as adding or removing a Ceph Monitor, to ensure that each monitor in the quorum has the same version of the monitor map. Updates to the monitor map are incremental so that Ceph Monitors have the latest agreed-upon version and a set of previous versions. Maintaining history Maintaining a history enables a Ceph Monitor that has an older version of the monitor map to catch up with the current state of the Red Hat Ceph Storage cluster. If Ceph Monitors discovered each other through the Ceph configuration file instead of through the monitor map, it would introduce additional risks because the Ceph configuration files are not updated and distributed automatically. Ceph Monitors might inadvertently use an older Ceph configuration file, fail to recognize a Ceph Monitor, fall out of a quorum, or develop a situation where Paxos is not able to determine the current state of the system accurately.
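The mon_initial_members entry in the commands column for this chapter shows the Ceph configuration file syntax for the option described in Section 3.4. A minimal sketch follows; the monitor IDs a, b, and c are the illustrative values from that entry and should be replaced with your own monitor IDs.

```ini
[mon]
mon_initial_members = a,b,c
```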
3.6. Bootstrap the Ceph Monitor In most configuration and deployment cases, tools that deploy Ceph, such as cephadm , might help bootstrap the Ceph Monitors by generating a monitor map for you. A Ceph Monitor requires a few explicit settings: File System ID : The fsid is the unique identifier for your object store. Since you can run multiple storage clusters on the same hardware, you must specify the unique ID of the object store when bootstrapping a monitor. Deployment tools, such as cephadm , generate a file system identifier, but you can also specify the fsid manually. Monitor ID : A monitor ID is a unique ID assigned to each monitor within the cluster. By convention, the ID is set to the monitor's hostname. This option can be set using a deployment tool, using the ceph command, or in the Ceph configuration file. In the Ceph configuration file, sections are formed as follows: Example Keys : The monitor must have secret keys. Additional Resources For more information about cephadm and the Ceph orchestrator, see the Red Hat Ceph Storage Operations Guide . 3.7. Minimum configuration for a Ceph Monitor The bare minimum monitor settings for a Ceph Monitor in the Ceph configuration file include a host name for each monitor, if it is not configured for DNS, and the monitor address. Ceph Monitors run on ports 6789 and 3300 by default. Important Do not edit the Ceph configuration file. Note This minimum configuration for monitors assumes that a deployment tool generates the fsid and the mon. key for you. You can use the following commands to set or read the storage cluster configuration options (a usage sketch appears after Section 3.9): ceph config dump - Dumps the entire configuration database for the whole storage cluster. ceph config generate-minimal-conf - Generates a minimal ceph.conf file. ceph config get WHO - Dumps the configuration for a specific daemon or client, as stored in the Ceph Monitor's configuration database. ceph config set WHO OPTION VALUE - Sets the configuration option in the Ceph Monitor's configuration database. ceph config show WHO - Shows the reported running configuration for a running daemon. ceph config assimilate-conf -i INPUT_FILE -o OUTPUT_FILE - Ingests a configuration file from the input file and moves any valid options into the Ceph Monitor's configuration database. Here, the WHO parameter is the name of a section or a Ceph daemon, OPTION is the name of a configuration option, and VALUE is the value to set for that option, for example true or false . Important When a Ceph daemon needs a configuration option prior to getting the option from the configuration store, you can set the configuration by running the ceph cephadm set-extra-ceph-conf command. This command adds text to all the daemons' ceph.conf files. It is a workaround and is NOT a recommended operation. 3.8. Unique identifier for Ceph Each Red Hat Ceph Storage cluster has a unique identifier ( fsid ). If specified, it usually appears under the [global] section of the configuration file. Deployment tools usually generate the fsid and store it in the monitor map, so the value may not appear in a configuration file. The fsid makes it possible to run daemons for multiple clusters on the same hardware. Note Do not set this value if you use a deployment tool that does it for you. 3.9. Ceph Monitor data store Ceph provides a default path where Ceph Monitors store data. Important Red Hat recommends running Ceph Monitors on separate drives from Ceph OSDs for optimal performance in a production Red Hat Ceph Storage cluster.
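The following is a minimal sketch of the ceph config commands listed in Section 3.7. The option name mon_allow_pool_delete and the value false are illustrative placeholders for WHO, OPTION, and VALUE, not values prescribed by this chapter.

```bash
# Set an option for all monitors in the configuration database
# (mon_allow_pool_delete is only an example option name)
ceph config set mon mon_allow_pool_delete false

# Read the option back for the same section
ceph config get mon mon_allow_pool_delete

# Dump every option stored in the configuration database
ceph config dump

# Generate a minimal ceph.conf from the current database
ceph config generate-minimal-conf
```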
Note A dedicated /var/lib/ceph partition should be used for the MON database, with a size between 50 and 100 GB. Ceph Monitors call the fsync() function often, which can interfere with Ceph OSD workloads. Ceph Monitors store their data as key-value pairs. Using a data store prevents recovering Ceph Monitors from running corrupted versions through Paxos, and it enables multiple modification operations in one single atomic batch, among other advantages. Important Red Hat does not recommend changing the default data location. If you modify the default location, make it uniform across Ceph Monitors by setting it in the [mon] section of the configuration file. 3.10. Ceph storage capacity When a Red Hat Ceph Storage cluster gets close to its maximum capacity (specified by the mon_osd_full_ratio parameter), Ceph prevents you from writing to or reading from Ceph OSDs as a safety measure to prevent data loss. Therefore, letting a production Red Hat Ceph Storage cluster approach its full ratio is not a good practice, because it sacrifices high availability. The default full ratio is .95 , or 95% of capacity. This is a very aggressive setting for a test cluster with a small number of OSDs. Tip When monitoring a cluster, be alert to warnings related to the nearfull ratio. Such a warning means that a failure of one or more OSDs could result in a temporary service disruption. Consider adding more OSDs to increase storage capacity. A common scenario for test clusters involves a system administrator removing a Ceph OSD from the Red Hat Ceph Storage cluster to watch the cluster re-balance, then removing another Ceph OSD, and so on, until the Red Hat Ceph Storage cluster eventually reaches the full ratio and locks up. Important Red Hat recommends a bit of capacity planning even with a test cluster. Planning enables you to gauge how much spare capacity you will need in order to maintain high availability. Ideally, you want to plan for a series of Ceph OSD failures where the cluster can recover to an active + clean state without replacing those Ceph OSDs immediately. You can run a cluster in an active + degraded state, but this is not ideal for normal operating conditions. The following diagram depicts a simplistic Red Hat Ceph Storage cluster containing 33 Ceph nodes with one Ceph OSD per host, each Ceph OSD daemon reading from and writing to a 3 TB drive. So this exemplary Red Hat Ceph Storage cluster has a maximum actual capacity of 99 TB. With a mon_osd_full_ratio of 0.95 , if the Red Hat Ceph Storage cluster falls to 5 TB of remaining capacity, the cluster will not allow Ceph clients to read or write data. So the Red Hat Ceph Storage cluster's operating capacity is 95 TB, not 99 TB. It is normal in such a cluster for one or two OSDs to fail. A less frequent but reasonable scenario involves a rack's router or power supply failing, which brings down multiple OSDs simultaneously, for example, OSDs 7-12. In such a scenario, you should still strive for a cluster that can remain operational and achieve an active + clean state, even if that means adding a few hosts with additional OSDs in short order. If your capacity utilization is too high, you might not lose data, but you could still sacrifice data availability while resolving an outage within a failure domain if capacity utilization of the cluster exceeds the full ratio. For this reason, Red Hat recommends at least some rough capacity planning (a worked sketch of the scenario above follows).
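A minimal worked sketch of the planning arithmetic for the 33-node, 99 TB scenario described above, assuming two simultaneous OSD failures during normal operations; the numbers are illustrative, not prescriptive.

```bash
# 33 OSDs at 3 TB each: total raw capacity
echo "33 * 3" | bc                        # 99 TB

# Mean capacity per OSD
echo "scale=2; 99 / 33" | bc              # 3.00 TB

# Maximum operating capacity at a full ratio of 0.95
echo "scale=2; 99 * 0.95" | bc            # 94.05 TB (roughly the 95 TB cited above)

# Subtract the capacity of the OSDs you expect to fail (here, 2 x 3 TB)
echo "scale=2; 99 * 0.95 - 2 * 3" | bc    # 88.05 TB as a reasonable working target
```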
Identify two numbers for your cluster: the number of OSDs and the total capacity of the cluster. To determine the mean average capacity of an OSD within a cluster, divide the total capacity of the cluster by the number of OSDs in the cluster. Consider multiplying that number by the number of OSDs you expect to fail simultaneously during normal operations (a relatively small number). Finally, multiply the capacity of the cluster by the full ratio to arrive at a maximum operating capacity. Then, subtract the amount of data from the OSDs you expect to fail to arrive at a reasonable full ratio. Repeat the foregoing process with a higher number of OSD failures (for example, a rack of OSDs) to arrive at a reasonable number for a near full ratio. 3.11. Ceph heartbeat Ceph Monitors know about the cluster by requiring reports from each OSD, and by receiving reports from OSDs about the status of their neighboring OSDs. Ceph provides reasonable default settings for interaction between monitors and OSDs; however, you can modify them as needed. 3.12. Ceph Monitor synchronization role When you run a production cluster with multiple monitors, which is recommended, each monitor checks to see if a neighboring monitor has a more recent version of the cluster map. For example, a neighboring monitor might have a map with one or more epoch numbers higher than the most current epoch in the map of the monitor performing the check. Periodically, one monitor in the cluster might fall behind the other monitors to the point where it must leave the quorum, synchronize to retrieve the most current information about the cluster, and then rejoin the quorum. Synchronization roles For the purposes of synchronization, monitors can assume one of three roles: Leader : The Leader is the first monitor to achieve the most recent Paxos version of the cluster map. Provider : The Provider is a monitor that has the most recent version of the cluster map, but was not the first to achieve the most recent version. Requester : The Requester is a monitor that has fallen behind the leader and must synchronize to retrieve the most recent information about the cluster before it can rejoin the quorum. These roles enable a leader to delegate synchronization duties to a provider, which prevents synchronization requests from overloading the leader and improves performance. In the following diagram, the requester has learned that it has fallen behind the other monitors. The requester asks the leader to synchronize, and the leader tells the requester to synchronize with a provider. Monitor synchronization Synchronization always occurs when a new monitor joins the cluster. During runtime operations, monitors can receive updates to the cluster map at different times. This means the leader and provider roles may migrate from one monitor to another. If this happens while synchronizing, for example, a provider falls behind the leader, the provider can terminate synchronization with a requester. Once synchronization is complete, Ceph requires trimming across the cluster. Trimming requires that the placement groups are active + clean . 3.13. Ceph time synchronization Ceph daemons pass critical messages to each other, which must be processed before daemons reach a timeout threshold. If the clocks in Ceph Monitors are not synchronized, it can lead to a number of anomalies. For example: Daemons ignoring received messages, for example messages with outdated timestamps. Timeouts triggered too soon or too late when a message is not received in time.
Tip Install NTP on the Ceph Monitor hosts to ensure that the monitor cluster operates with synchronized clocks. Clock drift may still be noticeable with NTP even though the discrepancy is not yet harmful. Ceph clock drift and clock skew warnings can get triggered even though NTP maintains a reasonable level of synchronization (a sketch of how to check for these warnings follows). Increased clock drift may be tolerable under such circumstances. However, a number of factors such as workload, network latency, configuring overrides to default timeouts, and other synchronization options can influence the level of acceptable clock drift without compromising Paxos guarantees. Additional Resources See the section on Ceph time synchronization for more details. See all the Red Hat Ceph Storage Monitor configuration options in Ceph Monitor configuration options for specific option descriptions and usage.
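A minimal sketch, offered as a general check rather than a step from this chapter, of how you might confirm whether the cluster is currently reporting clock skew warnings; any output is illustrative and depends on your cluster state.

```bash
# Overall cluster health; clock skew between monitors is reported as a health warning
ceph status

# More detail on any active warnings, including which monitors are affected
ceph health detail
```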
|
[
"cephadm shell",
"ceph config get mon",
"[mon] mon_initial_members = a,b,c",
"[mon.host1] [mon.host2]",
"ceph cephadm set-extra-ceph-conf"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/configuration_guide/ceph-monitor-configuration
|
8.212. scsi-target-utils
|
8.212. scsi-target-utils 8.212.1. RHBA-2014:1599 - scsi-target-utils bug fix update Updated scsi-target-utils packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The scsi-target-utils packages contain a daemon and utilities to set up Small Computer System Interface (SCSI) targets. Currently, software Internet SCSI (iSCSI) and iSCSI Extensions for RDMA (iSER) targets are supported. Bug Fixes BZ# 848585 Previously, the tgtadm SCSI target administration utility did not correctly handle backing-store errors. As a consequence, calling tgtadm with an invalid backing-store parameter in some cases caused the tgtd service to become unresponsive. With this update, the bug in tgtadm has been fixed, and tgtd now recovers from an invalid request as intended. BZ# 854123 Prior to this update, tgtadm failed to handle setting a device to pass-through mode. As a consequence, calling tgtadm with the device-type option set to "passthrough" caused tgtd on the server side to terminate unexpectedly with a segmentation fault. A patch has been applied to fix this bug, and tgtadm no longer crashes in the described scenario. BZ# 865960 Prior to this update, running the "tgtadm --mode target --op show" command did not return the complete number of targets if many targets were present on the system. Consequently, tgtadm could show incorrect and also inconsistent results, because the displayed number of targets varied over repeated attempts. A patch has been applied to fix this bug. Running "tgtadm --mode target --op show" now shows all the targets correctly even on systems with a large number of targets. BZ# 1094084 Previously, scsi-target-utils did not support the "WRITE and VERIFY (10)" SCSI command, which is used by the AIX operating system. As a consequence, AIX failed to execute the mkvg command when the user tried to add iSCSI targets to the system. With this update, support for "WRITE and VERIFY (10)" has been added, and scsi-target-utils now provides iSCSI targets to AIX as expected. BZ# 1123438 Previously, tgtd could experience a buffer overflow due to incorrect usage of the snprintf() function in the source code. As a consequence, tgtd terminated unexpectedly when trying to respond to a tgtadm query about a large number of connections. The source code has been updated to avoid the buffer overflow, and using tgtadm to display a large number of connections no longer causes tgtd to crash. Users of scsi-target-utils are advised to upgrade to these updated packages, which fix these bugs. All running scsi-target-utils services must be restarted for the update to take effect.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/scsi-target-utils
|
Chapter 5. Upgrading Fuse applications on Spring Boot standalone
|
Chapter 5. Upgrading Fuse applications on Spring Boot standalone To upgrade your Fuse applications on Spring Boot: You should consider Apache Camel updates as described in Section 5.1, "Camel migration considerations" . You must update your Fuse project's Maven dependencies to ensure that you are using the correct version of Fuse. Typically, you use Maven to build Fuse applications. Maven is a free and open source build tool from Apache. Maven configuration is defined in a Fuse application project's pom.xml file. While building a Fuse project, the default behavior is that Maven searches external repositories and downloads the required artifacts. You add a dependency for the Fuse Bill of Materials (BOM) to the pom.xml file so that the Maven build process picks up the correct set of Fuse supported artifacts. The following sections provide information on Maven dependencies and how to update them in your Fuse projects. Section 5.2, "About Maven dependencies" Section 5.3, "Updating your Fuse project's Maven dependencies" 5.1. Camel migration considerations Creating a connection to MongoDB using the MongoClients factory From Fuse 7.12, use com.mongodb.client.MongoClient instead of com.mongodb.MongoClient to create a connection to MongoDB (note the extra .client sub-package in the full path). If any of your existing Fuse applications use the camel-mongodb component, you must: Update your applications to create the connection bean as a com.mongodb.client.MongoClient instance. For example, create a connection to MongoDB as follows: You can then create the MongoClient bean as shown in following example: Evaluate and, if needed, refactor any code related to the methods exposed by the MongoClient class. Camel 2.23 Red Hat Fuse uses Apache Camel 2.23. You should consider the following updates to Camel 2.22 and 2.23 when you upgrade to Fuse 7.8. Camel 2.22 updates Camel has upgraded from Spring Boot v1 to v2 and therefore v1 is no longer supported. Upgraded to Spring Framework 5. Camel should work with Spring 4.3.x as well, but going forward Spring 5.x will be the minimum Spring version in future releases. Upgraded to Karaf 4.2. You may run Camel on Karaf 4.1 but we only officially support Karaf 4.2 in this release. Optimized using toD DSL to reuse endpoints and producers for components where it is possible. For example, HTTP based components will now reuse producer (HTTP clients) with dynamic URIs sending to the same host. The File2 consumer with read-lock idempotent/idempotent-changed can now be configured to delay the release tasks to expand the window when a file is regarded as in-process, which is usable in active/active cluster settings with a shared idempotent repository to ensure other nodes don't too quickly see a processed file as a file they can process (only needed if you have readLockRemoveOnCommit=true). Allow to plugin a custom request/reply correlation id manager implementation on Netty4 producer in request/reply mode. The Twitter component now uses extended mode by default to support tweets greater than 140 characters Rest DSL producer now supports being configured in REST configuration by using endpointProperties. The Kafka component now supports HeaderFilterStrategy to plugin custom implementations for controlling header mappings between Camel and Kafka messages. REST DSL now supports client request validation to validate that Content-Type/Accept headers are possible for the REST service. 
Camel now has a Service Registry SPI which allows you to register routes to a service registry (such as consul, etcd, or zookeeper) by using a Camel implementation or Spring Cloud. The SEDA component now has a default queue size of 1000 instead of unlimited. The following noteworthy issues have been fixed: Fixed a CXF continuation timeout issue with the camel-cxf consumer that could cause the consumer to return a response with data instead of triggering a timeout to the calling SOAP client. Fixed the camel-cxf consumer not releasing the UoW when using a robust one-way operation. Fixed using AdviceWith and weave methods on onException and similar definitions not working. Fixed Splitter in parallel processing and streaming mode possibly blocking while iterating the message body when the iterator throws an exception in the first invoked next() method call. Fixed the Kafka consumer to not auto commit if autoCommitEnable=false. Fixed the file consumer using markerFile as read-lock by default, which should have been none. Fixed using manual commit with Kafka to provide the current record offset and not the previous one (and -1 for the first). Fixed Content Based Router in Java DSL possibly not resolving property placeholders in when predicates. Camel 2.23 updates Upgraded to Spring Boot 2.1. Additional component-level options can now be configured by using spring-boot auto-configuration. These options are included in the spring-boot component metadata JSON file descriptor for tooling assistance. Added a documentation section that includes all the Spring Boot auto configuration options for all the components, data-formats, and languages. All the Camel Spring Boot starter JARs now include a META-INF/spring-autoconfigure-metadata.properties file in their JARs to optimize Spring Boot auto-configuration. The Throttler now supports correlation groups based on dynamic expressions so that you can group messages into different throttled sets. The Hystrix EIP now allows inheritance for Camel's error handler so that you can retry the entire Hystrix EIP block again if you have enabled error handling with redeliveries. SQL and ElSql consumers now support dynamic query parameters in route form. Note that this feature is limited to calling beans by using simple expressions. The swagger-restdsl maven plugin now supports generating DTO model classes from the Swagger specification file. The following noteworthy issues have been fixed: The Aggregator2 has been fixed to not propagate control headers for forcing completion of all groups, so it will not happen again if another aggregator EIP is in use later during routing. Fixed Tracer not working if redelivery was activated in the error handler. The built-in type converter for XML Documents may output parsing errors to stdout, which has now been fixed to output by using the logging API. Fixed SFTP writing files by using the charset option not working if the message body was streaming-based. Fixed Zipkin root id to not be reused when routing over multiple routes to group them together into a single parent span. Fixed optimized toD when using HTTP endpoints having a bug when the hostname contains an IP address with digits. Fixed an issue with RabbitMQ with request/reply over temporary queues and using manual acknowledge mode. It would not acknowledge the temporary queue (which is needed to make request/reply possible). Fixed various HTTP consumer components that may not return all allowed HTTP verbs in the Allow header for OPTIONS requests (such as when using rest-dsl). Fixed the thread-safety issue with FluentProducerTemplate. 5.2. 
About Maven dependencies The purpose of a Maven Bill of Materials (BOM) file is to provide a curated set of Maven dependency versions that work well together, saving you from having to define versions individually for every Maven artifact. There is a dedicated BOM file for each container in which Fuse runs. Note You can find these BOM files here: https://github.com/jboss-fuse/redhat-fuse . Alternatively, go to the latest Release Notes for information on BOM file updates. The Fuse BOM offers the following advantages: Defines versions for Maven dependencies, so that you do not need to specify the version when you add a dependency to your pom.xml file. Defines a set of curated dependencies that are fully tested and supported for a specific version of Fuse. Simplifies upgrades of Fuse. Important Only the set of dependencies defined by a Fuse BOM are supported by Red Hat. 5.3. Updating your Fuse project's Maven dependencies To upgrade your Fuse application for Spring Boot, update your project's Maven dependencies. Procedure Open your project's pom.xml file. Add a dependencyManagement element in your project's pom.xml file (or, possibly, in a parent pom.xml file), as shown in the following example: <?xml version="1.0" encoding="UTF-8" standalone="no"?> <project ...> ... <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <!-- configure the versions you want to use here --> <fuse.version>7.13.0.fuse-7_13_0-00012-redhat-00001</fuse.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>fuse-springboot-bom</artifactId> <version>${fuse.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> ... </project> Note Ensure you update your Spring Boot version as well. This is typically found under the Fuse version in the pom.xml file: <properties> <!-- configure the versions you want to use here --> <fuse.version>7.13.0.fuse-7_13_0-00012-redhat-00001</fuse.version> <spring-boot.version>2.7.18</spring-boot.version> </properties> Save your pom.xml file. After you specify the BOM as a dependency in your pom.xml file, it becomes possible to add Maven dependencies to your pom.xml file without specifying the version of the artifact. For example, to add a dependency for the camel-velocity component, you would add the following XML fragment to the dependencies element in your pom.xml file: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-velocity</artifactId> <scope>provided</scope> </dependency> Note how the version element is omitted from this dependency definition.
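After updating the BOM and dependency versions, rebuilding the project verifies that the new versions resolve. A minimal sketch follows; the exact Maven goals depend on your project, and mvn is assumed to be available on your PATH.

```bash
# Rebuild the Fuse application against the updated BOM
mvn clean package

# Optionally run the application locally with the Spring Boot Maven plugin
mvn spring-boot:run
```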
|
[
"import com.mongodb.client.MongoClient;",
"return MongoClients.create(\"mongodb://admin:[email protected]:32553\");",
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?> <project ...> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <!-- configure the versions you want to use here --> <fuse.version>7.13.0.fuse-7_13_0-00012-redhat-00001</fuse.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>fuse-springboot-bom</artifactId> <version>USD{fuse.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> </project>",
"<properties> <!-- configure the versions you want to use here --> <fuse.version>7.13.0.fuse-7_13_0-00012-redhat-00001</fuse.version> <spring-boot.version>2.7.18</spring-boot.version> </properties>",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-velocity</artifactId> <scope>provided</scope> </dependency>"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/migration_guide/upgrading-fuse-applications-on-spring-boot-standalone
|
Chapter 6. Configuring network settings after installing OpenStack
|
Chapter 6. Configuring network settings after installing OpenStack You can configure network settings for an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) cluster after installation. 6.1. Configuring application access with floating IP addresses After you install OpenShift Container Platform, configure Red Hat OpenStack Platform (RHOSP) to allow application network traffic. Note You do not need to perform this procedure if you provided values for platform.openstack.apiFloatingIP and platform.openstack.ingressFloatingIP in the install-config.yaml file, or os_api_fip and os_ingress_fip in the inventory.yaml playbook, during installation. The floating IP addresses are already set. Prerequisites The OpenShift Container Platform cluster must be installed. Floating IP addresses are enabled as described in the OpenShift Container Platform on RHOSP installation documentation. Procedure After you install the OpenShift Container Platform cluster, attach a floating IP address to the ingress port: Show the port: $ openstack port show <cluster_name>-<cluster_ID>-ingress-port Attach the port to the IP address: $ openstack floating ip set --port <ingress_port_ID> <apps_FIP> Add a wildcard A record for *.apps. to your DNS file: *.apps.<cluster_name>.<base_domain> IN A <apps_FIP> Note If you do not control the DNS server but want to enable application access for non-production purposes, you can add these hostnames to /etc/hosts : <apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain> 6.2. Enabling OVS hardware offloading For clusters that run on Red Hat OpenStack Platform (RHOSP), you can enable Open vSwitch (OVS) hardware offloading. OVS is a multi-layer virtual switch that enables large-scale, multi-server network virtualization. Prerequisites You installed a cluster on RHOSP that is configured for single-root input/output virtualization (SR-IOV). You installed the SR-IOV Network Operator on your cluster. You created two hw-offload type virtual function (VF) interfaces on your cluster. Note Application layer gateway flows are broken in OpenShift Container Platform versions 4.10, 4.11, and 4.12. Also, you cannot offload the application layer gateway flow for OpenShift Container Platform version 4.13. Procedure Create an SriovNetworkNodePolicy policy for the two hw-offload type VF interfaces that are on your cluster: The first virtual function interface apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: "hwoffload9" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens6 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: "hwoffload9" 1 Insert the SriovNetworkNodePolicy value here. 2 Both interfaces must include physical function (PF) names. 
The second virtual function interface apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: "hwoffload10" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens5 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: "hwoffload10" 1 Insert the SriovNetworkNodePolicy value here. 2 Both interfaces must include physical function (PF) names. Create NetworkAttachmentDefinition resources for the two interfaces: A NetworkAttachmentDefinition resource for the first interface apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 name: hwoffload9 namespace: default spec: config: '{ "cniVersion":"0.3.1", "name":"hwoffload9","type":"host-device","device":"ens6" }' A NetworkAttachmentDefinition resource for the second interface apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 name: hwoffload10 namespace: default spec: config: '{ "cniVersion":"0.3.1", "name":"hwoffload10","type":"host-device","device":"ens5" }' Use the interfaces that you created with a pod. For example: A pod that uses the two OVS offload interfaces apiVersion: v1 kind: Pod metadata: name: dpdk-testpmd namespace: default annotations: irq-load-balancing.crio.io: disable cpu-quota.crio.io: disable k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 spec: restartPolicy: Never containers: - name: dpdk-testpmd image: quay.io/krister/centos8_nfv-container-dpdk-testpmd:latest 6.3. Attaching an OVS hardware offloading network You can attach an Open vSwitch (OVS) hardware offloading network to your cluster. Prerequisites Your cluster is installed and running. You provisioned an OVS hardware offloading network on Red Hat OpenStack Platform (RHOSP) to use with your cluster. Procedure Create a file named network.yaml from the following template: spec: additionalNetworks: - name: hwoffload1 namespace: cnf rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "hwoffload1", "type": "host-device","pciBusId": "0000:00:05.0", "ipam": {}}' 1 type: Raw where: pciBusId Specifies the device that is connected to the offloading network. If you do not have it, you can find this value by running the following command: $ oc describe SriovNetworkNodeState -n openshift-sriov-network-operator From a command line, enter the following command to patch your cluster with the file: $ oc apply -f network.yaml 6.4. Enabling IPv6 connectivity to pods on RHOSP To enable IPv6 connectivity between pods that have additional networks that are on different nodes, disable port security for the IPv6 port of the server. Disabling port security obviates the need to create allowed address pairs for each IPv6 address that is assigned to pods and enables traffic on the security group. Important Only the following IPv6 additional network configurations are supported: SLAAC and host-device SLAAC and MACVLAN DHCP stateless and host-device DHCP stateless and MACVLAN Procedure On a command line, enter the following command: $ openstack port set --no-security-group --disable-port-security <compute_ipv6_port> 1 1 Specify the IPv6 port of the compute server. Important This command removes security groups from the port and disables port security. Traffic restrictions are removed entirely from the port. 
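As a sanity check that is not part of the documented procedure (an assumption on my part), you can confirm that port security was removed from the IPv6 port by showing the port again; the column names selected below are the ones the openstack client normally reports for ports.

```bash
$ openstack port show <compute_ipv6_port> -c port_security_enabled -c security_group_ids
```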
6.5. Creating pods that have IPv6 connectivity on RHOSP After you enable IPv6 connectivity for pods and add it to them, create pods that have secondary IPv6 connections. Procedure Define pods that use your IPv6 namespace and the annotation k8s.v1.cni.cncf.io/networks: <additional_network_name> , where <additional_network_name> is the name of the additional network. For example, as part of a Deployment object: apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift namespace: ipv6 spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - hello-openshift replicas: 2 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift annotations: k8s.v1.cni.cncf.io/networks: ipv6 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: hello-openshift securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL image: quay.io/openshift/origin-hello-openshift ports: - containerPort: 8080 Create the pod. For example, on a command line, enter the following command: $ oc create -f <ipv6_enabled_resource> 1 1 Specify the file that contains your resource definition. 6.6. Adding IPv6 connectivity to pods on RHOSP After you enable IPv6 connectivity in pods, add connectivity to them by using a Container Network Interface (CNI) configuration. Procedure To edit the Cluster Network Operator (CNO), enter the following command: $ oc edit networks.operator.openshift.io cluster Specify your CNI configuration under the spec field. For example, the following configuration uses a SLAAC address mode with MACVLAN: ... spec: additionalNetworks: - name: ipv6 namespace: ipv6 1 rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "ipv6", "type": "macvlan", "master": "ens4"}' 2 type: Raw 1 Be sure to create pods in the same namespace. 2 The interface in the network attachment "master" field can differ from "ens4" when more networks are configured or when a different kernel driver is used. Note If you are using stateful address mode, include the IP Address Management (IPAM) in the CNI configuration. DHCPv6 is not supported by Multus. Save your changes and quit the text editor to commit your changes. Verification On a command line, enter the following command: $ oc get network-attachment-definitions -A Example output NAMESPACE NAME AGE ipv6 ipv6 21h You can now create pods that have secondary IPv6 connections.
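A brief verification sketch, assuming the Deployment above was created in the ipv6 namespace; these are generic OpenShift CLI checks rather than steps from this chapter, and the annotation name shown is the one Multus typically sets on attached pods.

```bash
$ oc get pods -n ipv6 -o wide

# Inspect the network-status annotation on one pod to confirm the secondary IPv6 attachment
$ oc get pod <pod_name> -n ipv6 -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'
```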
|
[
"openstack port show <cluster_name>-<cluster_ID>-ingress-port",
"openstack floating ip set --port <ingress_port_ID> <apps_FIP>",
"*.apps.<cluster_name>.<base_domain> IN A <apps_FIP>",
"<apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain>",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: \"hwoffload9\" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens6 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: \"hwoffload9\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: \"hwoffload10\" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens5 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: \"hwoffload10\"",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 name: hwoffload9 namespace: default spec: config: '{ \"cniVersion\":\"0.3.1\", \"name\":\"hwoffload9\",\"type\":\"host-device\",\"device\":\"ens6\" }'",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 name: hwoffload10 namespace: default spec: config: '{ \"cniVersion\":\"0.3.1\", \"name\":\"hwoffload10\",\"type\":\"host-device\",\"device\":\"ens5\" }'",
"apiVersion: v1 kind: Pod metadata: name: dpdk-testpmd namespace: default annotations: irq-load-balancing.crio.io: disable cpu-quota.crio.io: disable k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 spec: restartPolicy: Never containers: - name: dpdk-testpmd image: quay.io/krister/centos8_nfv-container-dpdk-testpmd:latest",
"spec: additionalNetworks: - name: hwoffload1 namespace: cnf rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"hwoffload1\", \"type\": \"host-device\",\"pciBusId\": \"0000:00:05.0\", \"ipam\": {}}' 1 type: Raw",
"oc describe SriovNetworkNodeState -n openshift-sriov-network-operator",
"oc apply -f network.yaml",
"openstack port set --no-security-group --disable-port-security <compute_ipv6_port> 1",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift namespace: ipv6 spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - hello-openshift replicas: 2 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift annotations: k8s.v1.cni.cncf.io/networks: ipv6 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: hello-openshift securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL image: quay.io/openshift/origin-hello-openshift ports: - containerPort: 8080",
"oc create -f <ipv6_enabled_resource> 1",
"oc edit networks.operator.openshift.io cluster",
"spec: additionalNetworks: - name: ipv6 namespace: ipv6 1 rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"ipv6\", \"type\": \"macvlan\", \"master\": \"ens4\"}' 2 type: Raw",
"oc get network-attachment-definitions -A",
"NAMESPACE NAME AGE ipv6 ipv6 21h"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_openstack/installing-openstack-network-config
|
25.4.2. Compatibility with sysklogd
|
25.4.2. Compatibility with sysklogd The compatibility mode specified via the -c option exists in rsyslog version 5 but not in version 7. Also, the sysklogd-style command-line options are deprecated and configuring rsyslog through these command-line options should be avoided. However, you can use several templates and directives to configure rsyslogd to emulate sysklogd-like behavior. For more information on various rsyslogd options, see the rsyslogd(8) manual page.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-compatibility_with_sysklogd
|
Chapter 6. Configuring TLS
|
Chapter 6. Configuring TLS Transport Layer Security (short: TLS) is crucial to exchange data over a secured channel. For production environments, you should never expose Red Hat build of Keycloak endpoints through HTTP, as sensitive data is at the core of what Red Hat build of Keycloak exchanges with other applications. In this chapter, you will learn how to configure Red Hat build of Keycloak to use HTTPS/TLS. Red Hat build of Keycloak can be configured to load the required certificate infrastructure using files in PEM format or from a Java Keystore. When both alternatives are configured, the PEM files take precedence over the Java Keystores. 6.1. Providing certificates in PEM format When you use a pair of matching certificate and private key files in PEM format, you configure Red Hat build of Keycloak to use them by running the following command: bin/kc.[sh|bat] start --https-certificate-file=/path/to/certfile.pem --https-certificate-key-file=/path/to/keyfile.pem Red Hat build of Keycloak creates a keystore out of these files in memory and uses this keystore afterwards. 6.2. Providing a Java Keystore When no keystore file is explicitly configured, but http-enabled is set to false, Red Hat build of Keycloak looks for a conf/server.keystore file. As an alternative, you can use an existing keystore by running the following command: bin/kc.[sh|bat] start --https-key-store-file=/path/to/existing-keystore-file 6.2.1. Setting the Keystore password You can set a secure password for your keystore using the https-key-store-password option: bin/kc.[sh|bat] start --https-key-store-password=<value> If no password is set, the default password password is used. 6.2.1.1. Securing credentials Avoid setting a password in plaintext by using the CLI or adding it to the conf/keycloak.conf file. Instead, use good practices such as using a vault or mounted secret. For more detail, see Using a vault and Configuring Red Hat build of Keycloak for production . 6.3. Configuring TLS protocols By default, Red Hat build of Keycloak does not enable deprecated TLS protocols. If your client supports only deprecated protocols, consider upgrading the client. However, as a temporary work-around, you can enable deprecated protocols by running the following command: bin/kc.[sh|bat] start --https-protocols=<protocol>[,<protocol>] To also allow TLSv1.2, use a command such as the following: kc.sh start --https-protocols=TLSv1.3,TLSv1.2 . 6.4. Switching the HTTPS port Red Hat build of Keycloak listens for HTTPS traffic on port 8443 . To change this port, use the following command: bin/kc.[sh|bat] start --https-port=<port> 6.5. Certificate and Key Reloading By default, Red Hat build of Keycloak reloads the certificates, keys, and keystores specified in https-* options every hour. For environments where your server keys may need frequent rotation, this allows that to happen without a server restart. You may override the default via the https-certificates-reload-period option. Interval on which to reload key store, trust store, and certificate files referenced by https-* options. The value may be a java.time.Duration value, an integer number of seconds, or an integer followed by one of the time units [ ms , h , m , s , d ]. Must be greater than 30 seconds. Use -1 to disable. 6.6. Relevant options http-enabled Enables the HTTP listener. CLI: --http-enabled Env: KC_HTTP_ENABLED true , false (default) https-certificate-file The file path to a server certificate or certificate chain in PEM format. 
CLI: --https-certificate-file Env: KC_HTTPS_CERTIFICATE_FILE https-certificate-key-file The file path to a private key in PEM format. CLI: --https-certificate-key-file Env: KC_HTTPS_CERTIFICATE_KEY_FILE https-certificates-reload-period Interval on which to reload key store, trust store, and certificate files referenced by https-* options. May be a java.time.Duration value, an integer number of seconds, or an integer followed by one of [ms, h, m, s, d]. Must be greater than 30 seconds. Use -1 to disable. CLI: --https-certificates-reload-period Env: KC_HTTPS_CERTIFICATES_RELOAD_PERIOD 1h (default) https-cipher-suites The cipher suites to use. If none is given, a reasonable default is selected. CLI: --https-cipher-suites Env: KC_HTTPS_CIPHER_SUITES https-key-store-file The key store which holds the certificate information instead of specifying separate files. CLI: --https-key-store-file Env: KC_HTTPS_KEY_STORE_FILE https-key-store-password The password of the key store file. CLI: --https-key-store-password Env: KC_HTTPS_KEY_STORE_PASSWORD password (default) https-key-store-type The type of the key store file. If not given, the type is automatically detected based on the file extension. If fips-mode is set to strict and no value is set, it defaults to BCFKS . CLI: --https-key-store-type Env: KC_HTTPS_KEY_STORE_TYPE https-port The used HTTPS port. CLI: --https-port Env: KC_HTTPS_PORT 8443 (default) https-protocols The list of protocols to explicitly enable. CLI: --https-protocols Env: KC_HTTPS_PROTOCOLS [TLSv1.3,TLSv1.2] (default) https-management-certificate-file The file path to a server certificate or certificate chain in PEM format for the management server. If not given, the value is inherited from HTTP options. Relevant only when something is exposed on the management interface - see the guide for details. CLI: --https-management-certificate-file Env: KC_HTTPS_MANAGEMENT_CERTIFICATE_FILE https-management-certificate-key-file The file path to a private key in PEM format for the management server. If not given, the value is inherited from HTTP options. Relevant only when something is exposed on the management interface - see the guide for details. CLI: --https-management-certificate-key-file Env: KC_HTTPS_MANAGEMENT_CERTIFICATE_KEY_FILE https-management-key-store-file The key store which holds the certificate information instead of specifying separate files for the management server. If not given, the value is inherited from HTTP options. Relevant only when something is exposed on the management interface - see the guide for details. CLI: --https-management-key-store-file Env: KC_HTTPS_MANAGEMENT_KEY_STORE_FILE https-management-key-store-password The password of the key store file for the management server. If not given, the value is inherited from HTTP options. Relevant only when something is exposed on the management interface - see the guide for details. CLI: --https-management-key-store-password Env: KC_HTTPS_MANAGEMENT_KEY_STORE_PASSWORD password (default)
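Putting several of the options above together, a start command might look like the following. This is a sketch only; the file paths are placeholders, the port is the documented default, and the option names are the ones listed in this chapter.

```bash
bin/kc.[sh|bat] start \
  --https-certificate-file=/path/to/certfile.pem \
  --https-certificate-key-file=/path/to/keyfile.pem \
  --https-port=8443 \
  --https-protocols=TLSv1.3,TLSv1.2
```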
|
[
"bin/kc.[sh|bat] start --https-certificate-file=/path/to/certfile.pem --https-certificate-key-file=/path/to/keyfile.pem",
"bin/kc.[sh|bat] start --https-key-store-file=/path/to/existing-keystore-file",
"bin/kc.[sh|bat] start --https-key-store-password=<value>",
"bin/kc.[sh|bat] start --https-protocols=<protocol>[,<protocol>]",
"bin/kc.[sh|bat] start --https-port=<port>"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_configuration_guide/enabletls-
|
Networking Guide
|
Networking Guide Red Hat OpenStack Platform 16.2 An advanced guide to Red Hat OpenStack Platform Networking OpenStack Documentation Team [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/networking_guide/index
|
4.3. sVirt Configuration
|
4.3. sVirt Configuration SELinux Booleans are variables that can be toggled on or off, quickly enabling or disabling features or other special conditions. Booleans can be toggled by running either setsebool boolean_name {on|off} for a temporary change, or setsebool -P boolean_name {on|off} to make the change persistent across reboots. The following table shows the SELinux Boolean values that affect KVM when launched by libvirt. The current state of these Booleans (on or off) can be found by running the command getsebool -a|grep virt . Table 4.1. KVM SELinux Booleans SELinux Boolean Description staff_use_svirt Enables staff users to create and transition to sVirt domains. unprivuser_use_svirt Enables unprivileged users to create and transition to sVirt domains. virt_sandbox_use_audit Enables sandbox containers to send audit messages. virt_sandbox_use_netlink Enables sandbox containers to use netlink system calls. virt_sandbox_use_sys_admin Enables sandbox containers to use sys_admin system calls, such as mount. virt_transition_userdomain Enables virtual processes to run as user domains. virt_use_comm Enables virt to use serial/parallel communication ports. virt_use_execmem Enables confined virtual guests to use executable memory and executable stack. virt_use_fusefs Enables virt to read FUSE mounted files. virt_use_nfs Enables virt to manage NFS mounted files. virt_use_rawip Enables virt to interact with rawip sockets. virt_use_samba Enables virt to manage CIFS mounted files. virt_use_sanlock Enables confined virtual guests to interact with sanlock. virt_use_usb Enables virt to use USB devices. virt_use_xserver Enables virtual machines to interact with the X Window System. Note For more information on SELinux Booleans, see the Red Hat Enterprise Linux SELinux Users and Administrators Guide .
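The commands described above can be combined as follows; virt_use_nfs is used only as an example Boolean taken from the table.

```bash
# List the current state of all virtualization-related Booleans
getsebool -a | grep virt

# Temporarily allow virt to manage NFS mounted files
setsebool virt_use_nfs on

# Make the change persistent across reboots
setsebool -P virt_use_nfs on
```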
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_security_guide/sect-virtualization_security_guide-svirt-configuration
|
Preface
|
Preface To begin, install Ansible Automation Platform and select a target system where you can deploy an initial playbook (provided by automation controller). This first playbook executes simple Ansible tasks, while teaching you how to use the controller and properly set it up. You can use any sort of system manageable by Ansible, as described in the Managed nodes section of the Ansible documentation. For further instructions, see the Red Hat Ansible Automation Platform Installation Guide . Note Ansible Automation Platform is offered on a subscription basis. These subscriptions vary in price and support levels. For more information about subscriptions and features, see Subscription Types .
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/getting_started_with_automation_controller/pr01
|
Chapter 8. Handling large messages
|
Chapter 8. Handling large messages Clients might send large messages that can exceed the size of the broker's internal buffer, causing unexpected errors. To prevent this situation, you can configure the broker to store messages as files when the messages are larger than a specified minimum value. Handling large messages in this way means that the broker does not hold the messages in memory. Instead, you specify a directory on disk or a database table in which the broker stores large message files. When the broker stores a message as a large message, the queue retains a reference to the file in the large messages directory or database table. Large message handling is available for the Core Protocol, AMQP, OpenWire, and STOMP protocols. For the Core Protocol and OpenWire protocols, clients specify the minimum large message size in their connection configurations (a sketch appears at the end of Section 8.1). For the AMQP and STOMP protocols, you specify the minimum large message size in the acceptor defined for each protocol in the broker configuration. Note It is recommended that you do not use different protocols for producing and consuming large messages. If you do, the broker might need to perform several conversions of the message. For example, say that you want to send a message using the AMQP protocol and receive it using OpenWire. In this situation, the broker must first read the entire body of the large message and convert it to use the Core protocol. Then, the broker must perform another conversion, this time to the OpenWire protocol. Message conversions such as these cause significant processing overhead on the broker. The minimum large message size that you specify for any of the preceding protocols is affected by system resources such as the amount of disk space available, as well as the sizes of the messages. It is recommended that you run performance tests using several values to determine an appropriate size. The procedures in this section show how to: Configure the broker to store large messages Configure acceptors for the AMQP and STOMP protocols for large message handling This section also links to additional resources about configuring AMQ Core Protocol and AMQ OpenWire JMS clients to work with large messages. 8.1. Configuring the broker for large message handling The following procedure shows how to specify a directory on disk or a database table in which the broker stores large message files. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Specify where you want the broker to store large message files. If you are storing large messages on disk, add the large-messages-directory parameter within the core element and specify a file system location. For example: <configuration> <core> ... <large-messages-directory>/path/to/my-large-messages-directory</large-messages-directory> ... </core> </configuration> Note If you do not explicitly specify a value for large-messages-directory , the broker uses a default value of <broker_instance_dir> /data/largemessages If you are storing large messages in a database table, add the large-message-table parameter to the database-store element and specify a value. For example: <store> <database-store> ... <large-message-table>MY_TABLE</large-message-table> ... </database-store> </store> Note If you do not explicitly specify a value for large-message-table , the broker uses a default value of LARGE_MESSAGE_TABLE . Additional resources For more information about configuring a database store, see Section 6.2, "Persisting message data in a database" .
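The chapter notes that Core Protocol and OpenWire clients set the minimum large message size in their connection configuration. As an illustration only (the parameter name and URL form are an assumption based on the AMQ Core Protocol JMS client, not something this chapter specifies), a Core Protocol JMS connection URL might carry the setting like this:

```
# Hypothetical Core Protocol JMS connection URL; minLargeMessageSize is in bytes
# (102400 bytes, that is 100 kilobytes, chosen only for illustration)
tcp://localhost:61616?minLargeMessageSize=102400
```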
Configuring AMQP acceptors for large message handling The following procedure shows how to configure an AMQP acceptor to handle an AMQP message larger than a specified size as a large message. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. The default AMQP acceptor in the broker configuration file looks as follows: <acceptors> ... <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> ... </acceptors> In the default AMQP acceptor (or another AMQP acceptor that you have configured), add the amqpMinLargeMessageSize property and specify a value. For example: <acceptors> ... <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=204800</acceptor> ... </acceptors> In the preceding example, the broker is configured to accept AMQP messages on port 5672. Based on the value of amqpMinLargeMessageSize , if the acceptor receives an AMQP message with a body larger than or equal to 204800 bytes (that is, 200 kilobytes), the broker stores the message as a large message. If you do not explicitly specify a value for this property, the broker uses a default value of 102400 (that is, 100 kilobytes). Note If you set amqpMinLargeMessageSize to -1, large message handling for AMQP messages is disabled. If the broker receives a persistent AMQP message that does not exceed the value of amqpMinLargeMessageSize , but which does exceed the size of the messaging journal buffer (specified using the journal-buffer-size configuration parameter), the broker converts the message to a large Core Protocol message before storing it in the journal. 8.3. Configuring STOMP acceptors for large message handling The following procedure shows how to configure a STOMP acceptor to handle a STOMP message larger than a specified size as a large message. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. The default STOMP acceptor in the broker configuration file looks as follows: <acceptors> ... <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> ... </acceptors> In the default STOMP acceptor (or another STOMP acceptor that you have configured), add the stompMinLargeMessageSize property and specify a value. For example: <acceptors> ... <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;stompMinLargeMessageSize=204800</acceptor> ... </acceptors> In the preceding example, the broker is configured to accept STOMP messages on port 61613. Based on the value of stompMinLargeMessageSize , if the acceptor receives a STOMP message with a body larger than or equal to 204800 bytes (that is, 200 kilobytes), the broker stores the message as a large message. If you do not explicitly specify a value for this property, the broker uses a default value of 102400 (that is, 100 kilobytes). Note To deliver a large message to a STOMP consumer, the broker automatically converts the message from a large message to a normal message before sending it to the client. If a large message is compressed, the broker decompresses it before sending it to STOMP clients. 8.4. Large messages and Java clients There are two options available to Java developers who are writing clients that use large messages.
One option is to use instances of InputStream and OutputStream . For example, a FileInputStream can be used to send a message taken from a large file on a physical disk. A FileOutputStream can then be used by the receiver to stream the message to a location on its local file system. Another option is to stream a JMS BytesMessage or StreamMessage directly. For example: BytesMessage rm = (BytesMessage)cons.receive(10000); byte data[] = new byte[1024]; for (int i = 0; i < rm.getBodyLength(); i += 1024) { int numberOfBytes = rm.readBytes(data); // Do whatever you want with the data } Additional resources To learn about working with large messages in the AMQ Core Protocol JMS client, see: Large message options Writing to a streamed large message Reading from a streamed large message To learn about working with large messages in the AMQ OpenWire JMS client, see: Large message options Writing to a streamed large message Reading from a streamed large message For an example of working with large messages, see the large-message example in the <install_dir> /examples/features/standard/ directory of your AMQ Broker installation. To learn more about running example programs, see Running the AMQ Broker examples .
|
[
"<configuration> <core> <large-messages-directory>/path/to/my-large-messages-directory</large-messages-directory> </core> </configuration>",
"<store> <database-store> <large-message-table>MY_TABLE</large-message-table> </database-store> </store>",
"<acceptors> <acceptor name=\"amqp\">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> </acceptors>",
"<acceptors> <acceptor name=\"amqp\">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=204800</acceptor> </acceptors>",
"<acceptors> <acceptor name=\"stomp\">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> </acceptors>",
"<acceptors> <acceptor name=\"stomp\">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;stompMinLargeMessageSize=204800</acceptor> </acceptors>",
"BytesMessage rm = (BytesMessage)cons.receive(10000); byte data[] = new byte[1024]; for (int i = 0; i < rm.getBodyLength(); i += 1024) { int numberOfBytes = rm.readBytes(data); // Do whatever you want with the data }"
] |
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/configuring_amq_broker/assembly-br-handling-large-messages_configuring
|
Chapter 17. Configuring the Key Recovery Authority
|
Chapter 17. Configuring the Key Recovery Authority 17.1. Manually Setting up Key Archival Important This procedure is unnecessary if the CA and KRA are in the same security domain. This procedure is only required for CAs outside the security domain. Configuring key archival manually requires two things: Having a trusted relationship between a CA and a KRA. Having the enrollment form enabled for key archival, meaning it has key archival configured and the KRA transport certificate stored in the form. In the same security domain, both of these configuration steps are done automatically when the KRA is configured because it is configured to have a trusted relationship with any CA in its security domain. It is possible to create that trusted relationship with Certificate Managers outside its security domain by manually configuring the trust relationships and profile enrollment forms. If necessary, create a trusted manager to establish a relationship between the Certificate Manager and the KRA. For the CA to be able to request key archival of the KRA, the two subsystems must be configured to recognize, trust, and communicate with each other. Verify that the Certificate Manager has been set up as a privileged user, with an appropriate TLS client authentication certificate, in the internal database of the KRA. By default, the Certificate Manager uses its subsystem certificate for TLS client authentication to the KRA. Copy the base-64 encoded transport certificate for the KRA. The transport certificate is stored in the KRA's certificate database and can be retrieved using the certutil utility. If the transport certificate is signed by a Certificate Manager, then a copy of the certificate is available through the Certificate Manager end-entities page in the Retrieval tab. Alternatively, download the transport certificate using the pki utility: Add the transport certificate to the CA's CS.cfg file. Then edit the enrollment form and add or replace the transport certificate value in the keyTransportCert method.
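As an illustration of the certutil approach mentioned above, the following sketch exports the base-64 encoded transport certificate directly from the KRA's NSS certificate database. The database directory and the certificate nickname shown here are assumptions and must be adjusted to match your instance:
certutil -L -d /var/lib/pki/pki-tomcat/alias -n "transportCert cert-pki-kra" -a > transport.pem
The -a option writes the certificate in ASCII (base-64) format, which is the form you paste into the CA's CS.cfg file and the enrollment form.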
|
[
"pki cert-find --name \" KRA Transport certificate's subject common name \" pki cert-show serial_number --output transport.pem",
"ca.connector.KRA.enable=true ca.connector.KRA.host=server.example.com ca.connector.KRA.local=false ca.connector.KRA.nickName=subsystemCert cert-pki-ca ca.connector.KRA.port=8443 ca.connector.KRA.timeout=30 ca.connector.KRA.transportCert=MIIDbDCCAlSgAwIBAgIBDDANBgkqhkiG9w0BAQUFADA6MRgwFgYDVQQKEw9Eb21haW4gc28gbmFtZWQxHjAcBgNVBAMTFUNlcnRpZmljYXRlIEF1dGhvcml0eTAeFw0wNjExMTQxODI2NDdaFw0wODEwMTQxNzQwNThaMD4xGDAWBgNVBAoTD0RvbWFpbiBzbyBuYW1lZDEiMCAGA1UEAxMZRFJNIFRyYW5zcG9ydCBDZXJ0aWZpY2F0ZTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKnMGB3WkznueouwZjrWLFZBLpKt6TimNKV9iz5s0zrGUlpdt81/BTsU5A2sRUwNfoZSMs/d5KLuXOHPyGtmC6yVvaY719hr9EGYuv0Sw6jb3WnEKHpjbUO/vhFwTufJHWKXFN3V4pMbHTkqW/x5fu/3QyyUre/5IhG0fcEmfvYxIyvZUJx+aQBW437ATD99Kuh+I+FuYdW+SqYHznHY8BqOdJwJ1JiJMNceXYAuAdk+9t70RztfAhBmkK0OOP0vH5BZ7RCwE3Y/6ycUdSyPZGGc76a0HrKOz+lwVFulFStiuZIaG1pv0NNivzcj0hEYq6AfJ3hgxcC1h87LmCxgRWUCAwEAAaN5MHcwHwYDVR0jBBgwFoAURShCYtSg+Oh4rrgmLFB/Fg7X3qcwRAYIKwYBBQUHAQEEODA2MDQGCCsGAQUFBzABhihodHRwOi8vY2x5ZGUucmR1LnJlZGhhdC5jb206OTE4MC9jYS9vY3NwMA4GA1UdDwEB/wQEAwIE8DANBgkqhkiG9w0BAQUFAAOCAQEAFYz5ibujdIXgnJCbHSPWdKG0T+FmR67YqiOtoNlGyIgJ42fi5lsDPfCbIAe3YFqmF3wU472h8LDLGyBjy9RJxBj+aCizwHkuoH26KmPGntIayqWDH/UGsIL0mvTSOeLqI3KM0IuH7bxGXjlION83xWbxumW/kVLbT9RCbL4216tqq5jsjfOHNNvUdFhWyYdfEOjpp/UQZOhOM1d8GFiw8N8ClWBGc3mdlADQp6tviodXueluZ7UxJLNx3HXKFYLleewwIFhC82zqeQ1PbxQDL8QLjzca+IUzq6Cd/t7OAgvv3YmpXgNR0/xoWQGdM1/YwHxtcAcVlskXJw5ZR0Y2zA== ca.connector.KRA.uri=/kra/agent/kra/connector",
"vim /var/lib/pki/pki-tomcat/ca/webapps/ca/ee/ca/ProfileSelect.template var keyTransportCert = MIIDbDCCAlSgAwIBAgIBDDANBgkqhkiG9w0BAQUFADA6MRgwFgYDVQQKEw9Eb21haW4gc28gbmFtZWQxHjAcBgNVBAMTFUNlcnRpZmljYXRlIEF1dGhvcml0eTAeFw0wNjExMTQxODI2NDdaFw0wODEwMTQxNzQwNThaMD4xGDAWBgNVBAoTD0RvbWFpbiBzbyBuYW1lZDEiMCAGA1UEAxMZRFJNIFRyYW5zcG9ydCBDZXJ0aWZpY2F0ZTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKnMGB3WkznueouwZjrWLFZBLpKt6TimNKV9iz5s0zrGUlpdt81/BTsU5A2sRUwNfoZSMs/d5KLuXOHPyGtmC6yVvaY719hr9EGYuv0Sw6jb3WnEKHpjbUO/vhFwTufJHWKXFN3V4pMbHTkqW/x5fu/3QyyUre/5IhG0fcEmfvYxIyvZUJx+aQBW437ATD99Kuh+I+FuYdW+SqYHznHY8BqOdJwJ1JiJMNceXYAuAdk+9t70RztfAhBmkK0OOP0vH5BZ7RCwE3Y/6ycUdSyPZGGc76a0HrKOz+lwVFulFStiuZIaG1pv0NNivzcj0hEYq6AfJ3hgxcC1h87LmCxgRWUCAwEAAaN5MHcwHwYDVR0jBBgwFoAURShCYtSg+Oh4rrgmLFB/Fg7X3qcwRAYIKwYBBQUHAQEEODA2MDQGCCsGAQUFBzABhihodHRwOi8vY2x5ZGUucmR1LnJlZGhhdC5jb206OTE4MC9jYS9vY3NwMA4GA1UdDwEB/wQEAwIE8DANBgkqhkiG9w0BAQUFAAOCAQEAFYz5ibujdIXgnJCbHSPWdKG0T+FmR67YqiOtoNlGyIgJ42fi5lsDPfCbIAe3YFqmF3wU472h8LDLGyBjy9RJxBj+aCizwHkuoH26KmPGntIayqWDH/UGsIL0mvTSOeLqI3KM0IuH7bxGXjlION83xWbxumW/kVLbT9RCbL4216tqq5jsjfOHNNvUdFhWyYdfEOjpp/UQZOhOM1d8GFiw8N8ClWBGc3mdlADQp6tviodXueluZ7UxJLNx3HXKFYLleewwIFhC82zqeQ1PbxQDL8QLjzca+IUzq6Cd/t7OAgvv3YmpXgNR0/xoWQGdM1/YwHxtcAcVlskXJw5ZR0Y2zA==;"
] |
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/configuring-key-recovery-authority
|
Chapter 20. The Admin CLI
|
Chapter 20. The Admin CLI In previous chapters, we described how to use the Red Hat Single Sign-On Admin Console to perform administrative tasks. You can also perform those tasks from the command-line interface (CLI) by using the Admin CLI command-line tool. 20.1. Installing the Admin CLI The Admin CLI is packaged inside the Red Hat Single Sign-On Server distribution. You can find execution scripts inside the bin directory. The Linux script is called kcadm.sh , and the script for Windows is called kcadm.bat . You can add the Red Hat Single Sign-On server directory to your PATH to use the client from any location on your file system. For example, on: Linux: Windows: We assume the KEYCLOAK_HOME environment (env) variable is set to the path where you extracted the Red Hat Single Sign-On Server distribution. Note To avoid repetition, the rest of this document only gives Windows examples in places where the difference in the CLI is more than just in the kcadm command name. 20.2. Using the Admin CLI The Admin CLI works by making HTTP requests to Admin REST endpoints. Access to them is protected and requires authentication. Note Consult the Admin REST API documentation for details about JSON attributes for specific endpoints. Start an authenticated session by providing credentials, that is, logging in. You are ready to perform create, read, update, and delete (CRUD) operations. For example, on Linux: Windows: In a production environment, you must access Red Hat Single Sign-On with https: to avoid exposing tokens to network sniffers. If a server's certificate is not issued by one of the trusted certificate authorities (CAs) that are included in Java's default certificate truststore, prepare a truststore.jks file and instruct the Admin CLI to use it. For example, on: Linux: Windows: 20.3. Authenticating When you log in with the Admin CLI, you specify a server endpoint URL and a realm, and then you specify a user name. Another option is to specify only a clientId, which results in using a special "service account". When you log in using a user name, you must use a password for the specified user. When you log in using a clientId, you only need the client secret, not the user password. You could also use Signed JWT instead of the client secret. Make sure the account used for the session has the proper permissions to invoke Admin REST API operations. For example, the realm-admin role of the realm-management client allows the user to administer the realm within which the user is defined. There are two primary mechanisms for authentication. One mechanism uses kcadm config credentials to start an authenticated session. This approach maintains an authenticated session between the kcadm command invocations by saving the obtained access token and the associated refresh token. It may also maintain other secrets in a private configuration file. See Section 20.4, "Working with alternative configurations" for more information on the configuration file. The second approach only authenticates each command invocation for the duration of that invocation. This approach increases the load on the server and the time spent on roundtrips obtaining tokens. The benefit of this approach is not needing to save any tokens between invocations, which means nothing is saved to disk. This mode is used when the --no-config argument is specified. For example, when performing an operation, we specify all the information required for authentication. Run the kcadm.sh help command for more information on using the Admin CLI.
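As an illustration of the service account approach, a minimal sketch of starting an authenticated session with a clientId and client secret (rather than a user name and password) might look like the following. The client name and secret are placeholders, and the client's service account must have been granted sufficient realm-management roles:
kcadm.sh config credentials --server http://localhost:8080/auth --realm master --client my-admin-client --secret MY_CLIENT_SECRET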
Run the kcadm.sh config credentials --help command for more information about starting an authenticated session. 20.4. Working with alternative configurations By default, the Admin CLI automatically maintains a configuration file called kcadm.config located under the user's home directory. In Linux-based systems, the full path name is $HOME/.keycloak/kcadm.config . On Windows, the full path name is %HOMEPATH%\.keycloak\kcadm.config . You can use the --config option to point to a different file or location so you can maintain multiple authenticated sessions in parallel. Note It is best to perform operations tied to a single configuration file from a single thread. Make sure you do not make the configuration file visible to other users on the system. It contains access tokens and secrets that should be kept private. By default, the ~/.keycloak directory and its content are created automatically with proper access limits. If the directory already exists, its permissions are not updated. If your unique circumstances require you to avoid storing secrets inside a configuration file, you can do so. It will be less convenient and you will have to make more token requests. To avoid storing secrets, use the --no-config option with all your commands and specify all the authentication information needed by the config credentials command with each kcadm invocation. 20.5. Basic operations and resource URIs The Admin CLI allows you to generically perform CRUD operations against Admin REST API endpoints with additional commands that simplify performing certain tasks. The main usage pattern is listed below, where the create , get , update , and delete commands are mapped to the HTTP verbs POST , GET , PUT , and DELETE , respectively. ENDPOINT is a target resource URI and can either be absolute (starting with http: or https: ) or relative, used to compose an absolute URL of the following format: For example, if you authenticate against the server http://localhost:8080/auth and realm is master , then using users as ENDPOINT results in the resource URL http://localhost:8080/auth/admin/realms/master/users . If you set ENDPOINT to clients , the effective resource URI would be http://localhost:8080/auth/admin/realms/master/clients . There is a realms endpoint that is treated slightly differently because it is the container for realms. It resolves to: There is also a serverinfo endpoint, which is treated the same way because it is independent of realms. When you authenticate as a user with realm-admin powers, you might need to perform commands on multiple realms. In that case, specify the -r option to state explicitly which realm the command should be executed against. Instead of using REALM as specified via the --realm option of kcadm.sh config credentials , the TARGET_REALM is used. For example, In this example, you start a session authenticated as the admin user in the master realm. You then perform a POST call against the resource URL http://localhost:8080/auth/admin/realms/demorealm/users . The create and update commands send a JSON body to the server by default. You can use -f FILENAME to read a premade document from a file. When you use the -f - option, the message body is read from standard input. You can also specify individual attributes and their values as seen in the create users example. They are composed into a JSON body and sent to the server. There are several ways to update a resource using the update command.
You can first determine the current state of a resource and save it to a file, and then edit that file and send it to the server for updating. For example: This method updates the resource on the server with all the attributes in the sent JSON document. Another option is to perform an on-the-fly update using the -s, --set options to set new values. For example: That method only updates the enabled attribute to false . By default, the update command first performs a get and then merges the new attribute values with existing values. This is the preferred behavior. In some cases, the endpoint may support the PUT command but not the GET command. You can use the -n option to perform a "no-merge" update, which performs a PUT command without first running a GET command. 20.6. Realm operations Creating a new realm Use the create command on the realms endpoint to create a new enabled realm, and set the attributes to realm and enabled . A realm is not enabled by default. By enabling it, you can use a realm immediately for authentication. A description for a new object can also be in a JSON format. You can send a JSON document with realm attributes directly from a file or piped to a standard input. For example, on: Linux: Windows: Listing existing realms The following command returns a list of all realms. Note A list of realms is additionally filtered on the server to return only realms a user can see. Returning the entire realm description often provides too much information. Most users are interested only in a subset of attributes, such as realm name and whether the realm is enabled. You can specify which attributes to return by using the --fields option. You can also display the result as comma separated values. Getting a specific realm You append a realm name to a collection URI to get an individual realm. Updating a realm Use the -s option to set new values for the attributes when you want to change only some of the realm's attributes. For example: If you want to set all writable attributes with new values, run a get command, edit the current values in the JSON file, and resubmit. For example: Deleting a realm Run the following command to delete a realm. Turning on all login page options for the realm Set the attributes controlling specific capabilities to true . For example: Listing the realm keys Use the get operation on the keys endpoint of the target realm. Generating new realm keys Get the ID of the target realm before adding a new RSA-generated key pair. For example: Add a new key provider with a higher priority than the existing providers as revealed by kcadm.sh get keys -r demorealm . For example, on: Linux: Windows: Set the parentId attribute to the value of the target realm's ID. The newly added key should now become the active key as revealed by kcadm.sh get keys -r demorealm . Adding new realm keys from a Java Key Store file Add a new key provider to add a new key pair already prepared as a JKS file on the server. For example, on: Linux: Windows: Make sure to change the attribute values for keystore , keystorePassword , keyPassword , and alias to match your specific keystore. Set the parentId attribute to the value of the target realm's ID. Making the key passive or disabling the key Identify the key you want to make passive Use the key's providerId attribute to construct an endpoint URI, such as components/PROVIDER_ID . Perform an update . For example, on: Linux: Windows: You can update other key attributes. 
Set a new enabled value to disable the key, for example, config.enabled=["false"] . Set a new priority value to change the key's priority, for example, config.priority=["110"] . Deleting an old key Make sure the key you are deleting has been passive and disabled to prevent any existing tokens held by applications and users from abruptly failing to work. Identify the key you want to make passive. Use the providerId of that key to perform a delete. Configuring event logging for a realm Use the update command on the events/config endpoint. The eventsListeners attribute contains a list of EventListenerProviderFactory IDs that specify all event listeners receiving events. Separately, there are attributes that control a built-in event storage, which allows querying past events via the Admin REST API. There is separate control over logging of service calls ( eventsEnabled ) and auditing events triggered during Admin Console or Admin REST API ( adminEventsEnabled ). You may want to set up expiry of old events so that your database does not fill up; eventsExpiration is set to time-to-live expressed in seconds. Here is an example of setting up a built-in event listener that receives all the events and logs them through jboss-logging. (Using a logger called org.keycloak.events , error events are logged as WARN , and others are logged as DEBUG .) For example, on: Linux: Windows: Here is an example of turning on storage of all available ERROR events-not including auditing events-for 2 days so they can be retrieved via Admin REST. For example, on: Linux: Windows: Here is an example of how to reset stored event types to all available event types ; setting to empty list is the same as enumerating all. Here is an example of how to enable storage of auditing events. Here is an example of how to get the last 100 events; they are ordered from newest to oldest. Here is an example of how to delete all saved events. Flushing the caches Use the create command and one of the following endpoints: clear-realm-cache , clear-user-cache , or clear-keys-cache . Set realm to the same value as the target realm. For example: Importing a realm from exported .json file Use the create command on the partialImport endpoint. Set ifResourceExists to one of FAIL , SKIP , OVERWRITE . Use -f to submit the exported realm .json file For example: If realm does not yet exist, you first have to create it. For example: 20.7. Role operations Creating a realm role Use the roles endpoint to create a realm role. Creating a client role Identify the client first and then use the get command to list available clients when creating a client role. Create a new role by using the clientId attribute to construct an endpoint URI, such as clients/ID/roles . For example: Listing realm roles Use the get command on the roles endpoint to list existing realm roles. You can also use the get-roles command. Listing client roles There is a dedicated get-roles command to simplify listing realm and client roles. It is an extension of the get command and behaves the same with additional semantics for listing roles. Use the get-roles command, passing it either the clientId attribute (via the --cclientid option) or id (via the --cid option) to identify the client to list client roles. For example: Getting a specific realm role Use the get command and the role name to construct an endpoint URI for a specific realm role: roles/ROLE_NAME , where user is the name of the existing role. 
For example: You can also use the special get-roles command, passing it a role name (via the --rolename option) or ID (via the --roleid option). For example: Getting a specific client role Use a dedicated get-roles command, passing it either the clientId attribute (via the --cclientid option) or ID (via the --cid option) to identify the client, and passing it either the role name (via the --rolename option) or ID (via the --roleid ) to identify a specific client role. For example: Updating a realm role Use the update command with the same endpoint URI that you used to get a specific realm role. For example: Updating a client role Use the update command with the same endpoint URI that you used to get a specific client role. For example: Deleting a realm role Use the delete command with the same endpoint URI that you used to get a specific realm role. For example: Deleting a client role Use the delete command with the same endpoint URI that you used to get a specific client role. For example: Listing assigned, available, and effective realm roles for a composite role Use a dedicated get-roles command to list assigned, available, and effective realm roles for a composite role. To list assigned realm roles for the composite role, you can specify the target composite role by either name (via the --rname option) or ID (via the --rid option). For example: Use the additional --effective option to list effective realm roles. For example: Use the --available option to list realm roles that can still be added to the composite role. For example: Listing assigned, available, and effective client roles for a composite role Use a dedicated get-roles command to list assigned, available, and effective client roles for a composite role. To list assigned client roles for the composite role, you can specify the target composite role by either name (via the --rname option) or ID (via the --rid option) and client by either the clientId attribute (via the --cclientid option) or ID (via the --cid option). For example: Use the additional --effective option to list effective realm roles. For example: Use the --available option to list realm roles that can still be added to the target composite role. For example: Adding realm roles to a composite role There is a dedicated add-roles command that can be used for adding realm roles and client roles. The following example adds the user role to the composite role testrole . Removing realm roles from a composite role There is a dedicated remove-roles command that can be used to remove realm roles and client roles. The following example removes the user role from the target composite role testrole . Adding client roles to a realm role Use a dedicated add-roles command that can be used for adding realm roles and client roles. The following example adds the roles defined on the client realm-management - create-client role and the view-users role to the testrole composite role. Adding client roles to a client role Determine the ID of the composite client role by using the get-roles command. For example: Assume that there is a client with a clientId attribute of test-client , a client role called support , and another client role called operations , which becomes a composite role, that has an ID of "fc400897-ef6a-4e8c-872b-1581b7fa8a71". Use the following example to add another role to the composite role. List the roles of a composite role by using the get-roles --all command. 
For example: Removing client roles from a composite role Use a dedicated remove-roles command to remove client roles from a composite role. Use the following example to remove two roles defined on the client realm-management - create-client role and the view-users role from the testrole composite role. Adding client roles to a group Use a dedicated add-roles command that can be used for adding realm roles and client roles. The following example adds the roles defined on the client realm-management - create-client role and the view-users role to the Group group (via the --gname option). The group can alternatively be specified by ID (via the --gid option). See Group operations for more operations that can be performed on groups. Removing client roles from a group Use a dedicated remove-roles command to remove client roles from a group. Use the following example to remove two roles defined on the client realm management - create-client role and the view-users role from the Group group. See Group operations for more operations that can be performed on groups. 20.8. Client operations Creating a client Run the create command on a clients endpoint to create a new client. For example: Specify a secret if you want to set a secret for adapters to authenticate. For example: Listing clients Use the get command on the clients endpoint to list clients. For example: This example filters the output to list only the id and clientId attributes. Getting a specific client Use a client's ID to construct an endpoint URI that targets a specific client, such as clients/ID . For example: Getting the current secret for a specific client Use a client's ID to construct an endpoint URI, such as clients/ID/client-secret . For example: Getting an adapter configuration file (keycloak.json) for a specific client Use a client's ID to construct an endpoint URI that targets a specific client, such as clients/ID/installation/providers/keycloak-oidc-keycloak-json . For example: Getting a WildFly subsystem adapter configuration for a specific client Use a client's ID to construct an endpoint URI that targets a specific client, such as clients/ID/installation/providers/keycloak-oidc-jboss-subsystem . For example: Getting a Docker-v2 example configuration for a specific client Use a client's ID to construct an endpoint URI that targets a specific client, such as clients/ID/installation/providers/docker-v2-compose-yaml . Note that the response will be in .zip format. For example: Updating a client Use the update command with the same endpoint URI that you used to get a specific client. For example, on: Linux: Windows: Deleting a client Use the delete command with the same endpoint URI that you used to get a specific client. For example: Adding or removing roles for client's service account The service account for the client is just a special kind of user account with username service-account-CLIENT_ID . You can perform user operations on this account as if it were a regular user. 20.9. User operations Creating a user Run the create command on the users endpoint to create a new user. For example: Listing users Use the users endpoint to list users. For example: You can filter users by username , firstName , lastName , or email . For example: Note Filtering does not use exact matching. For example, the above example would match the value of the username attribute against the *testuser* pattern.
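To illustrate the inexact matching described in the note above, a query such as the following (the realm and search value are only examples) returns testuser along with any other user whose username contains the string test :
kcadm.sh get users -r demorealm -q username=test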
You can also filter across multiple attributes by specifying multiple -q options, which return only users that match the condition for all the attributes. Getting a specific user Use a user's ID to compose an endpoint URI, such as users/USER_ID . For example: Updating a user Use the update command with the same endpoint URI that you used to get a specific user. For example, on: Linux: Windows: Deleting a user Use the delete command with the same endpoint URI that you used to get a specific user. For example: Resetting a user's password Use the dedicated set-password command to reset a user's password. For example: That command sets a temporary password for the user. The target user will have to change the password the next time they log in. You can use --userid if you want to specify the user by using the id attribute. You can achieve the same result using the update command on an endpoint constructed from the one you used to get a specific user, such as users/USER_ID/reset-password . For example: The last parameter ( -n ) ensures that only the PUT command is performed without a prior GET command. It is necessary in this instance because the reset-password endpoint does not support GET . Listing assigned, available, and effective realm roles for a user You can use a dedicated get-roles command to list assigned, available, and effective realm roles for a user. Specify the target user by either user name or ID to list assigned realm roles for the user. For example: Use the additional --effective option to list effective realm roles. For example: Use the --available option to list realm roles that can still be added to the user. For example: Listing assigned, available, and effective client roles for a user Use a dedicated get-roles command to list assigned, available, and effective client roles for a user. Specify the target user by either a user name (via the --uusername option) or an ID (via the --uid option) and client by either a clientId attribute (via the --cclientid option) or an ID (via the --cid option) to list assigned client roles for the user. For example: Use the additional --effective option to list effective client roles. For example: Use the --available option to list client roles that can still be added to the user. For example: Adding realm roles to a user Use a dedicated add-roles command to add realm roles to a user. Use the following example to add the user role to user testuser . Removing realm roles from a user Use a dedicated remove-roles command to remove realm roles from a user. Use the following example to remove the user role from the user testuser . Adding client roles to a user Use a dedicated add-roles command to add client roles to a user. Use the following example to add two roles defined on the client realm management - create-client role and the view-users role to the user testuser . Removing client roles from a user Use a dedicated remove-roles command to remove client roles from a user. Use the following example to remove two roles defined on the realm management client. Listing a user's sessions Identify the user's ID, and then use it to compose an endpoint URI, such as users/ID/sessions . Use the get command to retrieve a list of the user's sessions. For example: Logging out a user from a specific session Determine the session's ID as described above. Use the session's ID to compose an endpoint URI, such as sessions/ID . Use the delete command to invalidate the session.
For example: Logging out a user from all sessions You need a user's ID to construct an endpoint URI, such as users/ID/logout . Use the create command to perform POST on that endpoint URI. For example: 20.10. Group operations Creating a group Use the create command on the groups endpoint to create a new group. For example: Listing groups Use the get command on the groups endpoint to list groups. For example: Getting a specific group Use the group's ID to construct an endpoint URI, such as groups/GROUP_ID . For example: Updating a group Use the update command with the same endpoint URI that you used to get a specific group. For example: Deleting a group Use the delete command with the same endpoint URI that you used to get a specific group. For example: Creating a subgroup Find the ID of the parent group by listing groups, and then use that ID to construct an endpoint URI, such as groups/GROUP_ID/children . For example: Moving a group under another group Find the ID of an existing parent group and of an existing child group. Use the parent group's ID to construct an endpoint URI, such as groups/PARENT_GROUP_ID/children . Run the create command on this endpoint and pass the child group's ID as a JSON body. For example: Get groups for a specific user Use a user's ID to determine a user's membership in groups to compose an endpoint URI, such as users/USER_ID/groups . For example: Adding a user to a group Use the update command with an endpoint URI composed from the user's ID and a group's ID, such as users/USER_ID/groups/GROUP_ID , to add a user to a group. For example: Removing a user from a group Use the delete command on the same endpoint URI as used for adding a user to a group, such as users/USER_ID/groups/GROUP_ID , to remove a user from a group. For example: Listing assigned, available, and effective realm roles for a group Use a dedicated get-roles command to list assigned, available, and effective realm roles for a group. Specify the target group by name (via the --gname option), path (via the --gpath option), or ID (via the --gid option) to list assigned realm roles for the group. For example: Use the additional --effective option to list effective realm roles. For example: Use the --available option to list realm roles that can still be added to the group. For example: Listing assigned, available, and effective client roles for a group Use a dedicated get-roles command to list assigned, available, and effective client roles for a group. Specify the target group by either name (via the --gname option) or ID (via the --gid option), and client by either the clientId attribute (via the --cclientid option) or ID (via the --cid option) to list assigned client roles for the group. For example: Use the additional --effective option to list effective client roles. For example: Use the --available option to list client roles that can still be added to the group. For example: 20.11. Identity provider operations Listing available identity providers Use the serverinfo endpoint to list available identity providers. For example: Note The serverinfo endpoint is handled similarly to the realms endpoint in that it is not resolved relative to a target realm because it exists outside any specific realm. Listing configured identity providers Use the identity-provider/instances endpoint.
For example: Getting a specific configured identity provider Use the alias attribute of the identity provider to construct an endpoint URI, such as identity-provider/instances/ALIAS , to get a specific identity provider. For example: Removing a specific configured identity provider Use the delete command with the same endpoint URI that you used to get a specific configured identity provider to remove a specific configured identity provider. For example: Configuring a Keycloak OpenID Connect identity provider Use keycloak-oidc as the providerId when creating a new identity provider instance. Provide the config attributes: authorizationUrl , tokenUrl , clientId , and clientSecret . For example: Configuring an OpenID Connect identity provider Configure the generic OpenID Connect provider the same way you configure the Keycloak OpenID Connect provider, except that you set the providerId attribute value to oidc . Configuring a SAML 2 identity provider Use saml as the providerId . Provide the config attributes: singleSignOnServiceUrl , nameIDPolicyFormat , and signatureAlgorithm . For example: Configuring a Facebook identity provider Use facebook as the providerId . Provide the config attributes: clientId and clientSecret . You can find these attributes in the Facebook Developers application configuration page for your application. For example: Configuring a Google identity provider Use google as the providerId . Provide the config attributes: clientId and clientSecret . You can find these attributes in the Google Developers application configuration page for your application. For example: Configuring a Twitter identity provider Use twitter as the providerId . Provide the config attributes clientId and clientSecret . You can find these attributes in the Twitter Application Management application configuration page for your application. For example: Configuring a GitHub identity provider Use github as the providerId . Provide the config attributes clientId and clientSecret . You can find these attributes in the GitHub Developer Application Settings page for your application. For example: Configuring a LinkedIn identity provider Use linkedin as the providerId . Provide the config attributes clientId and clientSecret . You can find these attributes in the LinkedIn Developer Console application page for your application. For example: Configuring a Microsoft Live identity provider Use microsoft as the providerId . Provide the config attributes clientId and clientSecret . You can find these attributes in the Microsoft Application Registration Portal page for your application. For example: Configuring a Stack Overflow identity provider Use stackoverflow command as the providerId . Provide the config attributes clientId , clientSecret , and key . You can find these attributes in the Stack Apps OAuth page for your application. For example: 20.12. Storage provider operations Configuring a Kerberos storage provider Use the create command against the components endpoint. Specify realm id as a value of the parentId attribute. Specify kerberos as the value of the providerId attribute, and org.keycloak.storage.UserStorageProvider as the value of the providerType attribute. For example: Configuring an LDAP user storage provider Use the create command against the components endpoint. Specify ldap as a value of the providerId attribute, and org.keycloak.storage.UserStorageProvider as the value of the providerType attribute. Provide the realm ID as the value of the parentId attribute. 
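Before the Kerberos-integrated example that follows, here is a minimal sketch of a plain LDAP provider. The connection URL, DNs, and credentials are placeholders, the parentId reuses the example realm ID shown earlier, and a real deployment typically also needs mapping attributes such as usernameLDAPAttribute and userObjectClasses :
kcadm.sh create components -r demorealm -s name=ldap-provider -s providerId=ldap -s providerType=org.keycloak.storage.UserStorageProvider -s parentId=959844c1-d149-41d7-8359-6aa527fca0b0 -s 'config.editMode=["READ_ONLY"]' -s 'config.connectionUrl=["ldap://ldap.example.com:389"]' -s 'config.usersDn=["ou=People,dc=example,dc=com"]' -s 'config.bindDn=["cn=admin,dc=example,dc=com"]' -s 'config.bindCredential=["secret"]'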
Use the following example to create a Kerberos-integrated LDAP provider. Removing a user storage provider instance Use the storage provider instance's id attribute to compose an endpoint URI, such as components/ID . Run the delete command against this endpoint. For example: Triggering synchronization of all users for a specific user storage provider Use the storage provider's id attribute to compose an endpoint URI, such as user-storage/ID_OF_USER_STORAGE_INSTANCE/sync . Add the action=triggerFullSync query parameter and run the create command. For example: Triggering synchronization of changed users for a specific user storage provider Use the storage provider's id attribute to compose an endpoint URI, such as user-storage/ID_OF_USER_STORAGE_INSTANCE/sync . Add the action=triggerChangedUsersSync query parameter and run the create command. For example: Test LDAP user storage connectivity Run the get command on the testLDAPConnection endpoint. Provide query parameters bindCredential , bindDn , connectionUrl , and useTruststoreSpi , and then set the action query parameter to testConnection . For example: Test LDAP user storage authentication Run the get command on the testLDAPConnection endpoint. Provide the query parameters bindCredential , bindDn , connectionUrl , and useTruststoreSpi , and then set the action query parameter to testAuthentication . For example: 20.13. Adding mappers Adding a hardcoded role LDAP mapper Run the create command on the components endpoint. Set the providerType attribute to org.keycloak.storage.ldap.mappers.LDAPStorageMapper . Set the parentId attribute to the ID of the LDAP provider instance. Set the providerId attribute to hardcoded-ldap-role-mapper . Make sure to provide a value of role configuration parameter. For example: Adding an MS Active Directory mapper Run the create command on the components endpoint. Set the providerType attribute to org.keycloak.storage.ldap.mappers.LDAPStorageMapper . Set the parentId attribute to the ID of the LDAP provider instance. Set the providerId attribute to msad-user-account-control-mapper . For example: Adding a user attribute LDAP mapper Run the create command on the components endpoint. Set the providerType attribute to org.keycloak.storage.ldap.mappers.LDAPStorageMapper . Set the parentId attribute to the ID of the LDAP provider instance. Set the providerId attribute to user-attribute-ldap-mapper . For example: Adding a group LDAP mapper Run the create command on the components endpoint. Set the providerType attribute to org.keycloak.storage.ldap.mappers.LDAPStorageMapper . Set the parentId attribute to the ID of the LDAP provider instance. Set the providerId attribute to group-ldap-mapper . For example: Adding a full name LDAP mapper Run the create command on the components endpoint. Set the providerType attribute to org.keycloak.storage.ldap.mappers.LDAPStorageMapper . Set the parentId attribute to the ID of the LDAP provider instance. Set the providerId attribute to full-name-ldap-mapper . For example: 20.14. Authentication operations Setting a password policy Set the realm's passwordPolicy attribute to an enumeration expression that includes the specific policy provider ID and optional configuration. Use the following example to set a password policy to default values. 
The default values include: 27,500 hashing iterations at least one special character at least one uppercase character at least one digit character not be equal to a user's username be at least eight characters long If you want to use values different from defaults, pass the configuration in brackets. Use the following example to set a password policy to: 25,000 hash iterations at least two special characters at least two uppercase characters at least two lowercase characters at least two digits be at least nine characters long not be equal to a user's username not repeat for at least four changes back Getting the current password policy Get the current realm configuration and filter everything but the passwordPolicy attribute. Use the following example to display passwordPolicy for demorealm . Listing authentication flows Run the get command on the authentication/flows endpoint. For example: Getting a specific authentication flow Run the get command on the authentication/flows/FLOW_ID endpoint. For example: Listing executions for a flow Run the get command on the authentication/flows/FLOW_ALIAS/executions endpoint. For example: Adding configuration to an execution Get execution for a flow, and take note of its ID Run the create command on the authentication/executions/{executionId}/config endpoint. For example: Getting configuration for an execution Get execution for a flow, and get its authenticationConfig attribute, containing the config ID. Run the get command on the authentication/config/ID endpoint. For example: Updating configuration for an execution Get execution for a flow, and get its authenticationConfig attribute, containing the config ID. Run the update command on the authentication/config/ID endpoint. For example: Deleting configuration for an execution Get execution for a flow, and get its authenticationConfig attribute, containing the config ID. Run the delete command on the authentication/config/ID endpoint. For example:
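A minimal sketch of such a delete call, assuming a placeholder config ID taken from the execution's authenticationConfig attribute, might look like:
kcadm.sh delete authentication/config/CONFIG_ID -r demorealm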
|
[
"export PATH=USDPATH:USDKEYCLOAK_HOME/bin kcadm.sh",
"c:\\> set PATH=%PATH%;%KEYCLOAK_HOME%\\bin c:\\> kcadm",
"kcadm.sh config credentials --server http://localhost:8080/auth --realm demo --user admin --client admin kcadm.sh create realms -s realm=demorealm -s enabled=true -o CID=USD(kcadm.sh create clients -r demorealm -s clientId=my_client -s 'redirectUris=[\"http://localhost:8980/myapp/*\"]' -i) kcadm.sh get clients/USDCID/installation/providers/keycloak-oidc-keycloak-json",
"c:\\> kcadm config credentials --server http://localhost:8080/auth --realm demo --user admin --client admin c:\\> kcadm create realms -s realm=demorealm -s enabled=true -o c:\\> kcadm create clients -r demorealm -s clientId=my_client -s \"redirectUris=[\\\"http://localhost:8980/myapp/*\\\"]\" -i > clientid.txt c:\\> set /p CID=<clientid.txt c:\\> kcadm get clients/%CID%/installation/providers/keycloak-oidc-keycloak-json",
"kcadm.sh config truststore --trustpass USDPASSWORD ~/.keycloak/truststore.jks",
"c:\\> kcadm config truststore --trustpass %PASSWORD% %HOMEPATH%\\.keycloak\\truststore.jks",
"kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password admin",
"kcadm.sh get realms --no-config --server http://localhost:8080/auth --realm master --user admin --password admin",
"kcadm.sh create ENDPOINT [ARGUMENTS] kcadm.sh get ENDPOINT [ARGUMENTS] kcadm.sh update ENDPOINT [ARGUMENTS] kcadm.sh delete ENDPOINT [ARGUMENTS]",
"SERVER_URI/admin/realms/REALM/ENDPOINT",
"SERVER_URI/admin/realms",
"SERVER_URI/admin/realms/TARGET_REALM/ENDPOINT",
"kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password admin kcadm.sh create users -s username=testuser -s enabled=true -r demorealm",
"kcadm.sh get realms/demorealm > demorealm.json vi demorealm.json kcadm.sh update realms/demorealm -f demorealm.json",
"kcadm.sh update realms/demorealm -s enabled=false",
"kcadm.sh create realms -s realm=demorealm -s enabled=true",
"kcadm.sh create realms -f demorealm.json",
"kcadm.sh create realms -f - << EOF { \"realm\": \"demorealm\", \"enabled\": true } EOF",
"c:\\> echo { \"realm\": \"demorealm\", \"enabled\": true } | kcadm create realms -f -",
"kcadm.sh get realms",
"kcadm.sh get realms --fields realm,enabled",
"kcadm.sh get realms --fields realm --format csv --noquotes",
"kcadm.sh get realms/master",
"kcadm.sh update realms/demorealm -s enabled=false",
"kcadm.sh get realms/demorealm > demorealm.json vi demorealm.json kcadm.sh update realms/demorealm -f demorealm.json",
"kcadm.sh delete realms/demorealm",
"kcadm.sh update realms/demorealm -s registrationAllowed=true -s registrationEmailAsUsername=true -s rememberMe=true -s verifyEmail=true -s resetPasswordAllowed=true -s editUsernameAllowed=true",
"kcadm.sh get keys -r demorealm",
"kcadm.sh get realms/demorealm --fields id --format csv --noquotes",
"kcadm.sh create components -r demorealm -s name=rsa-generated -s providerId=rsa-generated -s providerType=org.keycloak.keys.KeyProvider -s parentId=959844c1-d149-41d7-8359-6aa527fca0b0 -s 'config.priority=[\"101\"]' -s 'config.enabled=[\"true\"]' -s 'config.active=[\"true\"]' -s 'config.keySize=[\"2048\"]'",
"c:\\> kcadm create components -r demorealm -s name=rsa-generated -s providerId=rsa-generated -s providerType=org.keycloak.keys.KeyProvider -s parentId=959844c1-d149-41d7-8359-6aa527fca0b0 -s \"config.priority=[\\\"101\\\"]\" -s \"config.enabled=[\\\"true\\\"]\" -s \"config.active=[\\\"true\\\"]\" -s \"config.keySize=[\\\"2048\\\"]\"",
"kcadm.sh create components -r demorealm -s name=java-keystore -s providerId=java-keystore -s providerType=org.keycloak.keys.KeyProvider -s parentId=959844c1-d149-41d7-8359-6aa527fca0b0 -s 'config.priority=[\"101\"]' -s 'config.enabled=[\"true\"]' -s 'config.active=[\"true\"]' -s 'config.keystore=[\"/opt/keycloak/keystore.jks\"]' -s 'config.keystorePassword=[\"secret\"]' -s 'config.keyPassword=[\"secret\"]' -s 'config.alias=[\"localhost\"]'",
"c:\\> kcadm create components -r demorealm -s name=java-keystore -s providerId=java-keystore -s providerType=org.keycloak.keys.KeyProvider -s parentId=959844c1-d149-41d7-8359-6aa527fca0b0 -s \"config.priority=[\\\"101\\\"]\" -s \"config.enabled=[\\\"true\\\"]\" -s \"config.active=[\\\"true\\\"]\" -s \"config.keystore=[\\\"/opt/keycloak/keystore.jks\\\"]\" -s \"config.keystorePassword=[\\\"secret\\\"]\" -s \"config.keyPassword=[\\\"secret\\\"]\" -s \"config.alias=[\\\"localhost\\\"]\"",
"kcadm.sh get keys -r demorealm",
"kcadm.sh update components/PROVIDER_ID -r demorealm -s 'config.active=[\"false\"]'",
"c:\\> kcadm update components/PROVIDER_ID -r demorealm -s \"config.active=[\\\"false\\\"]\"",
"kcadm.sh get keys -r demorealm",
"kcadm.sh delete components/PROVIDER_ID -r demorealm",
"kcadm.sh update events/config -r demorealm -s 'eventsListeners=[\"jboss-logging\"]'",
"c:\\> kcadm update events/config -r demorealm -s \"eventsListeners=[\\\"jboss-logging\\\"]\"",
"kcadm.sh update events/config -r demorealm -s eventsEnabled=true -s 'enabledEventTypes=[\"LOGIN_ERROR\",\"REGISTER_ERROR\",\"LOGOUT_ERROR\",\"CODE_TO_TOKEN_ERROR\",\"CLIENT_LOGIN_ERROR\",\"FEDERATED_IDENTITY_LINK_ERROR\",\"REMOVE_FEDERATED_IDENTITY_ERROR\",\"UPDATE_EMAIL_ERROR\",\"UPDATE_PROFILE_ERROR\",\"UPDATE_PASSWORD_ERROR\",\"UPDATE_TOTP_ERROR\",\"VERIFY_EMAIL_ERROR\",\"REMOVE_TOTP_ERROR\",\"SEND_VERIFY_EMAIL_ERROR\",\"SEND_RESET_PASSWORD_ERROR\",\"SEND_IDENTITY_PROVIDER_LINK_ERROR\",\"RESET_PASSWORD_ERROR\",\"IDENTITY_PROVIDER_FIRST_LOGIN_ERROR\",\"IDENTITY_PROVIDER_POST_LOGIN_ERROR\",\"CUSTOM_REQUIRED_ACTION_ERROR\",\"EXECUTE_ACTIONS_ERROR\",\"CLIENT_REGISTER_ERROR\",\"CLIENT_UPDATE_ERROR\",\"CLIENT_DELETE_ERROR\"]' -s eventsExpiration=172800",
"c:\\> kcadm update events/config -r demorealm -s eventsEnabled=true -s \"enabledEventTypes=[\\\"LOGIN_ERROR\\\",\\\"REGISTER_ERROR\\\",\\\"LOGOUT_ERROR\\\",\\\"CODE_TO_TOKEN_ERROR\\\",\\\"CLIENT_LOGIN_ERROR\\\",\\\"FEDERATED_IDENTITY_LINK_ERROR\\\",\\\"REMOVE_FEDERATED_IDENTITY_ERROR\\\",\\\"UPDATE_EMAIL_ERROR\\\",\\\"UPDATE_PROFILE_ERROR\\\",\\\"UPDATE_PASSWORD_ERROR\\\",\\\"UPDATE_TOTP_ERROR\\\",\\\"VERIFY_EMAIL_ERROR\\\",\\\"REMOVE_TOTP_ERROR\\\",\\\"SEND_VERIFY_EMAIL_ERROR\\\",\\\"SEND_RESET_PASSWORD_ERROR\\\",\\\"SEND_IDENTITY_PROVIDER_LINK_ERROR\\\",\\\"RESET_PASSWORD_ERROR\\\",\\\"IDENTITY_PROVIDER_FIRST_LOGIN_ERROR\\\",\\\"IDENTITY_PROVIDER_POST_LOGIN_ERROR\\\",\\\"CUSTOM_REQUIRED_ACTION_ERROR\\\",\\\"EXECUTE_ACTIONS_ERROR\\\",\\\"CLIENT_REGISTER_ERROR\\\",\\\"CLIENT_UPDATE_ERROR\\\",\\\"CLIENT_DELETE_ERROR\\\"]\" -s eventsExpiration=172800",
"kcadm.sh update events/config -r demorealm -s enabledEventTypes=[]",
"kcadm.sh update events/config -r demorealm -s adminEventsEnabled=true -s adminEventsDetailsEnabled=true",
"kcadm.sh get events --offset 0 --limit 100",
"kcadm delete events",
"kcadm.sh create clear-realm-cache -r demorealm -s realm=demorealm kcadm.sh create clear-user-cache -r demorealm -s realm=demorealm kcadm.sh create clear-keys-cache -r demorealm -s realm=demorealm",
"kcadm.sh create partialImport -r demorealm2 -s ifResourceExists=FAIL -o -f demorealm.json",
"kcadm.sh create realms -s realm=demorealm2 -s enabled=true",
"kcadm.sh create roles -r demorealm -s name=user -s 'description=Regular user with limited set of permissions'",
"kcadm.sh get clients -r demorealm --fields id,clientId",
"kcadm.sh create clients/a95b6af3-0bdc-4878-ae2e-6d61a4eca9a0/roles -r demorealm -s name=editor -s 'description=Editor can edit, and publish any article'",
"kcadm.sh get roles -r demorealm",
"kcadm.sh get-roles -r demorealm",
"kcadm.sh get-roles -r demorealm --cclientid realm-management",
"kcadm.sh get roles/user -r demorealm",
"kcadm.sh get-roles -r demorealm --rolename user",
"kcadm.sh get-roles -r demorealm --cclientid realm-management --rolename manage-clients",
"kcadm.sh update roles/user -r demorealm -s 'description=Role representing a regular user'",
"kcadm.sh update clients/a95b6af3-0bdc-4878-ae2e-6d61a4eca9a0/roles/editor -r demorealm -s 'description=User that can edit, and publish articles'",
"kcadm.sh delete roles/user -r demorealm",
"kcadm.sh delete clients/a95b6af3-0bdc-4878-ae2e-6d61a4eca9a0/roles/editor -r demorealm",
"kcadm.sh get-roles -r demorealm --rname testrole",
"kcadm.sh get-roles -r demorealm --rname testrole --effective",
"kcadm.sh get-roles -r demorealm --rname testrole --available",
"kcadm.sh get-roles -r demorealm --rname testrole --cclientid realm-management",
"kcadm.sh get-roles -r demorealm --rname testrole --cclientid realm-management --effective",
"kcadm.sh get-roles -r demorealm --rname testrole --cclientid realm-management --available",
"kcadm.sh add-roles --rname testrole --rolename user -r demorealm",
"kcadm.sh remove-roles --rname testrole --rolename user -r demorealm",
"kcadm.sh add-roles -r demorealm --rname testrole --cclientid realm-management --rolename create-client --rolename view-users",
"kcadm.sh get-roles -r demorealm --cclientid test-client --rolename operations",
"kcadm.sh add-roles -r demorealm --cclientid test-client --rid fc400897-ef6a-4e8c-872b-1581b7fa8a71 --rolename support",
"kcadm.sh get-roles --rid fc400897-ef6a-4e8c-872b-1581b7fa8a71 --all",
"kcadm.sh remove-roles -r demorealm --rname testrole --cclientid realm-management --rolename create-client --rolename view-users",
"kcadm.sh add-roles -r demorealm --gname Group --cclientid realm-management --rolename create-client --rolename view-users",
"kcadm.sh remove-roles -r demorealm --gname Group --cclientid realm-management --rolename create-client --rolename view-users",
"kcadm.sh create clients -r demorealm -s clientId=myapp -s enabled=true",
"kcadm.sh create clients -r demorealm -s clientId=myapp -s enabled=true -s clientAuthenticatorType=client-secret -s secret=d0b8122f-8dfb-46b7-b68a-f5cc4e25d000",
"kcadm.sh get clients -r demorealm --fields id,clientId",
"kcadm.sh get clients/c7b8547f-e748-4333-95d0-410b76b3f4a3 -r demorealm",
"kcadm.sh get clients/USDCID/client-secret",
"kcadm.sh get clients/c7b8547f-e748-4333-95d0-410b76b3f4a3/installation/providers/keycloak-oidc-keycloak-json -r demorealm",
"kcadm.sh get clients/c7b8547f-e748-4333-95d0-410b76b3f4a3/installation/providers/keycloak-oidc-jboss-subsystem -r demorealm",
"kcadm.sh get http://localhost:8080/auth/admin/realms/demorealm/clients/8f271c35-44e3-446f-8953-b0893810ebe7/installation/providers/docker-v2-compose-yaml -r demorealm > keycloak-docker-compose-yaml.zip",
"kcadm.sh update clients/c7b8547f-e748-4333-95d0-410b76b3f4a3 -r demorealm -s enabled=false -s publicClient=true -s 'redirectUris=[\"http://localhost:8080/myapp/*\"]' -s baseUrl=http://localhost:8080/myapp -s adminUrl=http://localhost:8080/myapp",
"c:\\> kcadm update clients/c7b8547f-e748-4333-95d0-410b76b3f4a3 -r demorealm -s enabled=false -s publicClient=true -s \"redirectUris=[\\\"http://localhost:8080/myapp/*\\\"]\" -s baseUrl=http://localhost:8080/myapp -s adminUrl=http://localhost:8080/myapp",
"kcadm.sh delete clients/c7b8547f-e748-4333-95d0-410b76b3f4a3 -r demorealm",
"kcadm.sh create users -r demorealm -s username=testuser -s enabled=true",
"kcadm.sh get users -r demorealm --offset 0 --limit 1000",
"kcadm.sh get users -r demorealm -q email=google.com kcadm.sh get users -r demorealm -q username=testuser",
"kcadm.sh get users/0ba7a3fd-6fd8-48cd-a60b-2e8fd82d56e2 -r demorealm",
"kcadm.sh update users/0ba7a3fd-6fd8-48cd-a60b-2e8fd82d56e2 -r demorealm -s 'requiredActions=[\"VERIFY_EMAIL\",\"UPDATE_PROFILE\",\"CONFIGURE_TOTP\",\"UPDATE_PASSWORD\"]'",
"c:\\> kcadm update users/0ba7a3fd-6fd8-48cd-a60b-2e8fd82d56e2 -r demorealm -s \"requiredActions=[\\\"VERIFY_EMAIL\\\",\\\"UPDATE_PROFILE\\\",\\\"CONFIGURE_TOTP\\\",\\\"UPDATE_PASSWORD\\\"]\"",
"kcadm.sh delete users/0ba7a3fd-6fd8-48cd-a60b-2e8fd82d56e2 -r demorealm",
"kcadm.sh set-password -r demorealm --username testuser --new-password NEWPASSWORD --temporary",
"kcadm.sh update users/0ba7a3fd-6fd8-48cd-a60b-2e8fd82d56e2/reset-password -r demorealm -s type=password -s value=NEWPASSWORD -s temporary=true -n",
"kcadm.sh get-roles -r demorealm --uusername testuser",
"kcadm.sh get-roles -r demorealm --uusername testuser --effective",
"kcadm.sh get-roles -r demorealm --uusername testuser --available",
"kcadm.sh get-roles -r demorealm --uusername testuser --cclientid realm-management",
"kcadm.sh get-roles -r demorealm --uusername testuser --cclientid realm-management --effective",
"kcadm.sh get-roles -r demorealm --uusername testuser --cclientid realm-management --available",
"kcadm.sh add-roles --uusername testuser --rolename user -r demorealm",
"kcadm.sh remove-roles --uusername testuser --rolename user -r demorealm",
"kcadm.sh add-roles -r demorealm --uusername testuser --cclientid realm-management --rolename create-client --rolename view-users",
"kcadm.sh remove-roles -r demorealm --uusername testuser --cclientid realm-management --rolename create-client --rolename view-users",
"USDkcadm get users/6da5ab89-3397-4205-afaa-e201ff638f9e/sessions",
"kcadm.sh delete sessions/d0eaa7cc-8c5d-489d-811a-69d3c4ec84d1",
"kcadm.sh create users/6da5ab89-3397-4205-afaa-e201ff638f9e/logout -r demorealm -s realm=demorealm -s user=6da5ab89-3397-4205-afaa-e201ff638f9e",
"kcadm.sh create groups -r demorealm -s name=Group",
"kcadm.sh get groups -r demorealm",
"kcadm.sh get groups/51204821-0580-46db-8f2d-27106c6b5ded -r demorealm",
"kcadm.sh update groups/51204821-0580-46db-8f2d-27106c6b5ded -s 'attributes.email=[\"[email protected]\"]' -r demorealm",
"kcadm.sh delete groups/51204821-0580-46db-8f2d-27106c6b5ded -r demorealm",
"kcadm.sh create groups/51204821-0580-46db-8f2d-27106c6b5ded/children -r demorealm -s name=SubGroup",
"kcadm.sh create groups/51204821-0580-46db-8f2d-27106c6b5ded/children -r demorealm -s id=08d410c6-d585-4059-bb07-54dcb92c5094",
"kcadm.sh get users/b544f379-5fc4-49e5-8a8d-5cfb71f46f53/groups -r demorealm",
"kcadm.sh update users/b544f379-5fc4-49e5-8a8d-5cfb71f46f53/groups/ce01117a-7426-4670-a29a-5c118056fe20 -r demorealm -s realm=demorealm -s userId=b544f379-5fc4-49e5-8a8d-5cfb71f46f53 -s groupId=ce01117a-7426-4670-a29a-5c118056fe20 -n",
"kcadm.sh delete users/b544f379-5fc4-49e5-8a8d-5cfb71f46f53/groups/ce01117a-7426-4670-a29a-5c118056fe20 -r demorealm",
"kcadm.sh get-roles -r demorealm --gname Group",
"kcadm.sh get-roles -r demorealm --gname Group --effective",
"kcadm.sh get-roles -r demorealm --gname Group --available",
"kcadm.sh get-roles -r demorealm --gname Group --cclientid realm-management",
"kcadm.sh get-roles -r demorealm --gname Group --cclientid realm-management --effective",
"kcadm.sh get-roles -r demorealm --gname Group --cclientid realm-management --available",
"kcadm.sh get serverinfo -r demorealm --fields 'identityProviders(*)'",
"kcadm.sh get identity-provider/instances -r demorealm --fields alias,providerId,enabled",
"kcadm.sh get identity-provider/instances/facebook -r demorealm",
"kcadm.sh delete identity-provider/instances/facebook -r demorealm",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=keycloak-oidc -s providerId=keycloak-oidc -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.authorizationUrl=http://localhost:8180/auth/realms/demorealm/protocol/openid-connect/auth -s config.tokenUrl=http://localhost:8180/auth/realms/demorealm/protocol/openid-connect/token -s config.clientId=demo-oidc-provider -s config.clientSecret=secret",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=saml -s providerId=saml -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.singleSignOnServiceUrl=http://localhost:8180/auth/realms/saml-broker-realm/protocol/saml -s config.nameIDPolicyFormat=urn:oasis:names:tc:SAML:2.0:nameid-format:persistent -s config.signatureAlgorithm=RSA_SHA256",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=facebook -s providerId=facebook -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.clientId=FACEBOOK_CLIENT_ID -s config.clientSecret=FACEBOOK_CLIENT_SECRET",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=google -s providerId=google -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.clientId=GOOGLE_CLIENT_ID -s config.clientSecret=GOOGLE_CLIENT_SECRET",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=google -s providerId=google -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.clientId=TWITTER_API_KEY -s config.clientSecret=TWITTER_API_SECRET",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=github -s providerId=github -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.clientId=GITHUB_CLIENT_ID -s config.clientSecret=GITHUB_CLIENT_SECRET",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=linkedin -s providerId=linkedin -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.clientId=LINKEDIN_CLIENT_ID -s config.clientSecret=LINKEDIN_CLIENT_SECRET",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=microsoft -s providerId=microsoft -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.clientId=MICROSOFT_APP_ID -s config.clientSecret=MICROSOFT_PASSWORD",
"kcadm.sh create identity-provider/instances -r demorealm -s alias=stackoverflow -s providerId=stackoverflow -s enabled=true -s 'config.useJwksUrl=\"true\"' -s config.clientId=STACKAPPS_CLIENT_ID -s config.clientSecret=STACKAPPS_CLIENT_SECRET -s config.key=STACKAPPS_KEY",
"kcadm.sh create components -r demorealm -s parentId=demorealmId -s id=demokerberos -s name=demokerberos -s providerId=kerberos -s providerType=org.keycloak.storage.UserStorageProvider -s 'config.priority=[\"0\"]' -s 'config.debug=[\"false\"]' -s 'config.allowPasswordAuthentication=[\"true\"]' -s 'config.editMode=[\"UNSYNCED\"]' -s 'config.updateProfileFirstLogin=[\"true\"]' -s 'config.allowKerberosAuthentication=[\"true\"]' -s 'config.kerberosRealm=[\"KEYCLOAK.ORG\"]' -s 'config.keyTab=[\"http.keytab\"]' -s 'config.serverPrincipal=[\"HTTP/[email protected]\"]' -s 'config.cachePolicy=[\"DEFAULT\"]'",
"kcadm.sh create components -r demorealm -s name=kerberos-ldap-provider -s providerId=ldap -s providerType=org.keycloak.storage.UserStorageProvider -s parentId=3d9c572b-8f33-483f-98a6-8bb421667867 -s 'config.priority=[\"1\"]' -s 'config.fullSyncPeriod=[\"-1\"]' -s 'config.changedSyncPeriod=[\"-1\"]' -s 'config.cachePolicy=[\"DEFAULT\"]' -s config.evictionDay=[] -s config.evictionHour=[] -s config.evictionMinute=[] -s config.maxLifespan=[] -s 'config.batchSizeForSync=[\"1000\"]' -s 'config.editMode=[\"WRITABLE\"]' -s 'config.syncRegistrations=[\"false\"]' -s 'config.vendor=[\"other\"]' -s 'config.usernameLDAPAttribute=[\"uid\"]' -s 'config.rdnLDAPAttribute=[\"uid\"]' -s 'config.uuidLDAPAttribute=[\"entryUUID\"]' -s 'config.userObjectClasses=[\"inetOrgPerson, organizationalPerson\"]' -s 'config.connectionUrl=[\"ldap://localhost:10389\"]' -s 'config.usersDn=[\"ou=People,dc=keycloak,dc=org\"]' -s 'config.authType=[\"simple\"]' -s 'config.bindDn=[\"uid=admin,ou=system\"]' -s 'config.bindCredential=[\"secret\"]' -s 'config.searchScope=[\"1\"]' -s 'config.useTruststoreSpi=[\"ldapsOnly\"]' -s 'config.connectionPooling=[\"true\"]' -s 'config.pagination=[\"true\"]' -s 'config.allowKerberosAuthentication=[\"true\"]' -s 'config.serverPrincipal=[\"HTTP/[email protected]\"]' -s 'config.keyTab=[\"http.keytab\"]' -s 'config.kerberosRealm=[\"KEYCLOAK.ORG\"]' -s 'config.debug=[\"true\"]' -s 'config.useKerberosForPasswordAuthentication=[\"true\"]'",
"kcadm.sh delete components/3d9c572b-8f33-483f-98a6-8bb421667867 -r demorealm",
"kcadm.sh create user-storage/b7c63d02-b62a-4fc1-977c-947d6a09e1ea/sync?action=triggerFullSync",
"kcadm.sh create user-storage/b7c63d02-b62a-4fc1-977c-947d6a09e1ea/sync?action=triggerChangedUsersSync",
"kcadm.sh create testLDAPConnection -s action=testConnection -s bindCredential=secret -s bindDn=uid=admin,ou=system -s connectionUrl=ldap://localhost:10389 -s useTruststoreSpi=ldapsOnly",
"kcadm.sh create testLDAPConnection -s action=testAuthentication -s bindCredential=secret -s bindDn=uid=admin,ou=system -s connectionUrl=ldap://localhost:10389 -s useTruststoreSpi=ldapsOnly",
"kcadm.sh create components -r demorealm -s name=hardcoded-ldap-role-mapper -s providerId=hardcoded-ldap-role-mapper -s providerType=org.keycloak.storage.ldap.mappers.LDAPStorageMapper -s parentId=b7c63d02-b62a-4fc1-977c-947d6a09e1ea -s 'config.role=[\"realm-management.create-client\"]'",
"kcadm.sh create components -r demorealm -s name=msad-user-account-control-mapper -s providerId=msad-user-account-control-mapper -s providerType=org.keycloak.storage.ldap.mappers.LDAPStorageMapper -s parentId=b7c63d02-b62a-4fc1-977c-947d6a09e1ea",
"kcadm.sh create components -r demorealm -s name=user-attribute-ldap-mapper -s providerId=user-attribute-ldap-mapper -s providerType=org.keycloak.storage.ldap.mappers.LDAPStorageMapper -s parentId=b7c63d02-b62a-4fc1-977c-947d6a09e1ea -s 'config.\"user.model.attribute\"=[\"email\"]' -s 'config.\"ldap.attribute\"=[\"mail\"]' -s 'config.\"read.only\"=[\"false\"]' -s 'config.\"always.read.value.from.ldap\"=[\"false\"]' -s 'config.\"is.mandatory.in.ldap\"=[\"false\"]'",
"kcadm.sh create components -r demorealm -s name=group-ldap-mapper -s providerId=group-ldap-mapper -s providerType=org.keycloak.storage.ldap.mappers.LDAPStorageMapper -s parentId=b7c63d02-b62a-4fc1-977c-947d6a09e1ea -s 'config.\"groups.dn\"=[]' -s 'config.\"group.name.ldap.attribute\"=[\"cn\"]' -s 'config.\"group.object.classes\"=[\"groupOfNames\"]' -s 'config.\"preserve.group.inheritance\"=[\"true\"]' -s 'config.\"membership.ldap.attribute\"=[\"member\"]' -s 'config.\"membership.attribute.type\"=[\"DN\"]' -s 'config.\"groups.ldap.filter\"=[]' -s 'config.mode=[\"LDAP_ONLY\"]' -s 'config.\"user.roles.retrieve.strategy\"=[\"LOAD_GROUPS_BY_MEMBER_ATTRIBUTE\"]' -s 'config.\"mapped.group.attributes\"=[\"admins-group\"]' -s 'config.\"drop.non.existing.groups.during.sync\"=[\"false\"]' -s 'config.roles=[\"admins\"]' -s 'config.groups=[\"admins-group\"]' -s 'config.group=[]' -s 'config.preserve=[\"true\"]' -s 'config.membership=[\"member\"]'",
"kcadm.sh create components -r demorealm -s name=full-name-ldap-mapper -s providerId=full-name-ldap-mapper -s providerType=org.keycloak.storage.ldap.mappers.LDAPStorageMapper -s parentId=b7c63d02-b62a-4fc1-977c-947d6a09e1ea -s 'config.\"ldap.full.name.attribute\"=[\"cn\"]' -s 'config.\"read.only\"=[\"false\"]' -s 'config.\"write.only\"=[\"true\"]'",
"kcadm.sh update realms/demorealm -s 'passwordPolicy=\"hashIterations and specialChars and upperCase and digits and notUsername and length\"'",
"kcadm.sh update realms/demorealm -s 'passwordPolicy=\"hashIterations(25000) and specialChars(2) and upperCase(2) and lowerCase(2) and digits(2) and length(9) and notUsername and passwordHistory(4)\"'",
"kcadm.sh get realms/demorealm --fields passwordPolicy",
"kcadm.sh get authentication/flows -r demorealm",
"kcadm.sh get authentication/flows/febfd772-e1a1-42fb-b8ae-00c0566fafb8 -r demorealm",
"kcadm.sh get authentication/flows/Copy%20of%20browser/executions -r demorealm",
"kcadm create \"authentication/executions/a3147129-c402-4760-86d9-3f2345e401c7/config\" -r examplerealm -b '{\"config\":{\"x509-cert-auth.mapping-source-selection\":\"Match SubjectDN using regular expression\",\"x509-cert-auth.regular-expression\":\"(.*?)(?:USD)\",\"x509-cert-auth.mapper-selection\":\"Custom Attribute Mapper\",\"x509-cert-auth.mapper-selection.user-attribute-name\":\"usercertificate\",\"x509-cert-auth.crl-checking-enabled\":\"\",\"x509-cert-auth.crldp-checking-enabled\":false,\"x509-cert-auth.crl-relative-path\":\"crl.pem\",\"x509-cert-auth.ocsp-checking-enabled\":\"\",\"x509-cert-auth.ocsp-responder-uri\":\"\",\"x509-cert-auth.keyusage\":\"\",\"x509-cert-auth.extendedkeyusage\":\"\",\"x509-cert-auth.confirmation-page-disallowed\":\"\"},\"alias\":\"my_otp_config\"}'",
"kcadm get \"authentication/config/dd91611a-d25c-421a-87e2-227c18421833\" -r examplerealm",
"kcadm update \"authentication/config/dd91611a-d25c-421a-87e2-227c18421833\" -r examplerealm -b '{\"id\":\"dd91611a-d25c-421a-87e2-227c18421833\",\"alias\":\"my_otp_config\",\"config\":{\"x509-cert-auth.extendedkeyusage\":\"\",\"x509-cert-auth.mapper-selection.user-attribute-name\":\"usercertificate\",\"x509-cert-auth.ocsp-responder-uri\":\"\",\"x509-cert-auth.regular-expression\":\"(.*?)(?:USD)\",\"x509-cert-auth.crl-checking-enabled\":\"true\",\"x509-cert-auth.confirmation-page-disallowed\":\"\",\"x509-cert-auth.keyusage\":\"\",\"x509-cert-auth.mapper-selection\":\"Custom Attribute Mapper\",\"x509-cert-auth.crl-relative-path\":\"crl.pem\",\"x509-cert-auth.crldp-checking-enabled\":\"false\",\"x509-cert-auth.mapping-source-selection\":\"Match SubjectDN using regular expression\",\"x509-cert-auth.ocsp-checking-enabled\":\"\"}}'",
"kcadm delete \"authentication/config/dd91611a-d25c-421a-87e2-227c18421833\" -r examplerealm"
] |
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_administration_guide/the_admin_cli
|
Chapter 2. Creating Red Hat Ansible Automation Platform backup resources
|
Chapter 2. Creating Red Hat Ansible Automation Platform backup resources Backing up your Red Hat Ansible Automation Platform deployment involves creating backup resources for your deployed instances. Use the following procedures to create backup resources for your Red Hat Ansible Automation Platform deployment. We recommend taking backups before upgrading the Ansible Automation Platform Operator. Take backups regularly in case you want to restore the platform to an earlier state. 2.1. Backing up your Ansible Automation Platform deployment Regularly backing up your Ansible Automation Platform deployment is vital to protect against unexpected data loss and application errors. Ansible Automation Platform hosts any enabled components (such as automation controller, automation hub, and Event-Driven Ansible). When you back up Ansible Automation Platform, the operator also backs up these components. Prerequisites You must be authenticated with an OpenShift cluster. You have installed Ansible Automation Platform Operator on the cluster. You have deployed an Ansible Automation Platform instance using the Ansible Automation Platform Operator. Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Go to your All Instances tab, and click Create New . Select Ansible Automation Platform Backup from the list. Note When you create the Ansible Automation Platform Backup resource, backup resources are also created for each of the nested components that are enabled. In the Name field, enter a name for the backup. In the Deployment name field, enter the name of the deployed Ansible Automation Platform instance being backed up. For example, if your Ansible Automation Platform deployment must be backed up and the deployment name is aap, enter 'aap' in the Deployment name field. Click Create . This results in an AnsibleAutomationPlatformBackup resource. The resource YAML is similar to the following: apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatformBackup metadata: name: backup namespace: aap spec: no_log: true deployment_name: aap Verification To verify that your backup was successful, you can: Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Click All Instances . The All Instances page displays the main backup and the backups for each component with the name you specified when creating your backup resource. The status for the following instances must be either Running or Successful : AnsibleAutomationPlatformBackup AutomationControllerBackup EDABackup AutomationHubBackup 2.2. Backing up the Automation controller deployment Use this procedure to back up a deployment of the controller, including jobs, inventories, and credentials. Prerequisites You must be authenticated with an OpenShift cluster. You have installed Ansible Automation Platform Operator on the cluster. You have deployed automation controller using the Ansible Automation Platform Operator. Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Automation Controller Backup tab. Click Create AutomationControllerBackup . Enter a Name for the backup. In the Deployment name field, enter the name of the AutomationController custom resource object of the deployed Ansible Automation Platform instance being backed up.
This name was created when you created your AutomationController object . If you want to use a custom, pre-created pvc: [Optional]: enter the name of the Backup persistent volume claim . [Optional]: enter the Backup PVC storage requirements , and Backup PVC storage class . Note If no pvc or storage class is provided, the cluster's default storage class is used to create the pvc. If you have a large database, specify your storage requests accordingly under Backup management pod resource requirements . Note You can check the size of the existing postgres database data directory by running the following command inside the postgres pod. USD df -h | grep "/var/lib/pgsql/data" Click Create . A backup tarball of the specified deployment is created and available for data recovery or deployment rollback. Future backups are stored in separate tar files on the same pvc. Verification Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator. Select the AutomationControllerBackup tab. Select the backup resource you want to verify. Scroll to Conditions and check that the Successful status is True . Note If the status is Failure , the backup has failed. Check the automation controller operator logs for the error to fix the issue. 2.3. Using YAML to back up the Automation controller deployment See the following procedure for how to back up a deployment of the automation controller using YAML. Prerequisites You must be authenticated with an OpenShift cluster. You have installed Ansible Automation Platform Operator on the cluster. You have deployed automation controller using the Ansible Automation Platform Operator. Procedure Create a file named "backup-automation-controller.yml" with the following contents: --- apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationControllerBackup metadata: name: AutomationControllerBackup-2024-07-15 namespace: my-namespace spec: deployment_name: controller Note The "deployment_name" above is the name of the automation controller deployment you intend to backup from. The namespace above is the one containing the automation controller deployment you intend to back up. Use the oc apply command to create the backup object in your cluster: USD oc apply -f backup-automation-controller.yml 2.4. Backing up the Automation hub deployment Use this procedure to back up a deployment of the hub, including all hosted Ansible content. Prerequisites You must be authenticated with an OpenShift cluster. You have installed Ansible Automation Platform Operator on the cluster. You have deployed automation hub using the Ansible Automation Platform Operator. Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Automation Hub Backup tab. Click Create AutomationHubBackup . Enter a Name for the backup. Enter the Deployment name of the deployed Ansible Automation Platform instance being backed up. For example, if your automation hub must be backed up and the deployment name is aap-hub , enter 'aap-hub' in the Deployment name field. If you want to use a custom, pre-created pvc: Optionally, enter the name of the Backup persistent volume claim , Backup persistent volume claim namespace , Backup PVC storage requirements , and Backup PVC storage class . Click Create . This creates a backup of the specified deployment and is available for data recovery or deployment rollback.
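You can also check the state of the backup resources from the command line instead of the web console. The following is a minimal sketch that assumes the names used in the examples above (namespace aap, backup name backup); the resource type names come from the Ansible Automation Platform Operator CRDs and can differ in your installation, so adjust them as needed.

# List the platform backup and the per-component backups created by the operator
oc get ansibleautomationplatformbackup,automationcontrollerbackup,automationhubbackup -n aap

# Inspect the conditions of the main backup; a Successful condition set to True indicates completion
oc get ansibleautomationplatformbackup backup -n aap -o jsonpath='{.status.conditions}'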
|
[
"apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatformBackup metadata: name: backup namespace: aap spec: no_log: true deployment_name: aap",
"df -h | grep \"/var/lib/pgsql/data\"",
"--- apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationControllerBackup metadata: name: AutomationControllerBackup-2024-07-15 namespace: my-namespace spec: deployment_name: controller"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/backup_and_recovery_for_operator_environments/aap-backup
|
10.3. Plymouth
|
10.3. Plymouth Plymouth is a graphical boot system and logger for Red Hat Enterprise Linux 7, which makes use of the kernel-based mode setting (KMS) and Direct Rendering Manager (DRM). Plymouth also handles user interaction during boot. You can customize the boot screen appearance by choosing from various static or animated graphical themes. New themes can be created based on the existing ones. 10.3.1. Branding the Theme Each theme for Plymouth is composed of a theme data file and a compiled splash plugin module . The data file has a .plymouth extension, and is installed in the /usr/share/plymouth/themes/ directory. The configuration data is specified under the [Plymouth Theme] section, in the key-value format. Valid keys for this group are Name , Description , and ModuleName . While the first two keys are self-explanatory, the third specifies the name of a Plymouth splash plugin module. Different plugins provide different animations at boot time and the underlying implementation of the various themes: Example 10.2. A .plymouth File Specimen Procedure 10.3. Changing the Plymouth Theme Search for the existing Plymouth themes and choose the one you prefer. Run the following command: Or run the plymouth-set-default-theme --list command to view the installed themes. You can also install all the themes when installing all the plymouth packages. However, you will install a number of unnecessary packages as well. Set the new theme as default with the plymouth-set-default-theme theme_name command. Example 10.3. Set "spinfinity" as the Default Theme You have chosen the spinfinity theme, so you run: Rebuild the initrd image after editing, otherwise your theme will not show on the boot screen. Do so by running: 10.3.2. Creating a New Plymouth Theme If you do not want to choose from the given list of themes, you can create your own. The easiest way is to copy an existing theme and modify it. Procedure 10.4. Creating Your Own Theme from an Existing Theme Copy the entire content of an existing theme directory. As a template directory, use, for example, the default theme for Red Hat Enterprise Linux 7, /usr/share/plymouth/themes/charge/charge.plymouth , which uses a two-step splash plugin ( two-step is a popular boot loading feature of a two-phase boot process that starts with a progressing animation synced to boot time and finishes with a short, fast one-shot animation): Save the charge.plymouth file with a new name in the /usr/share/plymouth/themes/ newtheme / directory, in the following format: Update the settings in your /usr/share/plymouth/themes/ newtheme / newtheme .plymouth file according to your preferences, changing color, alignment, or transition. Set your newtheme as default by running the following command: Rebuild the initrd image after changing the theme by running the command below: 10.3.2.1. Using Branded Logo Some of the plugins show a branded logo as part of the splash animation. If you wish to add your own logo into your theme, follow the short procedure below. Important Keep in mind that your branded logo must be in the .png format. Procedure 10.5. Add Your Logo to the Theme Create an image file named logo.png with your logo. Edit the /usr/share/plymouth/themes/ newtheme .plymouth file by updating the ImageDir key to point to the directory with the logo.png image file you created in step 1: For more information on Plymouth , see the plymouth (8) man page.
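To make Procedure 10.4 more concrete, the following is a short shell sketch of creating a theme named newtheme from the charge theme. The paths match the examples above; treat it as an illustration of the steps rather than a verbatim transcript of the procedure.

# Copy the existing charge theme as a starting point for the new theme
cp -r /usr/share/plymouth/themes/charge /usr/share/plymouth/themes/newtheme

# Rename the theme data file so that it matches the new theme name
mv /usr/share/plymouth/themes/newtheme/charge.plymouth /usr/share/plymouth/themes/newtheme/newtheme.plymouth

# Edit newtheme.plymouth (Name, ImageDir, colors, alignment, transition), then set the theme as default
plymouth-set-default-theme newtheme

# Rebuild the initrd image so that the new theme is shown at boot
dracut -f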
|
[
"[Plymouth Theme] Name=Charge Description=A theme that features the shadowy hull of my logo charge up and finally burst into full form. ModuleName=two-step",
"yum search plymouth-theme",
"yum install plymouth\\*",
"plymouth-set-default-theme spinfinity",
"dracut -f",
"[Plymouth Theme] Name=Charge Description=A theme that features the shadowy hull of my logo charge up and finally burst into full form. ModuleName=two-step [two-step] ImageDir=/usr/share/plymouth/themes/charge HorizontalAlignment=.5 VerticalAlignment=.5 Transition=none TransitionDuration=0.0 BackgroundStartColor=0x202020 BackgroundEndColor=0x202020",
"newtheme .plymouth",
"plymouth-set-default-theme newtheme",
"dracut -f",
"ImageDir=/usr/share/plymouth/themes/ newtheme"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/plymouth
|
Chapter 4. Preparing for data loss with VM snapshots
|
Chapter 4. Preparing for data loss with VM snapshots Virtual machine (VM) snapshots are an integral component of a data recovery strategy, since they preserve the full state of an IdM server: Operating system software and settings IdM software and settings IdM customer data Preparing a VM snapshot of an IdM Certificate Authority (CA) replica allows you to rebuild an entire IdM deployment after a disaster. Warning If your environment uses the integrated CA, a snapshot of a replica without a CA will not be sufficient for rebuilding a deployment, because certificate data will not be preserved. Similarly, if your environment uses the IdM Key Recovery Authority (KRA), make sure you create snapshots of a KRA replica, or you might lose the storage key. Red Hat recommends creating snapshots of a VM that has all of the IdM server roles installed which are in use in your deployment: CA, KRA, DNS. Prerequisites A hypervisor capable of hosting RHEL VMs. Procedure Configure at least one CA replica in the deployment to run inside a VM. If IdM DNS or KRA are used in your environment, consider installing DNS and KRA services on this replica as well. Optional: Configure this VM replica as a hidden replica . Periodically shutdown this VM, take a full snapshot of it, and bring it back online so it continues to receive replication updates. If the VM is a hidden replica, IdM Clients will not be disrupted during this procedure. Additional resources Which hypervisors are certified to run Red Hat Enterprise Linux? The hidden replica mode
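The snapshot commands themselves depend on the hypervisor you use. As one hedged example, on a libvirt/KVM host the periodic snapshot step could look like the following; the domain name idm-ca-replica is a placeholder, and internal snapshots of this kind assume qcow2 disk images.

# Shut the replica down cleanly so the snapshot captures a consistent state
virsh shutdown idm-ca-replica

# Take a full snapshot of the powered-off VM
virsh snapshot-create-as idm-ca-replica --name idm-ca-$(date +%F) --description "Periodic IdM CA replica snapshot"

# Bring the replica back online so it continues to receive replication updates
virsh start idm-ca-replica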
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/preparing_for_disaster_recovery_with_identity_management/preparing-for-data-loss-with-snapshots_preparing-for-disaster-recovery
|
Chapter 1. Preparing for a minor update
|
Chapter 1. Preparing for a minor update Keep your Red Hat OpenStack Platform (RHOSP) 17.1 environment updated with the latest packages and containers. Use the upgrade path for the following versions: Old RHOSP Version New RHOSP Version Red Hat OpenStack Platform 17.0.z Red Hat OpenStack Platform 17.1 latest Red Hat OpenStack Platform 17.1.z Red Hat OpenStack Platform 17.1 latest Minor update workflow A minor update of your RHOSP environment involves updating the RPM packages and containers on the undercloud and overcloud host, and the service configuration, if needed. The data plane and control plane are fully available during the minor update. You must complete each of the following steps to update your RHOSP environment: Update step Description Undercloud update Director packages are updated, containers are replaced, and the undercloud is rebooted. Optional ovn-controller update All ovn-controller containers are updated in parallel on all Compute and Controller hosts. ha-image-update external Updates container image names of Pacemaker-controlled services. There is no service disruption. This step applies to only customers that are updating their system from version 17.0.z to the latest 17.1 release. Overcloud update of Controller nodes and composable nodes that contain Pacemaker services During an Overcloud update, the Pacemaker services are stopped for each host. While the Pacemaker services are stopped, the RPMs on the host, the container configuration data, and the containers are updated. When the Pacemaker services restart, the host is added again. Overcloud update of composable nodes without Pacemaker services Networker, ObjectStorage, BlockStorage, or any other role that does not include Pacemaker services are updated one node at a time. Overcloud update of Compute nodes Multiple nodes are updated in parallel. The default value for running nodes in parallel is 25. Overcloud update of Ceph nodes Ceph nodes are updated one node at a time. Ceph cluster update Ceph services are updated by using cephadm . The update occurs per daemon, beginning with CephMgr , CephMon , CephOSD , and then additional daemons. Note If you have a multistack infrastructure, update each overcloud stack completely, one at a time. If you have a distributed compute node (DCN) infrastructure, update the overcloud at the central location completely, and then update the overcloud at each edge site, one at a time. Additionally, an administrator can perform the following operations during a minor update: Migrate your virtual machine Create a virtual machine network Run additional cloud operations The following operations are not supported during a minor update: Replacing a Controller node Scaling in or scaling out any role Considerations before you update your RHOSP environment To help guide you during the update process, consider the following information: Red Hat recommends backing up the undercloud and overcloud control planes. For more information about backing up nodes, see Backing up and restoring the undercloud and control plane nodes . Familiarize yourself with the known issues that might block an update. Familiarize yourself with the possible update and upgrade paths before you begin your update. For more information, see Section 1.1, "Upgrade paths for long life releases" . To identify your current maintenance release, run USD cat /etc/rhosp-release . You can also run this command after updating your environment to validate the update. Important Updates with a single Controller node are not supported. 
Procedure To prepare your RHOSP environment for the minor update, complete the following procedures: Section 1.3, "Locking the environment to a Red Hat Enterprise Linux release" Section 1.4, "Updating Red Hat Openstack Platform repositories" Section 1.5, "Updating the container image preparation file" Section 1.6, "Disabling fencing in the overcloud" 1.1. Upgrade paths for long life releases Familiarize yourself with the possible update and upgrade paths before you begin an update. Note You can view your current RHOSP and RHEL versions in the /etc/rhosp-release and /etc/redhat-release files. Table 1.1. Updates version path Current version Target version RHOSP 17.0.x on RHEL 9.0 RHOSP 17.0 latest on RHEL 9.0 latest RHOSP 17.1.x on RHEL 9.2 RHOSP 17.1 latest on RHEL 9.2 latest Table 1.2. Upgrades version path Current version Target version RHOSP 10 on RHEL 7.7 RHOSP 13 latest on RHEL 7.9 latest RHOSP 13 on RHEL 7.9 RHOSP 16.1 latest on RHEL 8.2 latest RHOSP 13 on RHEL 7.9 RHOSP 16.2 latest on RHEL 8.4 latest RHOSP 16 on RHEL 8.4 RHOSP 17.1 latest on RHEL 9.0 latest For more information, see Framework for upgrades (16.2 to 17.1) . 1.2. Known issues that might block a minor update Review the following known issues that might affect a successful update. BZ#2313372 - neutron control plane outage during 17.1 minor update During an update from Red Hat OpenStack Platform (RHOSP) 17.1 GA, 17.1.1, and 17.1.2 to RHOSP 17.1.4, when you use an Open Virtual Network (OVN) back end, there is a possibility of a short network API outage during the external run of the OVN update. BZ#2323725 - (rhosp 17.0 to 17.1) deployments using OVN is not possible from 17.0.GA If your Red Hat OpenStack Platform (RHOSP) 17.0 environment is deployed with ML2/OVN, you cannot update your environment directly from RHOSP 17.0 to 17.1.4. You must update to RHOSP 17.0.1 first. For more information, see Keeping Red Hat OpenStack Platform Updated . BZ#2293368 - overcloud images have console=ttyS0 in their default boot arguments If your RHOSP 17.1.3 or earlier deployment includes a filter rule in nftables or iptables with a LOG action, and the kernel command line (/proc/cmdline) has console=ttyS0, logging actions can cause substantial latency in packet transmission. Before updating to 17.1.4, you must apply the workaround in the Red Hat Knowledgebase solution Sometimes receiving packet(e.g. ICMP echo) has latency, around 190(ms). BZ#2259795 - Incorrect validation of Podman version If you are performing an update of your RHOSP environment to 17.1.x, the pre-update package_version validation fails because the validation cannot find a matching podman version. Workaround: To skip the package_version validation, use the --skiplist package-version option when you run the pre-update validation: Replace <stack> with the name of your stack. BZ#2322115 - (rhosp 17.0 to 17.1) undercloud-service-status validation error During an update to RHOSP 17.1.4, when you run the pre-update validation, the undercloud-service-status validation fails. The failure occurs because the validation cannot find the undercloud-service-status service. Workaround : Skip the undercloud-service-status validation when you run the pre-update validation: Replace <stack> with the name of your stack. (OSP17.1) "unauthorized: authentication required" error when try to pull image from registry During an update to RHOSP 17.1.x, the overcloud update failed to log in to the local registry and pull the correct images.
Workaround : Before you run the minor update, log in to the nodes by using podman: Replace <your.registry.local> with the name of your local registry. 1.3. Locking the environment to a Red Hat Enterprise Linux release Red Hat OpenStack Platform (RHOSP) 17.1 is supported on Red Hat Enterprise Linux (RHEL) 9.2. Before you perform the update, lock the undercloud and overcloud repositories to the RHEL 9.2 release to avoid upgrading the operating system to a newer minor release. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Edit your overcloud subscription management environment file, which is the file that contains the RhsmVars parameter. The default name for this file is usually rhsm.yml . Check if your subscription management configuration includes the rhsm_release parameter. If the rhsm_release parameter is not present, add it and set it to 9.2: Save the overcloud subscription management environment file. Create a playbook that contains a task to lock the operating system version to RHEL 9.2 on all nodes: Run the set_release.yaml playbook: USD ansible-playbook -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml -f 25 ~/set_release.yaml --limit <undercloud>, <Controller>, <Compute> Replace <stack> with the name of your stack. Use the --limit option to apply the content to all RHOSP nodes. Replace <undercloud>, <Controller>, <Compute> with the Ansible groups in your environment that contain those nodes. Do not run this playbook against Ceph Storage nodes because you might have a different subscription for these nodes. Note To manually lock a node to a version, log in to the node and run the subscription-manager release command: 1.4. Updating Red Hat Openstack Platform repositories Update your repositories to use Red Hat OpenStack Platform (RHOSP) 17.1. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Edit your overcloud subscription management environment file, which is the file that contains the RhsmVars parameter. The default name for this file is usually rhsm.yml . Check the rhsm_repos parameter in your subscription management configuration. If the rhsm_repos parameter is using the RHOSP 17.1 repositories, change the repository to the correct versions: parameter_defaults: RhsmVars: rhsm_repos: - rhel-9-for-x86_64-baseos-eus-rpms - rhel-9-for-x86_64-appstream-eus-rpms - rhel-9-for-x86_64-highavailability-eus-rpms - openstack-17.1-for-rhel-9-x86_64-rpms - fast-datapath-for-rhel-9-x86_64-rpms Save the overcloud subscription management environment file. Create a playbook that contains a task to set the repositories to RHOSP 17.1 on all nodes: USD cat > ~/update_rhosp_repos.yaml <<'EOF' - hosts: all gather_facts: false tasks: - name: change osp repos command: subscription-manager repos --enable=openstack-17.1-for-rhel-9-x86_64-rpms become: true EOF Run the update_rhosp_repos.yaml playbook: USD ansible-playbook -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml -f 25 ~/update_rhosp_repos.yaml --limit <undercloud>,<Controller>,<Compute> Replace <stack> with the name of your stack. Use the --limit option to apply the content to all RHOSP nodes. Replace <undercloud>, <Controller>, and <Compute> with the Ansible groups in your environment that contain those nodes. Do not run this playbook against Ceph Storage nodes because they usually use a different subscription. 
Create a playbook that contains a task to set the repositories to RHOSP 17.1 on all ceph storage nodes: USD cat > ~/update_ceph_repos.yaml <<'EOF' - hosts: all gather_facts: false tasks: - name: change ceph repos command: subscription-manager repos --enable=openstack-17.1-deployment-tools-for-rhel-9-x86_64-rpms become: true EOF Run the update_ceph_repos.yaml playbook: USD ansible-playbook -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml -f 25 ~/update_ceph_repos.yaml --limit CephStorage Use the --limit option to apply the content to Ceph Storage nodes. 1.5. Updating the container image preparation file The container preparation file is the file that contains the ContainerImagePrepare parameter. You use this file to define the rules for obtaining container images for the undercloud and overcloud. Before you update your environment, check the file to ensure that you obtain the correct image versions. Procedure Edit the container preparation file. The default name for this file is usually containers-prepare-parameter.yaml . Ensure that the tag parameter is set to 17.1 for each rule set: parameter_defaults: ContainerImagePrepare: - push_destination: true set: ... tag: '17.1' tag_from_label: '{version}-{release}' Note If you do not want to use a specific tag for the update, such as 17.1 or 17.1.1 , remove the tag key-value pair and specify tag_from_label only. This uses the installed Red Hat OpenStack Platform version to determine the value for the tag to use as part of the update process. Save this file. 1.6. Disabling fencing in the overcloud Before you update the overcloud, ensure that fencing is disabled. If fencing is deployed in your environment during the Controller nodes update process, the overcloud might detect certain nodes as disabled and attempt fencing operations, which can cause unintended results. If you have enabled fencing in the overcloud, you must temporarily disable fencing for the duration of the update. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: For each Controller node, log in to the Controller node and run the Pacemaker command to disable fencing: USD ssh tripleo-admin@<controller_ip> "sudo pcs property set stonith-enabled=false" Replace <controller_ip> with the IP address of a Controller node. You can find the IP addresses of your Controller nodes at /etc/hosts or /var/lib/mistral . If you use SBD fencing, disable SBD fencing on the pacemaker_remote nodes: Note Ensure that you take note of the original value of the watchdog timer device interval. You must reset the watchdog timer device interval to its original value after you upgrade the control plane nodes. For more information, see Re-enabling fencing in the overcloud . In the fencing.yaml environment file, set the EnableFencing parameter to false to ensure that fencing stays disabled during the update process. Additional Resources Fencing Controller nodes with STONITH 1.7. Firewall rule change In Red Hat OpenStack Platform (RHOSP) 17.1.4, iptables rules were replaced with nftables rules. If your RHOSP templates include firewall rules, for example, tripleo::tripleo_firewall::firewall_rules , you must redefine them by using the ExtraFirewallRules parameter. For more information about using the ExtraFirewallRules parameter, see Adding services to the overcloud firewall in Hardening Red Hat OpenStack Platform .
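Section 1.7 does not show the parameter in use, so the following is a hedged sketch of how a redefined firewall rule might look in a custom environment file. The rule name, port, and file path are illustrative assumptions; see Adding services to the overcloud firewall in Hardening Red Hat OpenStack Platform for the exact syntax supported by your release.

cat > ~/templates/firewall-rules.yaml <<'EOF'
parameter_defaults:
  ExtraFirewallRules:
    '301 allow custom application traffic':
      proto: tcp
      dport: 8445
EOF

Include this environment file with the -e option the next time you run your overcloud deployment command so that the rules are applied as nftables rules.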
|
[
"validation run -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml --group pre-update --skiplist package-version",
"validation run --group pre-update --inventory /home/stack/overcloud-deploy/<stack>/config-download/<stack>/tripleo-ansible-inventory.yaml --skiplist undercloud-service-status",
"podman login <your.registry.local>",
"source ~/stackrc",
"parameter_defaults: RhsmVars: ... rhsm_username: \"myusername\" rhsm_password: \"p@55w0rd!\" rhsm_org_id: \"1234567\" rhsm_pool_ids: \"1a85f9223e3d5e43013e3d6e8ff506fd\" rhsm_method: \"portal\" rhsm_release: \"9.2\"",
"cat > ~/set_release.yaml <<'EOF' - hosts: all gather_facts: false tasks: - name: set release to 9.2 command: subscription-manager release --set=9.2 become: true EOF",
"ansible-playbook -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml -f 25 ~/set_release.yaml --limit <undercloud>, <Controller>, <Compute>",
"sudo subscription-manager release --set=9.2",
"source ~/stackrc",
"parameter_defaults: RhsmVars: rhsm_repos: - rhel-9-for-x86_64-baseos-eus-rpms - rhel-9-for-x86_64-appstream-eus-rpms - rhel-9-for-x86_64-highavailability-eus-rpms - openstack-17.1-for-rhel-9-x86_64-rpms - fast-datapath-for-rhel-9-x86_64-rpms",
"cat > ~/update_rhosp_repos.yaml <<'EOF' - hosts: all gather_facts: false tasks: - name: change osp repos command: subscription-manager repos --enable=openstack-17.1-for-rhel-9-x86_64-rpms become: true EOF",
"ansible-playbook -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml -f 25 ~/update_rhosp_repos.yaml --limit <undercloud>,<Controller>,<Compute>",
"cat > ~/update_ceph_repos.yaml <<'EOF' - hosts: all gather_facts: false tasks: - name: change ceph repos command: subscription-manager repos --enable=openstack-17.1-deployment-tools-for-rhel-9-x86_64-rpms become: true EOF",
"ansible-playbook -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml -f 25 ~/update_ceph_repos.yaml --limit CephStorage",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: tag: '17.1' tag_from_label: '{version}-{release}'",
"source ~/stackrc",
"ssh tripleo-admin@<controller_ip> \"sudo pcs property set stonith-enabled=false\"",
"pcs property set stonith-watchdog-timeout=0 --force"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/performing_a_minor_update_of_red_hat_openstack_platform/assembly_preparing-for-a-minor-update_keeping-updated
|
Chapter 5. Using Operator Lifecycle Manager in disconnected environments
|
Chapter 5. Using Operator Lifecycle Manager in disconnected environments For OpenShift Container Platform clusters in disconnected environments, Operator Lifecycle Manager (OLM) by default cannot access the Red Hat-provided OperatorHub sources hosted on remote registries because those remote sources require full internet connectivity. However, as a cluster administrator you can still enable your cluster to use OLM in a disconnected environment if you have a workstation that has full internet access. The workstation, which requires full internet access to pull the remote OperatorHub content, is used to prepare local mirrors of the remote sources, and push the content to a mirror registry. The mirror registry can be located on a bastion host, which requires connectivity to both your workstation and the disconnected cluster, or a completely disconnected, or airgapped , host, which requires removable media to physically move the mirrored content to the disconnected environment. This guide describes the following process that is required to enable OLM in disconnected environments: Disable the default remote OperatorHub sources for OLM. Use a workstation with full internet access to create and push local mirrors of the OperatorHub content to a mirror registry. Configure OLM to install and manage Operators from local sources on the mirror registry instead of the default remote sources. After enabling OLM in a disconnected environment, you can continue to use your unrestricted workstation to keep your local OperatorHub sources updated as newer versions of Operators are released. Important While OLM can manage Operators from local sources, the ability for a given Operator to run successfully in a disconnected environment still depends on the Operator itself meeting the following criteria: List any related images, or other container images that the Operator might require to perform their functions, in the relatedImages parameter of its ClusterServiceVersion (CSV) object. Reference all specified images by a digest (SHA) and not by a tag. You can search software on the Red Hat Ecosystem Catalog for a list of Red Hat Operators that support running in disconnected mode by filtering with the following selections: Type Containerized application Deployment method Operator Infrastructure features Disconnected Additional resources Red Hat-provided Operator catalogs Enabling your Operator for restricted network environments 5.1. Prerequisites You are logged in to your OpenShift Container Platform cluster as a user with cluster-admin privileges. If you are using OLM in a disconnected environment on IBM Z(R), you must have at least 12 GB allocated to the directory where you place your registry. 5.2. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. You can then configure OperatorHub to use local catalog sources. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. 
From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 5.3. Mirroring an Operator catalog For instructions about mirroring Operator catalogs for use with disconnected clusters, see Mirroring Operator catalogs for use with disconnected clusters . Important As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see Operator Framework packaging format , Managing custom catalogs , and Mirroring images for a disconnected installation by using the oc-mirror plugin v2 . 5.4. Adding a catalog source to a cluster Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface. Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. Prerequisites You built and pushed an index image to a registry. You have access to the cluster as a user with the cluster-admin role. Procedure Create a CatalogSource object that references your index image. If you used the oc adm catalog mirror command to mirror your catalog to a target registry, you can use the generated catalogSource.yaml file in your manifests directory as a starting point. Modify the following to your specifications and save it as a catalogSource.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.18 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m 1 If you mirrored content to local files before uploading to a registry, remove any backslash ( / ) characters from the metadata.name field to avoid an "invalid resource name" error when you create the object. 2 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace. 3 Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Container Platform release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . 4 Specify your index image. 
If you specify a tag after the image name, for example :v4.18 , the catalog source pod uses an image pull policy of Always , meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id> , the image pull policy is IfNotPresent , meaning the pod pulls the image only if it does not already exist on the node. 5 Specify your name or an organization name publishing the catalog. 6 Catalog sources can automatically check for new versions to keep up to date. Use the file to create the CatalogSource object: USD oc apply -f catalogSource.yaml Verify the following resources are created successfully. Check the pods: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h Check the catalog source: USD oc get catalogsource -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s Check the package manifest: USD oc get packagemanifest -n openshift-marketplace Example output NAME CATALOG AGE jaeger-product My Operator Catalog 93s You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console. Additional resources Accessing images for Operators from private registries Image template for custom catalog sources Image pull policy 5.5. steps Updating installed Operators
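After the catalog source is available, you can also install an Operator from it on the command line instead of the web console. The sketch below assumes the jaeger-product package shown in the package manifest output, a cluster-wide installation in the openshift-operators namespace, and a channel named stable; verify the channel and namespace for the Operator you actually install.

cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: jaeger-product
  namespace: openshift-operators
spec:
  channel: stable
  name: jaeger-product
  source: my-operator-catalog
  sourceNamespace: openshift-marketplace
EOF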
|
[
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.18 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/disconnected_environments/olm-restricted-networks
|
Chapter 28. Profiling memory allocation with numastat
|
Chapter 28. Profiling memory allocation with numastat With the numastat tool, you can display statistics over memory allocations in a system. The numastat tool displays data for each NUMA node separately. You can use this information to investigate memory performance of your system or the effectiveness of different memory policies on your system. 28.1. Default numastat statistics By default, the numastat tool displays statistics over these categories of data for each NUMA node: numa_hit The number of pages that were successfully allocated to this node. numa_miss The number of pages that were allocated on this node because of low memory on the intended node. Each numa_miss event has a corresponding numa_foreign event on another node. numa_foreign The number of pages initially intended for this node that were allocated to another node instead. Each numa_foreign event has a corresponding numa_miss event on another node. interleave_hit The number of interleave policy pages successfully allocated to this node. local_node The number of pages successfully allocated on this node by a process on this node. other_node The number of pages allocated on this node by a process on another node. Note High numa_hit values and low numa_miss values (relative to each other) indicate optimal performance. 28.2. Viewing memory allocation with numastat You can view the memory allocation of the system by using the numastat tool. Prerequisites Install the numactl package: Procedure View the memory allocation of your system: Additional resources numastat(8) man page on your system
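Beyond the default per-node counters, the numastat tool from the numactl package also provides per-process and meminfo-style views. A short hedged example follows; the process name pattern qemu is only a placeholder.

# Show per-node memory usage for processes whose command name matches the pattern
numastat -p qemu

# Show system-wide, meminfo-style memory statistics per NUMA node, in MB
numastat -m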
|
[
"dnf install numactl",
"numastat node0 node1 numa_hit 76557759 92126519 numa_miss 30772308 30827638 numa_foreign 30827638 30772308 interleave_hit 106507 103832 local_node 76502227 92086995 other_node 30827840 30867162"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/profiling-memory-allocation-with-numastat_monitoring-and-managing-system-status-and-performance
|
3.18. Checking System Events
|
3.18. Checking System Events This Ruby example retrieves logged system events. # In order to ensure that no events are lost, it is recommended to write # the index of the last processed event, in persistent storage. # Here, it is stored in a file called `index.txt`. In a production environment, # it will likely be stored in a database. INDEX_TXT = 'index.txt'.freeze def write_index(index) File.open(INDEX_TXT, 'w') { |f| f.write(index.to_s) } end def read_index return File.read(INDEX_TXT).to_i if File.exist?(INDEX_TXT) nil end # This is the function that is called to process the events. It prints # the identifier and description of each event. def process_event(event) puts("#{event.id} - #{event.description}") end # Find the root of the tree of services: system_service = connection.system_service # Find the service that manages the collection of events: events_service = system_service.events_service # If no index is stored yet, retrieve the last event and start with it. # Events are ordered by index, in ascending order. `max=1` retrieves only one event, # the last event. unless read_index events = events_service.list(max: 1) unless events.empty? first = events.first process_event(first) write_index(first.id.to_i) end end # This loop retrieves the events, always starting from the last index. It waits # before repeating. The `from` parameter specifies that you want to retrieve # events that are newer than the last index that was processed. Note: the `max` # parameter is not used, so that all pending events will be retrieved. loop do sleep(5) events = events_service.list(from: read_index) events.each do |event| process_event(event) write_index(event.id.to_i) end end For more information, see http://www.rubydoc.info/gems/ovirt-engine-sdk/OvirtSDK4%2FEventsService:list .
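The example assumes that a connection object already exists. A minimal way to create one with the oVirt Ruby SDK is sketched below; the engine URL, credentials, and CA file path are placeholders that you must replace with your own values.

require 'ovirtsdk4'

# Open a connection to the engine API. The event-processing code shown
# above can then use this connection.
connection = OvirtSDK4::Connection.new(
  url:      'https://engine.example.com/ovirt-engine/api',
  username: 'admin@internal',
  password: 'password',
  ca_file:  'ca.pem'
)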
|
[
"In order to ensure that no events are lost, it is recommended to write the index of the last processed event, in persistent storage. Here, it is stored in a file called `index.txt`. In a production environment, it will likely be stored in a database. INDEX_TXT = 'index.txt'.freeze def write_index(index) File.open(INDEX_TXT, 'w') { |f| f.write(index.to_s) } end def read_index return File.read(INDEX_TXT).to_i if File.exist?(INDEX_TXT) nil end This is the function that is called to process the events. It prints the identifier and description of each event. def process_event(event) puts(\"#{event.id} - #{event.description}\") end Find the root of the tree of services: system_service = connection.system_service Find the service that manages the collection of events: events_service = system_service.events_service If no index is stored yet, retrieve the last event and start with it. Events are ordered by index, in ascending order. `max=1` retrieves only one event, the last event. unless read_index events = events_service.list(max: 1) unless events.empty? first = events.first process_event(first) write_index(first.id.to_i) end end This loop retrieves the events, always starting from the last index. It waits before repeating. The `from` parameter specifies that you want to retrieve events that are newer than the last index that was processed. Note: the `max` parameter is not used, so that all pending events will be retrieved. loop do sleep(5) events = events_service.list(from: read_index) events.each do |event| process_event(event) write_index(event.id.to_i) end end"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/ruby_sdk_guide/checking_system_events
|
Chapter 92. MavenArtifact schema reference
|
Chapter 92. MavenArtifact schema reference Used in: Plugin The type property is a discriminator that distinguishes use of the MavenArtifact type from JarArtifact , TgzArtifact , ZipArtifact , OtherArtifact . It must have the value maven for the type MavenArtifact . Property Description repository Maven repository to download the artifact from. Applicable to the maven artifact type only. string group Maven group id. Applicable to the maven artifact type only. string artifact Maven artifact id. Applicable to the maven artifact type only. string version Maven version number. Applicable to the maven artifact type only. string insecure By default, connections using TLS are verified to check they are secure. The server certificate used must be valid, trusted, and contain the server name. By setting this option to true , all TLS verification is disabled and the artifacts will be downloaded, even when the server is considered insecure. boolean type Must be maven . string
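For orientation, the following sketch shows where a MavenArtifact typically appears: in the build.plugins list of a KafkaConnect resource, where it tells the build to download a connector plugin by its Maven coordinates. This fragment is not taken from the schema reference, and the plugin name, registry, and coordinates are placeholders.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ...other KafkaConnect configuration...
  build:
    output:
      type: docker
      image: my-registry.example.com/my-org/my-connect-cluster:latest
    plugins:
      - name: my-connector
        artifacts:
          - type: maven
            repository: https://repo1.maven.org/maven2
            group: org.apache.camel.kafkaconnector
            artifact: camel-http-kafka-connector
            version: 0.11.0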
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-MavenArtifact-reference
|
Chapter 15. Changing a subscription service
|
Chapter 15. Changing a subscription service To manage the subscriptions, you can register a RHEL system with either Red Hat Subscription Management Server or Red Hat Satellite Server. If required, you can change the subscription service at a later point. To change the subscription service under which you are registered, unregister the system from the current service and then register it with a new service. To receive the system updates, register your system with either of the management servers. This section contains information about how to unregister your RHEL system from the Red Hat Subscription Management Server and Red Hat Satellite Server. Prerequisites You have registered your system with any one of the following: Red Hat Subscription Management Server Red Hat Satellite Server version 6.11 15.1. Unregistering from Subscription Management Server This section contains information about how to unregister a RHEL system from Red Hat Subscription Management Server, using the command line and the Subscription Manager user interface. 15.1.1. Unregistering using command line Use the unregister command to unregister a RHEL system from Red Hat Subscription Management Server. Procedure Run the unregister command as a root user, without any additional parameters. When prompted, provide a root password. The system is unregistered from the Subscription Management Server, and the status 'The system is currently not registered' is displayed with the Register button enabled. To continue uninterrupted services, re-register the system with either of the management services. If you do not register the system with a management service, you may fail to receive the system updates. For more information about registering a system, see Registering your system using the command line . Additional resources Using and Configuring Red Hat Subscription Manager 15.1.2. Unregistering using Subscription Manager user interface You can unregister a RHEL system from Red Hat Subscription Management Server by using the Subscription Manager user interface. Procedure Log in to your system. From the top left-hand side of the window, click Activities . From the menu options, click the Show Applications icon. Click the Red Hat Subscription Manager icon, or enter Red Hat Subscription Manager in the search. Enter your administrator password in the Authentication Required dialog box. Authentication is required to perform privileged tasks on the system. The Subscriptions window appears and displays the current status of Subscriptions, System Purpose, and installed products. Unregistered products display a red X. Click the Unregister button. The system is unregistered from the Subscription Management Server, and the status 'The system is currently not registered' is displayed with the Register button enabled. To continue uninterrupted services, re-register the system with either of the management services. If you do not register the system with a management service, you may fail to receive the system updates. For more information about registering a system, see Registering your system using the Subscription Manager User Interface . Additional resources Using and Configuring Red Hat Subscription Manager 15.2. Unregistering from Satellite Server To unregister a Red Hat Enterprise Linux system from Satellite Server, remove the system from Satellite Server. For more information, see Removing a Host from Red Hat Satellite .
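If you want to confirm the result from the command line, you can check the registration status immediately after unregistering, and re-register later to resume receiving updates. The following commands are a sketch that is not part of the original procedure; the credentials are placeholders.

# Unregister, then confirm that the system no longer reports a registration.
subscription-manager unregister
subscription-manager status

# Re-register later with your Red Hat account to continue receiving updates.
subscription-manager register --username <username> --password <password>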
|
[
"subscription-manager unregister"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_from_installation_media/changing-a-subscripton-service_rhel-installer
|
Chapter 6. Administer
|
Chapter 6. Administer 6.1. Global configuration The OpenShift Serverless Operator manages the global configuration of a Knative installation, including propagating values from the KnativeServing and KnativeEventing custom resources to system config maps . Any updates to config maps which are applied manually are overwritten by the Operator. However, modifying the Knative custom resources allows you to set values for these config maps. Knative has multiple config maps that are named with the prefix config- . All Knative config maps are created in the same namespace as the custom resource that they apply to. For example, if the KnativeServing custom resource is created in the knative-serving namespace, all Knative Serving config maps are also created in this namespace. The spec.config in the Knative custom resources have one <name> entry for each config map, named config-<name> , with a value which is be used for the config map data . 6.1.1. Configuring the default channel implementation You can use the default-ch-webhook config map to specify the default channel implementation of Knative Eventing. You can specify the default channel implementation for the entire cluster or for one or more namespaces. Currently the InMemoryChannel and KafkaChannel channel types are supported. Prerequisites You have administrator permissions on OpenShift Container Platform. You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster. If you want to use Kafka channels as the default channel implementation, you must also install the KnativeKafka CR on your cluster. Procedure Modify the KnativeEventing custom resource to add configuration details for the default-ch-webhook config map: apiVersion: operator.knative.dev/v1alpha1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: 1 default-ch-webhook: 2 default-ch-config: | clusterDefault: 3 apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel spec: delivery: backoffDelay: PT0.5S backoffPolicy: exponential retry: 5 namespaceDefaults: 4 my-namespace: apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel spec: numPartitions: 1 replicationFactor: 1 1 In spec.config , you can specify the config maps that you want to add modified configurations for. 2 The default-ch-webhook config map can be used to specify the default channel implementation for the cluster or for one or more namespaces. 3 The cluster-wide default channel type configuration. In this example, the default channel implementation for the cluster is InMemoryChannel . 4 The namespace-scoped default channel type configuration. In this example, the default channel implementation for the my-namespace namespace is KafkaChannel . Important Configuring a namespace-specific default overrides any cluster-wide settings. 6.1.2. Configuring the default broker backing channel If you are using a channel-based broker, you can set the default backing channel type for the broker to either InMemoryChannel or KafkaChannel . Prerequisites You have administrator permissions on OpenShift Container Platform. You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster. You have installed the OpenShift ( oc ) CLI. If you want to use Kafka channels as the default backing channel type, you must also install the KnativeKafka CR on your cluster. 
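The Operator propagates the values under spec.config into the corresponding Knative config maps, so you can confirm the default channel settings from the previous procedure by reading the default-ch-webhook config map back; the same kind of check works for the config map modified in the procedure below. This verification is a sketch that is not part of this guide and assumes that Knative Eventing is installed in the knative-eventing namespace.

# Show the default channel configuration that the Operator wrote into the
# default-ch-webhook config map.
oc get configmap default-ch-webhook -n knative-eventing \
  -o jsonpath='{.data.default-ch-config}'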
Procedure Modify the KnativeEventing custom resource (CR) to add configuration details for the config-br-default-channel config map: apiVersion: operator.knative.dev/v1alpha1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: 1 config-br-default-channel: channel-template-spec: | apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel 2 spec: numPartitions: 6 3 replicationFactor: 3 4 1 In spec.config , you can specify the config maps that you want to add modified configurations for. 2 The default backing channel type configuration. In this example, the default channel implementation for the cluster is KafkaChannel . 3 The number of partitions for the Kafka channel that backs the broker. 4 The replication factor for the Kafka channel that backs the broker. Apply the updated KnativeEventing CR: USD oc apply -f <filename> 6.1.3. Configuring the default broker class You can use the config-br-defaults config map to specify default broker class settings for Knative Eventing. You can specify the default broker class for the entire cluster or for one or more namespaces. Currently the MTChannelBasedBroker and Kafka broker types are supported. Prerequisites You have administrator permissions on OpenShift Container Platform. You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster. If you want to use Kafka broker as the default broker implementation, you must also install the KnativeKafka CR on your cluster. Procedure Modify the KnativeEventing custom resource to add configuration details for the config-br-defaults config map: apiVersion: operator.knative.dev/v1alpha1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: defaultBrokerClass: Kafka 1 config: 2 config-br-defaults: 3 default-br-config: | clusterDefault: 4 brokerClass: Kafka apiVersion: v1 kind: ConfigMap name: kafka-broker-config 5 namespace: knative-eventing 6 namespaceDefaults: 7 my-namespace: brokerClass: MTChannelBasedBroker apiVersion: v1 kind: ConfigMap name: config-br-default-channel 8 namespace: knative-eventing 9 ... 1 The default broker class for Knative Eventing. 2 In spec.config , you can specify the config maps that you want to add modified configurations for. 3 The config-br-defaults config map specifies the default settings for any broker that does not specify spec.config settings or a broker class. 4 The cluster-wide default broker class configuration. In this example, the default broker class implementation for the cluster is Kafka . 5 The kafka-broker-config config map specifies default settings for the Kafka broker. See "Configuring Kafka broker settings" in the "Additional resources" section. 6 The namespace where the kafka-broker-config config map exists. 7 The namespace-scoped default broker class configuration. In this example, the default broker class implementation for the my-namespace namespace is MTChannelBasedBroker . You can specify default broker class implementations for multiple namespaces. 8 The config-br-default-channel config map specifies the default backing channel for the broker. See "Configuring the default broker backing channel" in the "Additional resources" section. 9 The namespace where the config-br-default-channel config map exists. Important Configuring a namespace-specific default overrides any cluster-wide settings. Additional resources Configuring Kafka broker settings Configuring the default broker backing channel 6.1.4. 
Enabling scale-to-zero Knative Serving provides automatic scaling, or autoscaling , for applications to match incoming demand. You can use the enable-scale-to-zero spec to enable or disable scale-to-zero globally for applications on the cluster. Prerequisites You have installed OpenShift Serverless Operator and Knative Serving on your cluster. You have cluster administrator permissions. You are using the default Knative Pod Autoscaler. The scale to zero feature is not available if you are using the Kubernetes Horizontal Pod Autoscaler. Procedure Modify the enable-scale-to-zero spec in the KnativeServing custom resource (CR): Example KnativeServing CR apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: enable-scale-to-zero: "false" 1 1 The enable-scale-to-zero spec can be either "true" or "false" . If set to true, scale-to-zero is enabled. If set to false, applications are scaled down to the configured minimum scale bound . The default value is "true" . 6.1.5. Configuring the scale-to-zero grace period Knative Serving provides automatic scaling down to zero pods for applications. You can use the scale-to-zero-grace-period spec to define an upper bound time limit that Knative waits for scale-to-zero machinery to be in place before the last replica of an application is removed. Prerequisites You have installed OpenShift Serverless Operator and Knative Serving on your cluster. You have cluster administrator permissions. You are using the default Knative Pod Autoscaler. The scale to zero feature is not available if you are using the Kubernetes Horizontal Pod Autoscaler. Procedure Modify the scale-to-zero-grace-period spec in the KnativeServing custom resource (CR): Example KnativeServing CR apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: scale-to-zero-grace-period: "30s" 1 1 The grace period time in seconds. The default value is 30 seconds. 6.1.6. Overriding system deployment configurations You can override the default configurations for some specific deployments by modifying the deployments spec in the KnativeServing and KnativeEventing custom resources (CRs). 6.1.6.1. Overriding Knative Serving system deployment configurations You can override the default configurations for some specific deployments by modifying the deployments spec in the KnativeServing custom resource (CR). Currently, overriding default configuration settings is supported for the resources , replicas , labels , annotations , and nodeSelector fields. In the following example, a KnativeServing CR overrides the webhook deployment so that: The deployment has specified CPU and memory resource limits. The deployment has 3 replicas. The example-label: label label is added. The example-annotation: annotation annotation is added. The nodeSelector field is set to select nodes with the disktype: hdd label. Note The KnativeServing CR label and annotation settings override the deployment's labels and annotations for both the deployment itself and the resulting pods. KnativeServing CR example apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: ks namespace: knative-serving spec: high-availability: replicas: 2 deployments: - name: webhook resources: - container: webhook requests: cpu: 300m memory: 60Mi limits: cpu: 1000m memory: 1000Mi replicas: 3 labels: example-label: label annotations: example-annotation: annotation nodeSelector: disktype: hdd 6.1.6.2. 
Overriding Knative Eventing system deployment configurations You can override the default configurations for some specific deployments by modifying the deployments spec in the KnativeEventing custom resource (CR). Currently, overriding default configuration settings is supported for the eventing-controller , eventing-webhook , and imc-controller fields. Important The replicas spec cannot override the number of replicas for deployments that use the Horizontal Pod Autoscaler (HPA), and does not work for the eventing-webhook deployment. In the following example, a KnativeEventing CR overrides the eventing-controller deployment so that: The deployment has specified CPU and memory resource limits. The deployment has 3 replicas. The example-label: label label is added. The example-annotation: annotation annotation is added. The nodeSelector field is set to select nodes with the disktype: hdd label. KnativeEventing CR example apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: deployments: - name: eventing-controller resources: - container: eventing-controller requests: cpu: 300m memory: 100Mi limits: cpu: 1000m memory: 250Mi replicas: 3 labels: example-label: label annotations: example-annotation: annotation nodeSelector: disktype: hdd Note The KnativeEventing CR label and annotation settings override the deployment's labels and annotations for both the deployment itself and the resulting pods. 6.1.7. Configuring the EmptyDir extension emptyDir volumes are empty volumes that are created when a pod is created, and are used to provide temporary working disk space. emptyDir volumes are deleted when the pod they were created for is deleted. The kubernetes.podspec-volumes-emptydir extension controls whether emptyDir volumes can be used with Knative Serving. To enable using emptyDir volumes, you must modify the KnativeServing custom resource (CR) to include the following YAML: Example KnativeServing CR apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving spec: config: features: kubernetes.podspec-volumes-emptydir: enabled ... 6.1.8. HTTPS redirection global settings HTTPS redirection provides redirection for incoming HTTP requests. These redirected HTTP requests are encrypted. You can enable HTTPS redirection for all services on the cluster by configuring the httpProtocol spec for the KnativeServing custom resource (CR). Example KnativeServing CR that enables HTTPS redirection apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving spec: config: network: httpProtocol: "redirected" ... 6.1.9. Setting the URL scheme for external routes The URL scheme of external routes defaults to HTTPS for enhanced security. This scheme is determined by the default-external-scheme key in the KnativeServing custom resource (CR) spec. Default spec ... spec: config: network: default-external-scheme: "https" ... You can override the default spec to use HTTP by modifying the default-external-scheme key: HTTP override spec ... spec: config: network: default-external-scheme: "http" ... 6.1.10. Setting the Kourier Gateway service type The Kourier Gateway is exposed by default as the ClusterIP service type. This service type is determined by the service-type ingress spec in the KnativeServing custom resource (CR). Default spec ... spec: ingress: kourier: service-type: ClusterIP ... 
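If you are unsure which service type is currently in effect, you can inspect the Kourier service directly before deciding whether to override it. This check is a sketch that is not part of this guide; in OpenShift Serverless the Kourier gateway service is typically named kourier and lives in the knative-serving-ingress namespace, so adjust these names if your installation differs.

# Print the service type currently used to expose the Kourier gateway.
oc get service kourier -n knative-serving-ingress \
  -o jsonpath='{.spec.type}{"\n"}'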
You can override the default service type to use a load balancer service type instead by modifying the service-type spec: LoadBalancer override spec ... spec: ingress: kourier: service-type: LoadBalancer ... 6.1.11. Enabling PVC support Some serverless applications need permanent data storage. To achieve this, you can configure persistent volume claims (PVCs) for your Knative services. Important PVC support for Knative services is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . Procedure To enable Knative Serving to use PVCs and write to them, modify the KnativeServing custom resource (CR) to include the following YAML: Enabling PVCs with write access ... spec: config: features: "kubernetes.podspec-persistent-volume-claim": enabled "kubernetes.podspec-persistent-volume-write": enabled ... The kubernetes.podspec-persistent-volume-claim extension controls whether persistent volumes (PVs) can be used with Knative Serving. The kubernetes.podspec-persistent-volume-write extension controls whether PVs are available to Knative Serving with the write access. To claim a PV, modify your service to include the PV configuration. For example, you might have a persistent volume claim with the following configuration: Note Use the storage class that supports the access mode that you are requesting. For example, you can use the ocs-storagecluster-cephfs class for the ReadWriteMany access mode. PersistentVolumeClaim configuration apiVersion: v1 kind: PersistentVolumeClaim metadata: name: example-pv-claim namespace: my-ns spec: accessModes: - ReadWriteMany storageClassName: ocs-storagecluster-cephfs resources: requests: storage: 1Gi In this case, to claim a PV with write access, modify your service as follows: Knative service PVC configuration apiVersion: serving.knative.dev/v1 kind: Service metadata: namespace: my-ns ... spec: template: spec: containers: ... volumeMounts: 1 - mountPath: /data name: mydata readOnly: false volumes: - name: mydata persistentVolumeClaim: 2 claimName: example-pv-claim readOnly: false 3 1 Volume mount specification. 2 Persistent volume claim specification. 3 Flag that enables read-only access. Note To successfully use persistent storage in Knative services, you need additional configuration, such as the user permissions for the Knative container user. 6.1.12. Enabling init containers Init containers are specialized containers that are run before application containers in a pod. They are generally used to implement initialization logic for an application, which may include running setup scripts or downloading required configurations. You can enable the use of init containers for Knative services by modifying the KnativeServing custom resource (CR). Important Init containers for Knative services is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . Note Init containers may cause longer application start-up times and should be used with caution for serverless applications, which are expected to scale up and down frequently. Prerequisites You have installed OpenShift Serverless Operator and Knative Serving on your cluster. You have cluster administrator permissions. Procedure Enable the use of init containers by adding the kubernetes.podspec-init-containers flag to the KnativeServing CR: Example KnativeServing CR apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving spec: config: features: kubernetes.podspec-init-containers: enabled ... 6.1.13. Tag-to-digest resolution If the Knative Serving controller has access to the container registry, Knative Serving resolves image tags to a digest when you create a revision of a service. This is known as tag-to-digest resolution , and helps to provide consistency for deployments. To give the controller access to the container registry on OpenShift Container Platform, you must create a secret and then configure controller custom certificates. You can configure controller custom certificates by modifying the controller-custom-certs spec in the KnativeServing custom resource (CR). The secret must reside in the same namespace as the KnativeServing CR. If a secret is not included in the KnativeServing CR, this setting defaults to using public key infrastructure (PKI). When using PKI, the cluster-wide certificates are automatically injected into the Knative Serving controller by using the config-service-sa config map. The OpenShift Serverless Operator populates the config-service-sa config map with cluster-wide certificates and mounts the config map as a volume to the controller. 6.1.13.1. Configuring tag-to-digest resolution by using a secret If the controller-custom-certs spec uses the Secret type, the secret is mounted as a secret volume. Knative components consume the secret directly, assuming that the secret has the required certificates. Prerequisites You have cluster administrator permissions on OpenShift Container Platform. You have installed the OpenShift Serverless Operator and Knative Serving on your cluster. Procedure Create a secret: Example command USD oc -n knative-serving create secret generic custom-secret --from-file=<secret_name>.crt=<path_to_certificate> Configure the controller-custom-certs spec in the KnativeServing custom resource (CR) to use the Secret type: Example KnativeServing CR apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: controller-custom-certs: name: custom-secret type: Secret 6.1.14. Additional resources Managing resources from custom resource definitions Understanding persistent storage Configuring a custom PKI 6.2. Configuring Knative Kafka Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Kafka provides options for event source, channel, broker, and event sink capabilities. 
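Before configuring any of the Kafka capabilities described in this section, you can confirm that the OpenShift Serverless Operator has made the KnativeKafka API available on your cluster. This check is a sketch that is not part of this guide; the CRD name is derived from the operator.serverless.openshift.io API group used in the examples that follow.

# Confirm that the KnativeKafka custom resource definition is installed,
# and list any KnativeKafka resources that already exist.
oc get crd knativekafkas.operator.serverless.openshift.io
oc get knativekafka -n knative-eventing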
In addition to the Knative Eventing components that are provided as part of a core OpenShift Serverless installation, cluster administrators can install the KnativeKafka custom resource (CR). Note Knative Kafka is not currently supported for IBM Z and IBM Power Systems. The KnativeKafka CR provides users with additional options, such as: Kafka source Kafka channel Kafka broker (Technology Preview) Kafka sink (Technology Preview) 6.2.1. Installing Knative Kafka Knative Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Knative Kafka functionality is available in an OpenShift Serverless installation if you have installed the KnativeKafka custom resource. Prerequisites You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster. You have access to a Red Hat AMQ Streams cluster. Install the OpenShift CLI ( oc ) if you want to use the verification steps. You have cluster administrator permissions on OpenShift Container Platform. You are logged in to the OpenShift Container Platform web console. Procedure In the Administrator perspective, navigate to Operators Installed Operators . Check that the Project dropdown at the top of the page is set to Project: knative-eventing . In the list of Provided APIs for the OpenShift Serverless Operator, find the Knative Kafka box and click Create Instance . Configure the KnativeKafka object in the Create Knative Kafka page. Important To use the Kafka channel, source, broker, or sink on your cluster, you must toggle the enabled switch for the options you want to use to true . These switches are set to false by default. Additionally, to use the Kafka channel, broker, or sink you must specify the bootstrap servers. Example KnativeKafka custom resource apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: channel: enabled: true 1 bootstrapServers: <bootstrap_servers> 2 source: enabled: true 3 broker: enabled: true 4 defaultConfig: bootstrapServers: <bootstrap_servers> 5 numPartitions: <num_partitions> 6 replicationFactor: <replication_factor> 7 sink: enabled: true 8 1 Enables developers to use the KafkaChannel channel type in the cluster. 2 A comma-separated list of bootstrap servers from your AMQ Streams cluster. 3 Enables developers to use the KafkaSource event source type in the cluster. 4 Enables developers to use the Knative Kafka broker implementation in the cluster. 5 A comma-separated list of bootstrap servers from your Red Hat AMQ Streams cluster. 6 Defines the number of partitions of the Kafka topics, backed by the Broker objects. The default is 10 . 7 Defines the replication factor of the Kafka topics, backed by the Broker objects. The default is 3 . 8 Enables developers to use a Kafka sink in the cluster. Note The replicationFactor value must be less than or equal to the number of nodes of your Red Hat AMQ Streams cluster. Using the form is recommended for simpler configurations that do not require full control of KnativeKafka object creation. Editing the YAML is recommended for more complex configurations that require full control of KnativeKafka object creation. You can access the YAML by clicking the Edit YAML link in the top right of the Create Knative Kafka page. Click Create after you have completed any of the optional configurations for Kafka. 
You are automatically directed to the Knative Kafka tab where knative-kafka is in the list of resources. Verification Click on the knative-kafka resource in the Knative Kafka tab. You are automatically directed to the Knative Kafka Overview page. View the list of Conditions for the resource and confirm that they have a status of True . If the conditions have a status of Unknown or False , wait a few moments to refresh the page. Check that the Knative Kafka resources have been created: USD oc get pods -n knative-eventing Example output NAME READY STATUS RESTARTS AGE kafka-broker-dispatcher-7769fbbcbb-xgffn 2/2 Running 0 44s kafka-broker-receiver-5fb56f7656-fhq8d 2/2 Running 0 44s kafka-channel-dispatcher-84fd6cb7f9-k2tjv 2/2 Running 0 44s kafka-channel-receiver-9b7f795d5-c76xr 2/2 Running 0 44s kafka-controller-6f95659bf6-trd6r 2/2 Running 0 44s kafka-source-dispatcher-6bf98bdfff-8bcsn 2/2 Running 0 44s kafka-webhook-eventing-68dc95d54b-825xs 2/2 Running 0 44s 6.2.2. Security configuration for Knative Kafka Kafka clusters are generally secured by using the TLS or SASL authentication methods. You can configure a Kafka broker or channel to work against a protected Red Hat AMQ Streams cluster by using TLS or SASL. Note Red Hat recommends that you enable both SASL and TLS together. 6.2.2.1. Configuring TLS authentication for Kafka brokers Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. TLS is the only supported method of traffic encryption for Knative Kafka. Prerequisites You have cluster administrator permissions on OpenShift Container Platform. The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have a Kafka cluster CA certificate stored as a .pem file. You have a Kafka cluster client certificate and a key stored as .pem files. Install the OpenShift CLI ( oc ). Procedure Create the certificate files as a secret in the knative-eventing namespace: USD oc create secret -n knative-eventing generic <secret_name> \ --from-literal=protocol=SSL \ --from-file=ca.crt=caroot.pem \ --from-file=user.crt=certificate.pem \ --from-file=user.key=key.pem Important Use the key names ca.crt , user.crt , and user.key . Do not change them. Edit the KnativeKafka CR and add a reference to your secret in the broker spec: apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name> ... 6.2.2.2. Configuring SASL authentication for Kafka brokers Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster, otherwise events cannot be produced or consumed. Prerequisites You have cluster administrator permissions on OpenShift Container Platform. The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. 
You have a username and password for a Kafka cluster. You have chosen the SASL mechanism to use, for example, PLAIN , SCRAM-SHA-256 , or SCRAM-SHA-512 . If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster. Install the OpenShift CLI ( oc ). Procedure Create the certificate files as a secret in the knative-eventing namespace: USD oc create secret -n knative-eventing generic <secret_name> \ --from-literal=protocol=SASL_SSL \ --from-literal=sasl.mechanism=<sasl_mechanism> \ --from-file=ca.crt=caroot.pem \ --from-literal=password="SecretPassword" \ --from-literal=user="my-sasl-user" Use the key names ca.crt , password , and sasl.mechanism . Do not change them. If you want to use SASL with public CA certificates, you must use the tls.enabled=true flag, rather than the ca.crt argument, when creating the secret. For example: USD oc create secret -n <namespace> generic <kafka_auth_secret> \ --from-literal=tls.enabled=true \ --from-literal=password="SecretPassword" \ --from-literal=saslType="SCRAM-SHA-512" \ --from-literal=user="my-sasl-user" Edit the KnativeKafka CR and add a reference to your secret in the broker spec: apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name> ... 6.2.2.3. Configuring TLS authentication for Kafka channels Transport Layer Security (TLS) is used by Apache Kafka clients and servers to encrypt traffic between Knative and Kafka, as well as for authentication. TLS is the only supported method of traffic encryption for Knative Kafka. Prerequisites You have cluster administrator permissions on OpenShift Container Platform. The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have a Kafka cluster CA certificate stored as a .pem file. You have a Kafka cluster client certificate and a key stored as .pem files. Install the OpenShift CLI ( oc ). Procedure Create the certificate files as secrets in your chosen namespace: USD oc create secret -n <namespace> generic <kafka_auth_secret> \ --from-file=ca.crt=caroot.pem \ --from-file=user.crt=certificate.pem \ --from-file=user.key=key.pem Important Use the key names ca.crt , user.crt , and user.key . Do not change them. Start editing the KnativeKafka custom resource: USD oc edit knativekafka Reference your secret and the namespace of the secret: apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: <kafka_auth_secret> authSecretNamespace: <kafka_auth_secret_namespace> bootstrapServers: <bootstrap_servers> enabled: true source: enabled: true Note Make sure to specify the matching port in the bootstrap server. For example: apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: tls-user authSecretNamespace: kafka bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9094 enabled: true source: enabled: true 6.2.2.4. Configuring SASL authentication for Kafka channels Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. 
If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster, otherwise events cannot be produced or consumed. Prerequisites You have cluster administrator permissions on OpenShift Container Platform. The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have a username and password for a Kafka cluster. You have chosen the SASL mechanism to use, for example, PLAIN , SCRAM-SHA-256 , or SCRAM-SHA-512 . If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster. Install the OpenShift CLI ( oc ). Procedure Create the certificate files as secrets in your chosen namespace: USD oc create secret -n <namespace> generic <kafka_auth_secret> \ --from-file=ca.crt=caroot.pem \ --from-literal=password="SecretPassword" \ --from-literal=saslType="SCRAM-SHA-512" \ --from-literal=user="my-sasl-user" Use the key names ca.crt , password , and sasl.mechanism . Do not change them. If you want to use SASL with public CA certificates, you must use the tls.enabled=true flag, rather than the ca.crt argument, when creating the secret. For example: USD oc create secret -n <namespace> generic <kafka_auth_secret> \ --from-literal=tls.enabled=true \ --from-literal=password="SecretPassword" \ --from-literal=saslType="SCRAM-SHA-512" \ --from-literal=user="my-sasl-user" Start editing the KnativeKafka custom resource: USD oc edit knativekafka Reference your secret and the namespace of the secret: apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: <kafka_auth_secret> authSecretNamespace: <kafka_auth_secret_namespace> bootstrapServers: <bootstrap_servers> enabled: true source: enabled: true Note Make sure to specify the matching port in the bootstrap server. For example: apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: scram-user authSecretNamespace: kafka bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9093 enabled: true source: enabled: true 6.2.2.5. Configuring SASL authentication for Kafka sources Simple Authentication and Security Layer (SASL) is used by Apache Kafka for authentication. If you use SASL authentication on your cluster, users must provide credentials to Knative for communicating with the Kafka cluster, otherwise events cannot be produced or consumed. Prerequisites You have cluster or dedicated administrator permissions on OpenShift Container Platform. The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka CR are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have a username and password for a Kafka cluster. You have chosen the SASL mechanism to use, for example, PLAIN , SCRAM-SHA-256 , or SCRAM-SHA-512 . If TLS is enabled, you also need the ca.crt certificate file for the Kafka cluster. You have installed the OpenShift ( oc ) CLI. 
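If your SASL user is managed by Red Hat AMQ Streams, the password required by the procedure below is stored in the secret that the User Operator creates for the KafkaUser. The following commands are a sketch that is not part of the original procedure; the my-sasl-user, my-cluster, and kafka names are placeholders.

# Extract the SCRAM password generated for a KafkaUser by the User Operator.
oc get secret my-sasl-user -n kafka \
  -o jsonpath='{.data.password}' | base64 -d

# If TLS is enabled, also extract the cluster CA certificate.
oc get secret my-cluster-cluster-ca-cert -n kafka \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > caroot.pem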
Procedure Create the certificate files as secrets in your chosen namespace: USD oc create secret -n <namespace> generic <kafka_auth_secret> \ --from-file=ca.crt=caroot.pem \ --from-literal=password="SecretPassword" \ --from-literal=saslType="SCRAM-SHA-512" \ 1 --from-literal=user="my-sasl-user" 1 The SASL type can be PLAIN , SCRAM-SHA-256 , or SCRAM-SHA-512 . Create or modify your Kafka source so that it contains the following spec configuration: apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: example-source spec: ... net: sasl: enable: true user: secretKeyRef: name: <kafka_auth_secret> key: user password: secretKeyRef: name: <kafka_auth_secret> key: password saslType: secretKeyRef: name: <kafka_auth_secret> key: saslType tls: enable: true caCert: 1 secretKeyRef: name: <kafka_auth_secret> key: ca.crt ... 1 The caCert spec is not required if you are using a public cloud Kafka service, such as Red Hat OpenShift Streams for Apache Kafka. 6.2.3. Configuring Kafka broker settings You can configure the replication factor, bootstrap servers, and the number of topic partitions for a Kafka broker, by creating a config map and referencing this config map in the Kafka Broker object. Important Kafka broker is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . Prerequisites You have cluster or dedicated administrator permissions on OpenShift Container Platform. The OpenShift Serverless Operator, Knative Eventing, and the KnativeKafka custom resource (CR) are installed on your OpenShift Container Platform cluster. You have created a project or have access to a project that has the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift CLI ( oc ). Procedure Modify the kafka-broker-config config map, or create your own config map that contains the following configuration: apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <namespace> 2 data: default.topic.partitions: <integer> 3 default.topic.replication.factor: <integer> 4 bootstrap.servers: <list_of_servers> 5 1 The config map name. 2 The namespace where the config map exists. 3 The number of topic partitions for the Kafka broker. This controls how quickly events can be sent to the broker. A higher number of partitions requires greater compute resources. 4 The replication factor of topic messages. This prevents against data loss. A higher replication factor requires greater compute resources and more storage. 5 A comma separated list of bootstrap servers. This can be inside or outside of the OpenShift Container Platform cluster, and is a list of Kafka clusters that the broker receives events from and sends events to. Important The default.topic.replication.factor value must be less than or equal to the number of Kafka broker instances in your cluster. For example, if you only have one Kafka broker, the default.topic.replication.factor value should not be more than "1" . 
Example Kafka broker config map apiVersion: v1 kind: ConfigMap metadata: name: kafka-broker-config namespace: knative-eventing data: default.topic.partitions: "10" default.topic.replication.factor: "3" bootstrap.servers: "my-cluster-kafka-bootstrap.kafka:9092" Apply the config map: USD oc apply -f <config_map_filename> Specify the config map for the Kafka Broker object: Example Broker object apiVersion: eventing.knative.dev/v1 kind: Broker metadata: name: <broker_name> 1 namespace: <namespace> 2 annotations: eventing.knative.dev/broker.class: Kafka 3 spec: config: apiVersion: v1 kind: ConfigMap name: <config_map_name> 4 namespace: <namespace> 5 ... 1 The broker name. 2 The namespace where the broker exists. 3 The broker class annotation. In this example, the broker is a Kafka broker that uses the class value Kafka . 4 The config map name. 5 The namespace where the config map exists. Apply the broker: USD oc apply -f <broker_filename> Additional resources Creating brokers 6.2.4. Additional resources Red Hat AMQ Streams documentation TLS and SASL on Kafka 6.3. Serverless components in the Administrator perspective If you do not want to switch to the Developer perspective in the OpenShift Container Platform web console or use the Knative ( kn ) CLI or YAML files, you can create Knative components by using the Administator perspective of the OpenShift Container Platform web console. 6.3.1. Creating serverless applications using the Administrator perspective Serverless applications are created and deployed as Kubernetes services, defined by a route and a configuration, and contained in a YAML file. To deploy a serverless application using OpenShift Serverless, you must create a Knative Service object. Example Knative Service object YAML file apiVersion: serving.knative.dev/v1 kind: Service metadata: name: hello 1 namespace: default 2 spec: template: spec: containers: - image: docker.io/openshift/hello-openshift 3 env: - name: RESPONSE 4 value: "Hello Serverless!" 1 The name of the application. 2 The namespace the application uses. 3 The image of the application. 4 The environment variable printed out by the sample application. After the service is created and the application is deployed, Knative creates an immutable revision for this version of the application. Knative also performs network programming to create a route, ingress, service, and load balancer for your application and automatically scales your pods up and down based on traffic. Prerequisites To create serverless applications using the Administrator perspective, ensure that you have completed the following steps. The OpenShift Serverless Operator and Knative Serving are installed. You have logged in to the web console and are in the Administrator perspective. Procedure Navigate to the Serverless Serving page. In the Create list, select Service . Manually enter YAML or JSON definitions, or by dragging and dropping a file into the editor. Click Create . 6.3.2. Creating an event source by using the Administrator perspective A Knative event source can be any Kubernetes object that generates or imports cloud events, and relays those events to another endpoint, known as a sink . Sourcing events is critical to developing a distributed system that reacts to events. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Administrator perspective. 
You have cluster administrator permissions for OpenShift Container Platform. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless Eventing . In the Create list, select Event Source . You will be directed to the Event Sources page. Select the event source type that you want to create. 6.3.3. Creating a broker by using the Administrator perspective Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Administrator perspective. You have cluster administrator permissions for OpenShift Container Platform. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless Eventing . In the Create list, select Broker . You will be directed to the Create Broker page. Optional: Modify the YAML configuration for the broker. Click Create . 6.3.4. Creating a trigger by using the Administrator perspective Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Administrator perspective. You have cluster administrator permissions for OpenShift Container Platform. You have created a Knative broker. You have created a Knative service to use as a subscriber. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless Eventing . In the Broker tab, select the Options menu for the broker that you want to add a trigger to. Click Add Trigger in the list. In the Add Trigger dialogue box, select a Subscriber for the trigger. The subscriber is the Knative service that will receive events from the broker. Click Add . 6.3.5. Creating a channel by using the Administrator perspective Channels are custom resources that define a single event-forwarding and persistence layer. After events have been sent to a channel from an event source or producer, these events can be sent to multiple Knative services or other sinks by using a subscription. You can create channels by instantiating a supported Channel object, and configure re-delivery attempts by modifying the delivery spec in a Subscription object. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Administrator perspective. You have cluster administrator permissions for OpenShift Container Platform. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless Eventing . In the Create list, select Channel . You will be directed to the Channel page. Select the type of Channel object that you want to create in the Type list. 
Note Currently only InMemoryChannel channel objects are supported by default. Kafka channels are available if you have installed Knative Kafka on OpenShift Serverless. Click Create . 6.3.6. Creating a subscription by using the Administrator perspective After you have created a channel and an event sink, also known as a subscriber , you can create a subscription to enable event delivery. Subscriptions are created by configuring a Subscription object, which specifies the channel and the subscriber to deliver events to. You can also specify some subscriber-specific options, such as how to handle failures. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Administrator perspective. You have cluster administrator permissions for OpenShift Container Platform. You have created a Knative channel. You have created a Knative service to use as a subscriber. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless Eventing . In the Channel tab, select the Options menu for the channel that you want to add a subscription to. Click Add Subscription in the list. In the Add Subscription dialogue box, select a Subscriber for the subscription. The subscriber is the Knative service that receives events from the channel. Click Add . 6.3.7. Additional resources Serverless applications Event sources Brokers Triggers Channels and subscriptions 6.4. Integrating Service Mesh with OpenShift Serverless The OpenShift Serverless Operator provides Kourier as the default ingress for Knative. However, you can use Service Mesh with OpenShift Serverless whether Kourier is enabled or not. Integrating with Kourier disabled allows you to configure additional networking and routing options that the Kourier ingress does not support, such as mTLS functionality. Important OpenShift Serverless only supports the use of Red Hat OpenShift Service Mesh functionality that is explicitly documented in this guide, and does not support other undocumented features. 6.4.1. Prerequisites The examples in the following procedures use the domain example.com . The example certificate for this domain is used as a certificate authority (CA) that signs the subdomain certificate. To complete and verify these procedures in your deployment, you need either a certificate signed by a widely trusted public CA or a CA provided by your organization. Example commands must be adjusted according to your domain, subdomain, and CA. You must configure the wildcard certificate to match the domain of your OpenShift Container Platform cluster. For example, if your OpenShift Container Platform console address is https://console-openshift-console.apps.openshift.example.com , you must configure the wildcard certificate so that the domain is *.apps.openshift.example.com . For more information about configuring wildcard certificates, see the following topic about Creating a certificate to encrypt incoming external traffic . If you want to use any domain name, including those which are not subdomains of the default OpenShift Container Platform cluster domain, you must set up domain mapping for those domains. For more information, see the OpenShift Serverless documentation about Creating a custom domain mapping . 6.4.2. 
Creating a certificate to encrypt incoming external traffic By default, the Service Mesh mTLS feature only secures traffic inside of the Service Mesh itself, between the ingress gateway and individual pods that have sidecars. To encrypt traffic as it flows into the OpenShift Container Platform cluster, you must generate a certificate before you enable the OpenShift Serverless and Service Mesh integration. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift Serverless Operator and Knative Serving. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a root certificate and private key that signs the certificates for your Knative services: USD openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \ -subj '/O=Example Inc./CN=example.com' \ -keyout root.key \ -out root.crt Create a wildcard certificate: USD openssl req -nodes -newkey rsa:2048 \ -subj "/CN=*.apps.openshift.example.com/O=Example Inc." \ -keyout wildcard.key \ -out wildcard.csr Sign the wildcard certificate: USD openssl x509 -req -days 365 -set_serial 0 \ -CA root.crt \ -CAkey root.key \ -in wildcard.csr \ -out wildcard.crt Create a secret by using the wildcard certificate: USD oc create -n istio-system secret tls wildcard-certs \ --key=wildcard.key \ --cert=wildcard.crt This certificate is picked up by the gateways created when you integrate OpenShift Serverless with Service Mesh, so that the ingress gateway serves traffic with this certificate. 6.4.3. Integrating Service Mesh with OpenShift Serverless You can integrate Service Mesh with OpenShift Serverless without using Kourier as the default ingress. To do this, do not install the Knative Serving component before completing the following procedure. There are additional steps required when creating the KnativeServing custom resource definition (CRD) to integrate Knative Serving with Service Mesh, which are not covered in the general Knative Serving installation procedure. This procedure might be useful if you want to integrate Service Mesh as the default and only ingress for your OpenShift Serverless installation. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Install the Red Hat OpenShift Service Mesh Operator and create a ServiceMeshControlPlane resource in the istio-system namespace. If you want to use mTLS functionality, you must also set the spec.security.dataPlane.mtls field for the ServiceMeshControlPlane resource to true . Important Using OpenShift Serverless with Service Mesh is only supported with Red Hat OpenShift Service Mesh version 2.0.5 or later. Install the OpenShift Serverless Operator. Install the OpenShift CLI ( oc ). Procedure Add the namespaces that you would like to integrate with Service Mesh to the ServiceMeshMemberRoll object as members: apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: 1 - knative-serving - <namespace> 1 A list of namespaces to be integrated with Service Mesh. Important This list of namespaces must include the knative-serving namespace. 
Apply the ServiceMeshMemberRoll resource: USD oc apply -f <filename> Create the necessary gateways so that Service Mesh can accept traffic: Example knative-local-gateway object using HTTP apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-ingress-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - "*" tls: mode: SIMPLE credentialName: <wildcard_certs> 1 --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 8081 name: http protocol: HTTP 2 hosts: - "*" --- apiVersion: v1 kind: Service metadata: name: knative-local-gateway namespace: istio-system labels: experimental.istio.io/disable-gateway-port-translation: "true" spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8081 1 Add the name of the secret that contains the wildcard certificate. 2 The knative-local-gateway serves HTTP traffic. Using HTTP means that traffic coming from outside of Service Mesh, but using an internal hostname, such as example.default.svc.cluster.local , is not encrypted. You can set up encryption for this path by creating another wildcard certificate and an additional gateway that uses a different protocol spec. Example knative-local-gateway object using HTTPS apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - "*" tls: mode: SIMPLE credentialName: <wildcard_certs> Apply the Gateway resources: USD oc apply -f <filename> Install Knative Serving by creating the following KnativeServing custom resource definition (CRD), which also enables the Istio integration: apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: ingress: istio: enabled: true 1 deployments: 2 - name: activator annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: autoscaler annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" 1 Enables Istio integration. 2 Enables sidecar injection for Knative Serving data plane pods. Apply the KnativeServing resource: USD oc apply -f <filename> Create a Knative Service that has sidecar injection enabled and uses a pass-through route: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> namespace: <namespace> 1 annotations: serving.knative.openshift.io/enablePassthrough: "true" 2 spec: template: metadata: annotations: sidecar.istio.io/inject: "true" 3 sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: containers: - image: <image_url> 1 A namespace that is part of the Service Mesh member roll. 2 Instructs Knative Serving to generate an OpenShift Container Platform pass-through enabled route, so that the certificates you have generated are served through the ingress gateway directly. 3 Injects Service Mesh sidecars into the Knative service pods. Apply the Service resource: USD oc apply -f <filename> Verification Access your serverless application by using a secure connection that is now trusted by the CA: USD curl --cacert root.crt <service_url> Example command USD curl --cacert root.crt https://hello-default.apps.openshift.example.com Example output Hello Openshift! 6.4.4. 
Enabling Knative Serving metrics when using Service Mesh with mTLS If Service Mesh is enabled with mTLS, metrics for Knative Serving are disabled by default, because Service Mesh prevents Prometheus from scraping metrics. This section shows how to enable Knative Serving metrics when using Service Mesh and mTLS. Prerequisites You have installed the OpenShift Serverless Operator and Knative Serving on your cluster. You have installed Red Hat OpenShift Service Mesh with the mTLS functionality enabled. You have access to an OpenShift Container Platform account with cluster administrator access. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Specify prometheus as the metrics.backend-destination in the observability spec of the Knative Serving custom resource (CR): apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: observability: metrics.backend-destination: "prometheus" ... This step prevents metrics from being disabled by default. Apply the following network policy to allow traffic from the Prometheus namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring-ns namespace: knative-serving spec: ingress: - from: - namespaceSelector: matchLabels: name: "openshift-monitoring" podSelector: {} ... Modify and reapply the default Service Mesh control plane in the istio-system namespace, so that it includes the following spec: ... spec: proxy: networking: trafficControl: inbound: excludedPorts: - 8444 ... 6.4.5. Integrating Service Mesh with OpenShift Serverless when Kourier is enabled You can use Service Mesh with OpenShift Serverless even if Kourier is already enabled. This procedure might be useful if you have already installed Knative Serving with Kourier enabled, but decide to add a Service Mesh integration later. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Install the OpenShift CLI ( oc ). Install the OpenShift Serverless Operator and Knative Serving on your cluster. Install Red Hat OpenShift Service Mesh. OpenShift Serverless with Service Mesh and Kourier is supported for use with both Red Hat OpenShift Service Mesh versions 1.x and 2.x. Procedure Add the namespaces that you would like to integrate with Service Mesh to the ServiceMeshMemberRoll object as members: apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: - <namespace> 1 ... 1 A list of namespaces to be integrated with Service Mesh. Apply the ServiceMeshMemberRoll resource: USD oc apply -f <filename> Create a network policy that permits traffic flow from Knative system pods to Knative services: For each namespace that you want to integrate with Service Mesh, create a NetworkPolicy resource: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-serving-system-namespace namespace: <namespace> 1 spec: ingress: - from: - namespaceSelector: matchLabels: knative.openshift.io/part-of: "openshift-serverless" podSelector: {} policyTypes: - Ingress ... 1 Add the namespace that you want to integrate with Service Mesh. 
Note The knative.openshift.io/part-of: "openshift-serverless" label was added in OpenShift Serverless 1.22.0. If you are using OpenShift Serverless 1.21.1 or earlier, add the knative.openshift.io/part-of label to the knative-serving and knative-serving-ingress namespaces. Add the label to the knative-serving namespace: USD oc label namespace knative-serving knative.openshift.io/part-of=openshift-serverless Add the label to the knative-serving-ingress namespace: USD oc label namespace knative-serving-ingress knative.openshift.io/part-of=openshift-serverless Apply the NetworkPolicy resource: USD oc apply -f <filename> 6.4.6. Improving memory usage by using secret filtering for Service Mesh By default, the informers implementation for the Kubernetes client-go library fetches all resources of a particular type. This can lead to a substantial overhead when many resources are available, which can cause the Knative net-istio ingress controller to fail on large clusters due to memory leaking. However, a filtering mechanism is available for the Knative net-istio ingress controller, which enables the controller to only fetch Knative related secrets. You can enable this mechanism by adding an annotation to the KnativeServing custom resource (CR). Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Install Red Hat OpenShift Service Mesh. OpenShift Serverless with Service Mesh only is supported for use with Red Hat OpenShift Service Mesh version 2.0.5 or later. Install the OpenShift Serverless Operator and Knative Serving. Install the OpenShift CLI ( oc ). Procedure Add the serverless.openshift.io/enable-secret-informer-filtering annotation to the KnativeServing CR: Example KnativeServing CR apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving annotations: serverless.openshift.io/enable-secret-informer-filtering: "true" 1 spec: ingress: istio: enabled: true deployments: - annotations: sidecar.istio.io/inject: "true" sidecar.istio.io/rewriteAppHTTPProbers: "true" name: activator - annotations: sidecar.istio.io/inject: "true" sidecar.istio.io/rewriteAppHTTPProbers: "true" name: autoscaler 1 Adding this annotation injects an environment variable, ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID=true , to the net-istio controller pod. 6.5. Serverless administrator metrics Metrics enable cluster administrators to monitor how OpenShift Serverless cluster components and workloads are performing. You can view different metrics for OpenShift Serverless by navigating to Dashboards in the OpenShift Container Platform web console Administrator perspective. 6.5.1. Prerequisites See the OpenShift Container Platform documentation on Managing metrics for information about enabling metrics for your cluster. To view metrics for Knative components on OpenShift Container Platform, you need cluster administrator permissions, and access to the web console Administrator perspective. Warning If Service Mesh is enabled with mTLS, metrics for Knative Serving are disabled by default because Service Mesh prevents Prometheus from scraping metrics. For information about resolving this issue, see Enabling Knative Serving metrics when using Service Mesh with mTLS . 
Scraping the metrics does not affect autoscaling of a Knative service, because scraping requests do not go through the activator. Consequently, no scraping takes place if no pods are running. 6.5.2. Controller metrics The following metrics are emitted by any component that implements controller logic. These metrics show details about reconciliation operations and the behavior of the work queue to which reconciliation requests are added. Metric name Description Type Tags Unit work_queue_depth The depth of the work queue. Gauge reconciler Integer (no units) reconcile_count The number of reconcile operations. Counter reconciler , success Integer (no units) reconcile_latency The latency of reconcile operations. Histogram reconciler , success Milliseconds workqueue_adds_total The total number of add actions handled by the work queue. Counter name Integer (no units) workqueue_queue_latency_seconds The length of time an item stays in the work queue before being requested. Histogram name Seconds workqueue_retries_total The total number of retries that have been handled by the work queue. Counter name Integer (no units) workqueue_work_duration_seconds The length of time it takes to process an item from the work queue. Histogram name Seconds workqueue_unfinished_work_seconds The length of time that outstanding work queue items have been in progress. Histogram name Seconds workqueue_longest_running_processor_seconds The length of time that the longest outstanding work queue item has been in progress. Histogram name Seconds 6.5.3. Webhook metrics Webhook metrics report useful information about operations. For example, if a large number of operations fail, this might indicate an issue with a user-created resource. Metric name Description Type Tags Unit request_count The number of requests that are routed to the webhook. Counter admission_allowed , kind_group , kind_kind , kind_version , request_operation , resource_group , resource_namespace , resource_resource , resource_version Integer (no units) request_latencies The response time for a webhook request. Histogram admission_allowed , kind_group , kind_kind , kind_version , request_operation , resource_group , resource_namespace , resource_resource , resource_version Milliseconds 6.5.4. Knative Eventing metrics Cluster administrators can view the following metrics for Knative Eventing components. By aggregating the metrics from HTTP code, events can be separated into two categories: successful events (2xx) and failed events (5xx). 6.5.4.1. Broker ingress metrics You can use the following metrics to debug the broker ingress, see how it is performing, and see which events are being dispatched by the ingress component. Metric name Description Type Tags Unit event_count Number of events received by a broker. Counter broker_name , event_type , namespace_name , response_code , response_code_class , unique_name Integer (no units) event_dispatch_latencies The time taken to dispatch an event to a channel. Histogram broker_name , event_type , namespace_name , response_code , response_code_class , unique_name Milliseconds 6.5.4.2. Broker filter metrics You can use the following metrics to debug broker filters, see how they are performing, and see which events are being dispatched by the filters. You can also measure the latency of the filtering action on an event. Metric name Description Type Tags Unit event_count Number of events received by a broker. 
Counter broker_name , container_name , filter_type , namespace_name , response_code , response_code_class , trigger_name , unique_name Integer (no units) event_dispatch_latencies The time taken to dispatch an event to a channel. Histogram broker_name , container_name , filter_type , namespace_name , response_code , response_code_class , trigger_name , unique_name Milliseconds event_processing_latencies The time it takes to process an event before it is dispatched to a trigger subscriber. Histogram broker_name , container_name , filter_type , namespace_name , trigger_name , unique_name Milliseconds 6.5.4.3. InMemoryChannel dispatcher metrics You can use the following metrics to debug InMemoryChannel channels, see how they are performing, and see which events are being dispatched by the channels. Metric name Description Type Tags Unit event_count Number of events dispatched by InMemoryChannel channels. Counter broker_name , container_name , filter_type , namespace_name , response_code , response_code_class , trigger_name , unique_name Integer (no units) event_dispatch_latencies The time taken to dispatch an event from an InMemoryChannel channel. Histogram broker_name , container_name , filter_type , namespace_name , response_code , response_code_class , trigger_name , unique_name Milliseconds 6.5.4.4. Event source metrics You can use the following metrics to verify that events have been delivered from the event source to the connected event sink. Metric name Description Type Tags Unit event_count Number of events sent by the event source. Counter broker_name , container_name , filter_type , namespace_name , response_code , response_code_class , trigger_name , unique_name Integer (no units) retry_event_count Number of retried events sent by the event source after initially failing to be delivered. Counter event_source , event_type , name , namespace_name , resource_group , response_code , response_code_class , response_error , response_timeout Integer (no units) 6.5.5. Knative Serving metrics Cluster administrators can view the following metrics for Knative Serving components. 6.5.5.1. Activator metrics You can use the following metrics to understand how applications respond when traffic passes through the activator. Metric name Description Type Tags Unit request_concurrency The number of concurrent requests that are routed to the activator, or average concurrency over a reporting period. Gauge configuration_name , container_name , namespace_name , pod_name , revision_name , service_name Integer (no units) request_count The number of requests that are routed to activator. These are requests that have been fulfilled from the activator handler. Counter configuration_name , container_name , namespace_name , pod_name , response_code , response_code_class , revision_name , service_name , Integer (no units) request_latencies The response time in milliseconds for a fulfilled, routed request. Histogram configuration_name , container_name , namespace_name , pod_name , response_code , response_code_class , revision_name , service_name Milliseconds 6.5.5.2. Autoscaler metrics The autoscaler component exposes a number of metrics related to autoscaler behavior for each revision. For example, at any given time, you can monitor the targeted number of pods the autoscaler tries to allocate for a service, the average number of requests per second during the stable window, or whether the autoscaler is in panic mode if you are using the Knative pod autoscaler (KPA). 
Metric name Description Type Tags Unit desired_pods The number of pods the autoscaler tries to allocate for a service. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) excess_burst_capacity The excess burst capacity served over the stable window. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) stable_request_concurrency The average number of requests for each observed pod over the stable window. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) panic_request_concurrency The average number of requests for each observed pod over the panic window. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) target_concurrency_per_pod The number of concurrent requests that the autoscaler tries to send to each pod. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) stable_requests_per_second The average number of requests-per-second for each observed pod over the stable window. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) panic_requests_per_second The average number of requests-per-second for each observed pod over the panic window. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) target_requests_per_second The number of requests-per-second that the autoscaler targets for each pod. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) panic_mode This value is 1 if the autoscaler is in panic mode, or 0 if the autoscaler is not in panic mode. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) requested_pods The number of pods that the autoscaler has requested from the Kubernetes cluster. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) actual_pods The number of pods that are allocated and currently have a ready state. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) not_ready_pods The number of pods that have a not ready state. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) pending_pods The number of pods that are currently pending. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) terminating_pods The number of pods that are currently terminating. Gauge configuration_name , namespace_name , revision_name , service_name Integer (no units) 6.5.5.3. Go runtime metrics Each Knative Serving control plane process emits a number of Go runtime memory statistics ( MemStats ). Note The name tag for each metric is an empty tag. Metric name Description Type Tags Unit go_alloc The number of bytes of allocated heap objects. This metric is the same as heap_alloc . Gauge name Integer (no units) go_total_alloc The cumulative bytes allocated for heap objects. Gauge name Integer (no units) go_sys The total bytes of memory obtained from the operating system. Gauge name Integer (no units) go_lookups The number of pointer lookups performed by the runtime. Gauge name Integer (no units) go_mallocs The cumulative count of heap objects allocated. Gauge name Integer (no units) go_frees The cumulative count of heap objects that have been freed. Gauge name Integer (no units) go_heap_alloc The number of bytes of allocated heap objects. 
Gauge name Integer (no units) go_heap_sys The number of bytes of heap memory obtained from the operating system. Gauge name Integer (no units) go_heap_idle The number of bytes in idle, unused spans. Gauge name Integer (no units) go_heap_in_use The number of bytes in spans that are currently in use. Gauge name Integer (no units) go_heap_released The number of bytes of physical memory returned to the operating system. Gauge name Integer (no units) go_heap_objects The number of allocated heap objects. Gauge name Integer (no units) go_stack_in_use The number of bytes in stack spans that are currently in use. Gauge name Integer (no units) go_stack_sys The number of bytes of stack memory obtained from the operating system. Gauge name Integer (no units) go_mspan_in_use The number of bytes of allocated mspan structures. Gauge name Integer (no units) go_mspan_sys The number of bytes of memory obtained from the operating system for mspan structures. Gauge name Integer (no units) go_mcache_in_use The number of bytes of allocated mcache structures. Gauge name Integer (no units) go_mcache_sys The number of bytes of memory obtained from the operating system for mcache structures. Gauge name Integer (no units) go_bucket_hash_sys The number of bytes of memory in profiling bucket hash tables. Gauge name Integer (no units) go_gc_sys The number of bytes of memory in garbage collection metadata. Gauge name Integer (no units) go_other_sys The number of bytes of memory in miscellaneous, off-heap runtime allocations. Gauge name Integer (no units) go_next_gc The target heap size of the garbage collection cycle. Gauge name Integer (no units) go_last_gc The time that the last garbage collection was completed in Epoch or Unix time . Gauge name Nanoseconds go_total_gc_pause_ns The cumulative time in garbage collection stop-the-world pauses since the program started. Gauge name Nanoseconds go_num_gc The number of completed garbage collection cycles. Gauge name Integer (no units) go_num_forced_gc The number of garbage collection cycles that were forced due to an application calling the garbage collection function. Gauge name Integer (no units) go_gc_cpu_fraction The fraction of the available CPU time of the program that has been used by the garbage collector since the program started. Gauge name Integer (no units) 6.6. Using metering with OpenShift Serverless Important Metering is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. As a cluster administrator, you can use metering to analyze what is happening in your OpenShift Serverless cluster. For more information about metering on OpenShift Container Platform, see About metering . Note Metering is not currently supported for IBM Z and IBM Power Systems. 6.6.1. Installing metering For information about installing metering on OpenShift Container Platform, see Installing Metering . 6.6.2. Datasources for Knative Serving metering The following ReportDataSources are examples of how Knative Serving can be used with OpenShift Container Platform metering. 6.6.2.1. 
Datasource for CPU usage in Knative Serving This datasource provides the accumulated CPU seconds used per Knative service over the report time period. YAML file apiVersion: metering.openshift.io/v1 kind: ReportDataSource metadata: name: knative-service-cpu-usage spec: prometheusMetricsImporter: query: > sum by(namespace, label_serving_knative_dev_service, label_serving_knative_dev_revision) ( label_replace(rate(container_cpu_usage_seconds_total{container!="POD",container!="",pod!=""}[1m]), "pod", "USD1", "pod", "(.*)") * on(pod, namespace) group_left(label_serving_knative_dev_service, label_serving_knative_dev_revision) kube_pod_labels{label_serving_knative_dev_service!=""} ) 6.6.2.2. Datasource for memory usage in Knative Serving This datasource provides the average memory consumption per Knative service over the report time period. YAML file apiVersion: metering.openshift.io/v1 kind: ReportDataSource metadata: name: knative-service-memory-usage spec: prometheusMetricsImporter: query: > sum by(namespace, label_serving_knative_dev_service, label_serving_knative_dev_revision) ( label_replace(container_memory_usage_bytes{container!="POD", container!="",pod!=""}, "pod", "USD1", "pod", "(.*)") * on(pod, namespace) group_left(label_serving_knative_dev_service, label_serving_knative_dev_revision) kube_pod_labels{label_serving_knative_dev_service!=""} ) 6.6.2.3. Applying Datasources for Knative Serving metering You can apply the ReportDataSources by using the following command: USD oc apply -f <datasource_name>.yaml Example USD oc apply -f knative-service-memory-usage.yaml 6.6.3. Queries for Knative Serving metering The following ReportQuery resources reference the example DataSources provided. 6.6.3.1. Query for CPU usage in Knative Serving YAML file apiVersion: metering.openshift.io/v1 kind: ReportQuery metadata: name: knative-service-cpu-usage spec: inputs: - name: ReportingStart type: time - name: ReportingEnd type: time - default: knative-service-cpu-usage name: KnativeServiceCpuUsageDataSource type: ReportDataSource columns: - name: period_start type: timestamp unit: date - name: period_end type: timestamp unit: date - name: namespace type: varchar unit: kubernetes_namespace - name: service type: varchar - name: data_start type: timestamp unit: date - name: data_end type: timestamp unit: date - name: service_cpu_seconds type: double unit: cpu_core_seconds query: | SELECT timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart| prestoTimestamp |}' AS period_start, timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' AS period_end, labels['namespace'] as project, labels['label_serving_knative_dev_service'] as service, min("timestamp") as data_start, max("timestamp") as data_end, sum(amount * "timeprecision") AS service_cpu_seconds FROM {| dataSourceTableName .Report.Inputs.KnativeServiceCpuUsageDataSource |} WHERE "timestamp" >= timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart | prestoTimestamp |}' AND "timestamp" < timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' GROUP BY labels['namespace'],labels['label_serving_knative_dev_service'] 6.6.3.2. 
Query for memory usage in Knative Serving YAML file apiVersion: metering.openshift.io/v1 kind: ReportQuery metadata: name: knative-service-memory-usage spec: inputs: - name: ReportingStart type: time - name: ReportingEnd type: time - default: knative-service-memory-usage name: KnativeServiceMemoryUsageDataSource type: ReportDataSource columns: - name: period_start type: timestamp unit: date - name: period_end type: timestamp unit: date - name: namespace type: varchar unit: kubernetes_namespace - name: service type: varchar - name: data_start type: timestamp unit: date - name: data_end type: timestamp unit: date - name: service_usage_memory_byte_seconds type: double unit: byte_seconds query: | SELECT timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart| prestoTimestamp |}' AS period_start, timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' AS period_end, labels['namespace'] as project, labels['label_serving_knative_dev_service'] as service, min("timestamp") as data_start, max("timestamp") as data_end, sum(amount * "timeprecision") AS service_usage_memory_byte_seconds FROM {| dataSourceTableName .Report.Inputs.KnativeServiceMemoryUsageDataSource |} WHERE "timestamp" >= timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart | prestoTimestamp |}' AND "timestamp" < timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' GROUP BY labels['namespace'],labels['label_serving_knative_dev_service'] 6.6.3.3. Applying Queries for Knative Serving metering Apply the ReportQuery by entering the following command: USD oc apply -f <query-name>.yaml Example command USD oc apply -f knative-service-memory-usage.yaml 6.6.4. Metering reports for Knative Serving You can run metering reports against Knative Serving by creating Report resources. Before you run a report, you must modify the input parameter within the Report resource to specify the start and end dates of the reporting period. YAML file apiVersion: metering.openshift.io/v1 kind: Report metadata: name: knative-service-cpu-usage spec: reportingStart: '2019-06-01T00:00:00Z' 1 reportingEnd: '2019-06-30T23:59:59Z' 2 query: knative-service-cpu-usage 3 runImmediately: true 1 Start date of the report, in ISO 8601 format. 2 End date of the report, in ISO 8601 format. 3 Either knative-service-cpu-usage for CPU usage report or knative-service-memory-usage for a memory usage report. 6.6.4.1. Running a metering report Run the report by entering the following command: USD oc apply -f <report-name>.yml You can then check the report by entering the following command: USD oc get report Example output NAME QUERY SCHEDULE RUNNING FAILED LAST REPORT TIME AGE knative-service-cpu-usage knative-service-cpu-usage Finished 2019-06-30T23:59:59Z 10h 6.7. High availability High availability (HA) is a standard feature of Kubernetes APIs that helps to ensure that APIs stay operational if a disruption occurs. In an HA deployment, if an active controller crashes or is deleted, another controller is readily available. This controller takes over processing of the APIs that were being serviced by the controller that is now unavailable. HA in OpenShift Serverless is available through leader election, which is enabled by default after the Knative Serving or Eventing control plane is installed. When using a leader election HA pattern, instances of controllers are already scheduled and running inside the cluster before they are required. 
These controller instances compete to use a shared resource, known as the leader election lock. The instance of the controller that has access to the leader election lock resource at any given time is called the leader. 6.7.1. Configuring high availability replicas for Knative Serving High availability (HA) is available by default for the Knative Serving activator , autoscaler , autoscaler-hpa , controller , webhook , kourier-control , and kourier-gateway components, which are configured to have two replicas each by default. You can change the number of replicas for these components by modifying the spec.high-availability.replicas value in the KnativeServing custom resource (CR). Prerequisites You have access to an OpenShift Container Platform cluster with cluster administrator permissions. The OpenShift Serverless Operator and Knative Serving are installed on your cluster. Procedure In the OpenShift Container Platform web console Administrator perspective, navigate to OperatorHub Installed Operators . Select the knative-serving namespace. Click Knative Serving in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Serving tab. Click knative-serving , then go to the YAML tab in the knative-serving page. Modify the number of replicas in the KnativeServing CR: Example YAML apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: high-availability: replicas: 3 6.7.2. Configuring high availability replicas for Knative Eventing High availability (HA) is available by default for the Knative Eventing eventing-controller , eventing-webhook , imc-controller , imc-dispatcher , and mt-broker-controller components, which are configured to have two replicas each by default. You can change the number of replicas for these components by modifying the spec.high-availability.replicas value in the KnativeEventing custom resource (CR). Note For Knative Eventing, the mt-broker-filter and mt-broker-ingress deployments are not scaled by HA. If multiple deployments are needed, scale these components manually. Prerequisites You have access to an OpenShift Container Platform cluster with cluster administrator permissions. The OpenShift Serverless Operator and Knative Eventing are installed on your cluster. Procedure In the OpenShift Container Platform web console Administrator perspective, navigate to OperatorHub Installed Operators . Select the knative-eventing namespace. Click Knative Eventing in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Eventing tab. Click knative-eventing , then go to the YAML tab in the knative-eventing page. Modify the number of replicas in the KnativeEventing CR: Example YAML apiVersion: operator.knative.dev/v1alpha1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: high-availability: replicas: 3 6.7.3. Configuring high availability replicas for Knative Kafka High availability (HA) is available by default for the Knative Kafka kafka-controller and kafka-webhook-eventing components, which are configured to have two replicas each by default. You can change the number of replicas for these components by modifying the spec.high-availability.replicas value in the KnativeKafka custom resource (CR). Prerequisites You have access to an OpenShift Container Platform cluster with cluster administrator permissions. The OpenShift Serverless Operator and Knative Kafka are installed on your cluster. 
Procedure In the OpenShift Container Platform web console Administrator perspective, navigate to OperatorHub Installed Operators . Select the knative-eventing namespace. Click Knative Kafka in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Kafka tab. Click knative-kafka , then go to the YAML tab in the knative-kafka page. Modify the number of replicas in the KnativeKafka CR: Example YAML apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: high-availability: replicas: 3
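After you save the modified KnativeKafka CR, the OpenShift Serverless Operator reconciles the change and scales the affected deployments. As a quick verification sketch (not part of the documented procedure, and assuming you are logged in to the oc CLI with cluster administrator permissions), you can check that the deployments for the components listed above report the requested number of replicas: USD oc get deployments kafka-controller kafka-webhook-eventing -n knative-eventing Example output (illustrative) NAME READY UP-TO-DATE AVAILABLE AGE kafka-controller 3/3 3 3 26m kafka-webhook-eventing 3/3 3 3 26m The READY column should match the replicas value that you set in the KnativeKafka CR; if it does not, inspect the deployment events for scheduling or resource constraints.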
|
[
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: 1 default-ch-webhook: 2 default-ch-config: | clusterDefault: 3 apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel spec: delivery: backoffDelay: PT0.5S backoffPolicy: exponential retry: 5 namespaceDefaults: 4 my-namespace: apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel spec: numPartitions: 1 replicationFactor: 1",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: 1 config-br-default-channel: channel-template-spec: | apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel 2 spec: numPartitions: 6 3 replicationFactor: 3 4",
"oc apply -f <filename>",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: defaultBrokerClass: Kafka 1 config: 2 config-br-defaults: 3 default-br-config: | clusterDefault: 4 brokerClass: Kafka apiVersion: v1 kind: ConfigMap name: kafka-broker-config 5 namespace: knative-eventing 6 namespaceDefaults: 7 my-namespace: brokerClass: MTChannelBasedBroker apiVersion: v1 kind: ConfigMap name: config-br-default-channel 8 namespace: knative-eventing 9",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: enable-scale-to-zero: \"false\" 1",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: scale-to-zero-grace-period: \"30s\" 1",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: ks namespace: knative-serving spec: high-availability: replicas: 2 deployments: - name: webhook resources: - container: webhook requests: cpu: 300m memory: 60Mi limits: cpu: 1000m memory: 1000Mi replicas: 3 labels: example-label: label annotations: example-annotation: annotation nodeSelector: disktype: hdd",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: deployments: - name: eventing-controller resources: - container: eventing-controller requests: cpu: 300m memory: 100Mi limits: cpu: 1000m memory: 250Mi replicas: 3 labels: example-label: label annotations: example-annotation: annotation nodeSelector: disktype: hdd",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving spec: config: features: kubernetes.podspec-volumes-emptydir: enabled",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving spec: config: network: httpProtocol: \"redirected\"",
"spec: config: network: default-external-scheme: \"https\"",
"spec: config: network: default-external-scheme: \"http\"",
"spec: ingress: kourier: service-type: ClusterIP",
"spec: ingress: kourier: service-type: LoadBalancer",
"spec: config: features: \"kubernetes.podspec-persistent-volume-claim\": enabled \"kubernetes.podspec-persistent-volume-write\": enabled",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: example-pv-claim namespace: my-ns spec: accessModes: - ReadWriteMany storageClassName: ocs-storagecluster-cephfs resources: requests: storage: 1Gi",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: namespace: my-ns spec: template: spec: containers: volumeMounts: 1 - mountPath: /data name: mydata readOnly: false volumes: - name: mydata persistentVolumeClaim: 2 claimName: example-pv-claim readOnly: false 3",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving spec: config: features: kubernetes.podspec-init-containers: enabled",
"oc -n knative-serving create secret generic custom-secret --from-file=<secret_name>.crt=<path_to_certificate>",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: controller-custom-certs: name: custom-secret type: Secret",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: channel: enabled: true 1 bootstrapServers: <bootstrap_servers> 2 source: enabled: true 3 broker: enabled: true 4 defaultConfig: bootstrapServers: <bootstrap_servers> 5 numPartitions: <num_partitions> 6 replicationFactor: <replication_factor> 7 sink: enabled: true 8",
"oc get pods -n knative-eventing",
"NAME READY STATUS RESTARTS AGE kafka-broker-dispatcher-7769fbbcbb-xgffn 2/2 Running 0 44s kafka-broker-receiver-5fb56f7656-fhq8d 2/2 Running 0 44s kafka-channel-dispatcher-84fd6cb7f9-k2tjv 2/2 Running 0 44s kafka-channel-receiver-9b7f795d5-c76xr 2/2 Running 0 44s kafka-controller-6f95659bf6-trd6r 2/2 Running 0 44s kafka-source-dispatcher-6bf98bdfff-8bcsn 2/2 Running 0 44s kafka-webhook-eventing-68dc95d54b-825xs 2/2 Running 0 44s",
"oc create secret -n knative-eventing generic <secret_name> --from-literal=protocol=SSL --from-file=ca.crt=caroot.pem --from-file=user.crt=certificate.pem --from-file=user.key=key.pem",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name>",
"oc create secret -n knative-eventing generic <secret_name> --from-literal=protocol=SASL_SSL --from-literal=sasl.mechanism=<sasl_mechanism> --from-file=ca.crt=caroot.pem --from-literal=password=\"SecretPassword\" --from-literal=user=\"my-sasl-user\"",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-literal=tls.enabled=true --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" --from-literal=user=\"my-sasl-user\"",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name>",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-file=ca.crt=caroot.pem --from-file=user.crt=certificate.pem --from-file=user.key=key.pem",
"oc edit knativekafka",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: <kafka_auth_secret> authSecretNamespace: <kafka_auth_secret_namespace> bootstrapServers: <bootstrap_servers> enabled: true source: enabled: true",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: tls-user authSecretNamespace: kafka bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9094 enabled: true source: enabled: true",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-file=ca.crt=caroot.pem --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" --from-literal=user=\"my-sasl-user\"",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-literal=tls.enabled=true --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" --from-literal=user=\"my-sasl-user\"",
"oc edit knativekafka",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: <kafka_auth_secret> authSecretNamespace: <kafka_auth_secret_namespace> bootstrapServers: <bootstrap_servers> enabled: true source: enabled: true",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: scram-user authSecretNamespace: kafka bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9093 enabled: true source: enabled: true",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-file=ca.crt=caroot.pem --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" \\ 1 --from-literal=user=\"my-sasl-user\"",
"apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: example-source spec: net: sasl: enable: true user: secretKeyRef: name: <kafka_auth_secret> key: user password: secretKeyRef: name: <kafka_auth_secret> key: password saslType: secretKeyRef: name: <kafka_auth_secret> key: saslType tls: enable: true caCert: 1 secretKeyRef: name: <kafka_auth_secret> key: ca.crt",
"apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <namespace> 2 data: default.topic.partitions: <integer> 3 default.topic.replication.factor: <integer> 4 bootstrap.servers: <list_of_servers> 5",
"apiVersion: v1 kind: ConfigMap metadata: name: kafka-broker-config namespace: knative-eventing data: default.topic.partitions: \"10\" default.topic.replication.factor: \"3\" bootstrap.servers: \"my-cluster-kafka-bootstrap.kafka:9092\"",
"oc apply -f <config_map_filename>",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: name: <broker_name> 1 namespace: <namespace> 2 annotations: eventing.knative.dev/broker.class: Kafka 3 spec: config: apiVersion: v1 kind: ConfigMap name: <config_map_name> 4 namespace: <namespace> 5",
"oc apply -f <broker_filename>",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: hello 1 namespace: default 2 spec: template: spec: containers: - image: docker.io/openshift/hello-openshift 3 env: - name: RESPONSE 4 value: \"Hello Serverless!\"",
"openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=Example Inc./CN=example.com' -keyout root.key -out root.crt",
"openssl req -nodes -newkey rsa:2048 -subj \"/CN=*.apps.openshift.example.com/O=Example Inc.\" -keyout wildcard.key -out wildcard.csr",
"openssl x509 -req -days 365 -set_serial 0 -CA root.crt -CAkey root.key -in wildcard.csr -out wildcard.crt",
"oc create -n istio-system secret tls wildcard-certs --key=wildcard.key --cert=wildcard.crt",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: 1 - knative-serving - <namespace>",
"oc apply -f <filename>",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-ingress-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - \"*\" tls: mode: SIMPLE credentialName: <wildcard_certs> 1 --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 8081 name: http protocol: HTTP 2 hosts: - \"*\" --- apiVersion: v1 kind: Service metadata: name: knative-local-gateway namespace: istio-system labels: experimental.istio.io/disable-gateway-port-translation: \"true\" spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8081",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - \"*\" tls: mode: SIMPLE credentialName: <wildcard_certs>",
"oc apply -f <filename>",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: ingress: istio: enabled: true 1 deployments: 2 - name: activator annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: autoscaler annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\"",
"oc apply -f <filename>",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> namespace: <namespace> 1 annotations: serving.knative.openshift.io/enablePassthrough: \"true\" 2 spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 3 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" spec: containers: - image: <image_url>",
"oc apply -f <filename>",
"curl --cacert root.crt <service_url>",
"curl --cacert root.crt https://hello-default.apps.openshift.example.com",
"Hello Openshift!",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: observability: metrics.backend-destination: \"prometheus\"",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring-ns namespace: knative-serving spec: ingress: - from: - namespaceSelector: matchLabels: name: \"openshift-monitoring\" podSelector: {}",
"spec: proxy: networking: trafficControl: inbound: excludedPorts: - 8444",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: - <namespace> 1",
"oc apply -f <filename>",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-serving-system-namespace namespace: <namespace> 1 spec: ingress: - from: - namespaceSelector: matchLabels: knative.openshift.io/part-of: \"openshift-serverless\" podSelector: {} policyTypes: - Ingress",
"oc label namespace knative-serving knative.openshift.io/part-of=openshift-serverless",
"oc label namespace knative-serving-ingress knative.openshift.io/part-of=openshift-serverless",
"oc apply -f <filename>",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving annotations: serverless.openshift.io/enable-secret-informer-filtering: \"true\" 1 spec: ingress: istio: enabled: true deployments: - annotations: sidecar.istio.io/inject: \"true\" sidecar.istio.io/rewriteAppHTTPProbers: \"true\" name: activator - annotations: sidecar.istio.io/inject: \"true\" sidecar.istio.io/rewriteAppHTTPProbers: \"true\" name: autoscaler",
"apiVersion: metering.openshift.io/v1 kind: ReportDataSource metadata: name: knative-service-cpu-usage spec: prometheusMetricsImporter: query: > sum by(namespace, label_serving_knative_dev_service, label_serving_knative_dev_revision) ( label_replace(rate(container_cpu_usage_seconds_total{container!=\"POD\",container!=\"\",pod!=\"\"}[1m]), \"pod\", \"USD1\", \"pod\", \"(.*)\") * on(pod, namespace) group_left(label_serving_knative_dev_service, label_serving_knative_dev_revision) kube_pod_labels{label_serving_knative_dev_service!=\"\"} )",
"apiVersion: metering.openshift.io/v1 kind: ReportDataSource metadata: name: knative-service-memory-usage spec: prometheusMetricsImporter: query: > sum by(namespace, label_serving_knative_dev_service, label_serving_knative_dev_revision) ( label_replace(container_memory_usage_bytes{container!=\"POD\", container!=\"\",pod!=\"\"}, \"pod\", \"USD1\", \"pod\", \"(.*)\") * on(pod, namespace) group_left(label_serving_knative_dev_service, label_serving_knative_dev_revision) kube_pod_labels{label_serving_knative_dev_service!=\"\"} )",
"oc apply -f <datasource_name>.yaml",
"oc apply -f knative-service-memory-usage.yaml",
"apiVersion: metering.openshift.io/v1 kind: ReportQuery metadata: name: knative-service-cpu-usage spec: inputs: - name: ReportingStart type: time - name: ReportingEnd type: time - default: knative-service-cpu-usage name: KnativeServiceCpuUsageDataSource type: ReportDataSource columns: - name: period_start type: timestamp unit: date - name: period_end type: timestamp unit: date - name: namespace type: varchar unit: kubernetes_namespace - name: service type: varchar - name: data_start type: timestamp unit: date - name: data_end type: timestamp unit: date - name: service_cpu_seconds type: double unit: cpu_core_seconds query: | SELECT timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart| prestoTimestamp |}' AS period_start, timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' AS period_end, labels['namespace'] as project, labels['label_serving_knative_dev_service'] as service, min(\"timestamp\") as data_start, max(\"timestamp\") as data_end, sum(amount * \"timeprecision\") AS service_cpu_seconds FROM {| dataSourceTableName .Report.Inputs.KnativeServiceCpuUsageDataSource |} WHERE \"timestamp\" >= timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart | prestoTimestamp |}' AND \"timestamp\" < timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' GROUP BY labels['namespace'],labels['label_serving_knative_dev_service']",
"apiVersion: metering.openshift.io/v1 kind: ReportQuery metadata: name: knative-service-memory-usage spec: inputs: - name: ReportingStart type: time - name: ReportingEnd type: time - default: knative-service-memory-usage name: KnativeServiceMemoryUsageDataSource type: ReportDataSource columns: - name: period_start type: timestamp unit: date - name: period_end type: timestamp unit: date - name: namespace type: varchar unit: kubernetes_namespace - name: service type: varchar - name: data_start type: timestamp unit: date - name: data_end type: timestamp unit: date - name: service_usage_memory_byte_seconds type: double unit: byte_seconds query: | SELECT timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart| prestoTimestamp |}' AS period_start, timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' AS period_end, labels['namespace'] as project, labels['label_serving_knative_dev_service'] as service, min(\"timestamp\") as data_start, max(\"timestamp\") as data_end, sum(amount * \"timeprecision\") AS service_usage_memory_byte_seconds FROM {| dataSourceTableName .Report.Inputs.KnativeServiceMemoryUsageDataSource |} WHERE \"timestamp\" >= timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart | prestoTimestamp |}' AND \"timestamp\" < timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' GROUP BY labels['namespace'],labels['label_serving_knative_dev_service']",
"oc apply -f <query-name>.yaml",
"oc apply -f knative-service-memory-usage.yaml",
"apiVersion: metering.openshift.io/v1 kind: Report metadata: name: knative-service-cpu-usage spec: reportingStart: '2019-06-01T00:00:00Z' 1 reportingEnd: '2019-06-30T23:59:59Z' 2 query: knative-service-cpu-usage 3 runImmediately: true",
"oc apply -f <report-name>.yml",
"oc get report",
"NAME QUERY SCHEDULE RUNNING FAILED LAST REPORT TIME AGE knative-service-cpu-usage knative-service-cpu-usage Finished 2019-06-30T23:59:59Z 10h",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: high-availability: replicas: 3",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: high-availability: replicas: 3",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: high-availability: replicas: 3"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/serverless/administer
|
Load Balancer Administration
|
Load Balancer Administration Red Hat Enterprise Linux 6 Load Balancer Add-on for Red Hat Enterprise Linux Steven Levine Red Hat Customer Content Services [email protected] John Ha Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/index
|
Chapter 26. Configuring policy-based routing to define alternative routes
|
Chapter 26. Configuring policy-based routing to define alternative routes By default, the kernel in RHEL decides where to forward network packets based on the destination address using a routing table. Policy-based routing enables you to configure complex routing scenarios. For example, you can route packets based on various criteria, such as the source address, packet metadata, or protocol. 26.1. Routing traffic from a specific subnet to a different default gateway by using nmcli You can use policy-based routing to configure a different default gateway for traffic from certain subnets. For example, you can configure RHEL as a router that, by default, routes all traffic to internet provider A using the default route. However, traffic received from the internal workstations subnet is routed to provider B. The procedure assumes the following network topology: Prerequisites The system uses NetworkManager to configure the network, which is the default. The RHEL router you want to set up in the procedure has four network interfaces: The enp7s0 interface is connected to the network of provider A. The gateway IP in the provider's network is 198.51.100.2 , and the network uses a /30 network mask. The enp1s0 interface is connected to the network of provider B. The gateway IP in the provider's network is 192.0.2.2 , and the network uses a /30 network mask. The enp8s0 interface is connected to the 10.0.0.0/24 subnet with internal workstations. The enp9s0 interface is connected to the 203.0.113.0/24 subnet with the company's servers. Hosts in the internal workstations subnet use 10.0.0.1 as the default gateway. In the procedure, you assign this IP address to the enp8s0 network interface of the router. Hosts in the server subnet use 203.0.113.1 as the default gateway. In the procedure, you assign this IP address to the enp9s0 network interface of the router. The firewalld service is enabled and active. Procedure Configure the network interface to provider A: The nmcli connection add command creates a NetworkManager connection profile. The command uses the following options: type ethernet : Defines that the connection type is Ethernet. con-name <connection_name> : Sets the name of the profile. Use a meaningful name to avoid confusion. ifname <network_device> : Sets the network interface. ipv4.method manual : Enables you to configure a static IP address. ipv4.addresses <IP_address> / <subnet_mask> : Sets the IPv4 addresses and subnet mask. ipv4.gateway <IP_address> : Sets the default gateway address. ipv4.dns <IP_of_DNS_server> : Sets the IPv4 address of the DNS server. connection.zone <firewalld_zone> : Assigns the network interface to the defined firewalld zone. Note that firewalld automatically enables masquerading for interfaces assigned to the external zone. Configure the network interface to provider B: This command uses the ipv4.routes parameter instead of ipv4.gateway to set the default gateway. This is required to assign the default gateway for this connection to a different routing table ( 5000 ) than the default. NetworkManager automatically creates this new routing table when the connection is activated. Configure the network interface to the internal workstations subnet: This command uses the ipv4.routes parameter to add a static route to the routing table with ID 5000 . This static route for the 10.0.0.0/24 subnet uses the IP of the local network interface to provider B ( 192.0.2.1 ) as the next hop.
Additionally, the command uses the ipv4.routing-rules parameter to add a routing rule with priority 5 that routes traffic from the 10.0.0.0/24 subnet to table 5000 . Lower values have a higher priority. Note that the syntax in the ipv4.routing-rules parameter is the same as in an ip rule add command, except that ipv4.routing-rules always requires specifying a priority. Configure the network interface to the server subnet: Verification On a RHEL host in the internal workstation subnet: Install the traceroute package: Use the traceroute utility to display the route to a host on the internet: The output of the command displays that the router sends packets over 192.0.2.1 , which is the network of provider B. On a RHEL host in the server subnet: Install the traceroute package: Use the traceroute utility to display the route to a host on the internet: The output of the command displays that the router sends packets over 198.51.100.2 , which is the network of provider A. Troubleshooting steps On the RHEL router: Display the rule list: By default, RHEL contains rules for the tables local , main , and default . Display the routes in table 5000 : Display the interfaces and firewall zones: Verify that the external zone has masquerading enabled: Additional resources nmcli(1) and nm-settings(5) man pages on your system 26.2. Routing traffic from a specific subnet to a different default gateway by using the network RHEL system role You can use policy-based routing to configure a different default gateway for traffic from certain subnets. For example, you can configure RHEL as a router that, by default, routes all traffic to internet provider A using the default route. However, traffic received from the internal workstations subnet is routed to provider B. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure the connection profiles, including routing tables and rules. This procedure assumes the following network topology: Prerequisites You have prepared the control node and the managed nodes. You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed nodes use NetworkManager and the firewalld service. The managed nodes you want to configure have four network interfaces: The enp7s0 interface is connected to the network of provider A. The gateway IP in the provider's network is 198.51.100.2 , and the network uses a /30 network mask. The enp1s0 interface is connected to the network of provider B. The gateway IP in the provider's network is 192.0.2.2 , and the network uses a /30 network mask. The enp8s0 interface is connected to the 10.0.0.0/24 subnet with internal workstations. The enp9s0 interface is connected to the 203.0.113.0/24 subnet with the company's servers. Hosts in the internal workstations subnet use 10.0.0.1 as the default gateway. In the procedure, you assign this IP address to the enp8s0 network interface of the router. Hosts in the server subnet use 203.0.113.1 as the default gateway. In the procedure, you assign this IP address to the enp9s0 network interface of the router.
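Before you create the playbook in the next step, it can help to confirm that the control node can reach the managed node over Ansible. A minimal check, assuming the host name managed-node-01.example.com is already defined in your inventory:

ansible managed-node-01.example.com -m ping

If the module returns "ping": "pong", the control node can connect to the managed node and you can proceed with the playbook.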
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configuring policy-based routing hosts: managed-node-01.example.com tasks: - name: Routing traffic from a specific subnet to a different default gateway ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: Provider-A interface_name: enp7s0 type: ethernet autoconnect: True ip: address: - 198.51.100.1/30 gateway4: 198.51.100.2 dns: - 198.51.100.200 state: up zone: external - name: Provider-B interface_name: enp1s0 type: ethernet autoconnect: True ip: address: - 192.0.2.1/30 route: - network: 0.0.0.0 prefix: 0 gateway: 192.0.2.2 table: 5000 state: up zone: external - name: Internal-Workstations interface_name: enp8s0 type: ethernet autoconnect: True ip: address: - 10.0.0.1/24 route: - network: 10.0.0.0 prefix: 24 table: 5000 routing_rule: - priority: 5 from: 10.0.0.0/24 table: 5000 state: up zone: trusted - name: Servers interface_name: enp9s0 type: ethernet autoconnect: True ip: address: - 203.0.113.1/24 state: up zone: trusted The settings specified in the example playbook include the following: table: <value> Assigns the route from the same list entry as the table variable to the specified routing table. routing_rule: <list> Defines the priority of the specified routing rule and from a connection profile to which routing table the rule is assigned. zone: <zone_name> Assigns the network interface from a connection profile to the specified firewalld zone. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On a RHEL host in the internal workstation subnet: Install the traceroute package: Use the traceroute utility to display the route to a host on the internet: The output of the command displays that the router sends packets over 192.0.2.1 , which is the network of provider B. On a RHEL host in the server subnet: Install the traceroute package: Use the traceroute utility to display the route to a host on the internet: The output of the command displays that the router sends packets over 198.51.100.2 , which is the network of provider A. On the RHEL router that you configured using the RHEL system role: Display the rule list: By default, RHEL contains rules for the tables local , main , and default . Display the routes in table 5000 : Display the interfaces and firewall zones: Verify that the external zone has masquerading enabled: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 26.3. Overview of configuration files involved in policy-based routing when using the legacy network scripts If you use the legacy network scripts instead of NetworkManager to configure your network, you can also configure policy-based routing. Note Configuring the network using the legacy network scripts provided by the network-scripts package is deprecated in RHEL 8. Use NetworkManager to configure policy-based routing. For an example, see Routing traffic from a specific subnet to a different default gateway by using nmcli . The following configuration files are involved in policy-based routing when you use the legacy network scripts: /etc/sysconfig/network-scripts/route- interface : This file defines the IPv4 routes. 
Use the table option to specify the routing table. For example: /etc/sysconfig/network-scripts/route6- interface : This file defines the IPv6 routes. /etc/sysconfig/network-scripts/rule- interface : This file defines the rules for IPv4 source networks for which the kernel routes traffic to specific routing tables. For example: /etc/sysconfig/network-scripts/rule6- interface : This file defines the rules for IPv6 source networks for which the kernel routes traffic to specific routing tables. /etc/iproute2/rt_tables : This file defines the mappings if you want to use names instead of numbers to refer to specific routing tables. For example: Additional resources ip-route(8) and ip-rule(8) man pages on your system 26.4. Routing traffic from a specific subnet to a different default gateway by using the legacy network scripts You can use policy-based routing to configure a different default gateway for traffic from certain subnets. For example, you can configure RHEL as a router that, by default, routes all traffic to internet provider A using the default route. However, traffic received from the internal workstations subnet is routed to provider B. Important Configuring the network using the legacy network scripts provided by the network-scripts package is deprecated in RHEL 8. Follow the procedure only if you use the legacy network scripts instead of NetworkManager on your host. If you use NetworkManager to manage your network settings, see Routing traffic from a specific subnet to a different default gateway by using nmcli . The procedure assumes the following network topology: Note The legacy network scripts process configuration files in alphabetical order. Therefore, you must name the configuration files in a way that ensures that an interface that is used in rules and routes of other interfaces is up when a depending interface requires it. To accomplish the correct order, this procedure uses numbers in the ifcfg-* , route-* , and rule-* files. Prerequisites The NetworkManager package is not installed, or the NetworkManager service is disabled. The network-scripts package is installed. The RHEL router you want to set up in the procedure has four network interfaces: The enp7s0 interface is connected to the network of provider A. The gateway IP in the provider's network is 198.51.100.2 , and the network uses a /30 network mask. The enp1s0 interface is connected to the network of provider B. The gateway IP in the provider's network is 192.0.2.2 , and the network uses a /30 network mask. The enp8s0 interface is connected to the 10.0.0.0/24 subnet with internal workstations. The enp9s0 interface is connected to the 203.0.113.0/24 subnet with the company's servers. Hosts in the internal workstations subnet use 10.0.0.1 as the default gateway. In the procedure, you assign this IP address to the enp8s0 network interface of the router. Hosts in the server subnet use 203.0.113.1 as the default gateway. In the procedure, you assign this IP address to the enp9s0 network interface of the router. The firewalld service is enabled and active. Procedure Add the configuration for the network interface to provider A by creating the /etc/sysconfig/network-scripts/ifcfg-1_Provider-A file with the following content: The configuration file uses the following parameters: TYPE = Ethernet : Defines that the connection type is Ethernet. IPADDR = IP_address : Sets the IPv4 address. PREFIX = subnet_mask : Sets the subnet mask. GATEWAY = IP_address : Sets the default gateway address.
DNS1 = IP_of_DNS_server : Sets the IPv4 address of the DNS server. DEFROUTE = yes|no : Defines whether the connection is a default route or not. NAME = connection_name : Sets the name of the connection profile. Use a meaningful name to avoid confusion. DEVICE = network_device : Sets the network interface. ONBOOT = yes : Defines that RHEL starts this connection when the system boots. ZONE = firewalld_zone : Assigns the network interface to the defined firewalld zone. Note that firewalld automatically enables masquerading for interfaces assigned to the external zone. Add the configuration for the network interface to provider B: Create the /etc/sysconfig/network-scripts/ifcfg-2_Provider-B file with the following content: Note that the configuration file for this interface does not contain a default gateway setting. Assign the gateway for the 2_Provider-B connection to a separate routing table. Therefore, create the /etc/sysconfig/network-scripts/route-2_Provider-B file with the following content: This entry assigns the gateway and traffic from all subnets routed through this gateway to table 5000. Create the configuration for the network interface to the internal workstations subnet: Create the /etc/sysconfig/network-scripts/ifcfg-3_Internal-Workstations file with the following content: Add the routing rule configuration for the internal workstation subnet. Therefore, create the /etc/sysconfig/network-scripts/rule-3_Internal-Workstations file with the following content: This configuration defines a routing rule with priority 5 that routes all traffic from the 10.0.0.0/24 subnet to table 5000 . Lower values have a higher priority. Create the /etc/sysconfig/network-scripts/route-3_Internal-Workstations file with the following content to add a static route to the routing table with ID 5000 : This static route defines that RHEL sends traffic from the 10.0.0.0/24 subnet to the IP of the local network interface to provider B ( 192.0.2.1 ). This route is added to routing table 5000 , and the interface to provider B is used as the next hop. Add the configuration for the network interface to the server subnet by creating the /etc/sysconfig/network-scripts/ifcfg-4_Servers file with the following content: Restart the network: Verification On a RHEL host in the internal workstation subnet: Install the traceroute package: Use the traceroute utility to display the route to a host on the internet: The output of the command displays that the router sends packets over 192.0.2.1 , which is the network of provider B. On a RHEL host in the server subnet: Install the traceroute package: Use the traceroute utility to display the route to a host on the internet: The output of the command displays that the router sends packets over 198.51.100.2 , which is the network of provider A. Troubleshooting steps On the RHEL router: Display the rule list: By default, RHEL contains rules for the tables local , main , and default . Display the routes in table 5000 : Display the interfaces and firewall zones: Verify that the external zone has masquerading enabled: Additional resources Overview of configuration files involved in policy-based routing when using the legacy network scripts ip-route(8) and ip-rule(8) man pages on your system /usr/share/doc/network-scripts/sysconfig.txt file
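In addition to traceroute, you can ask the routing subsystem directly which path it would select for a given source address. A brief sketch, run on the router and reusing the addresses from the example topology (the destination is the redhat.com address shown in the traceroute output; the two source host addresses are placeholders):

ip route get 209.132.183.105 from 10.0.0.100 iif enp8s0
ip route get 209.132.183.105 from 203.0.113.10 iif enp9s0

The first command should resolve through table 5000 and the provider B gateway ( 192.0.2.2 ), while the second should use the main routing table and the provider A gateway ( 198.51.100.2 ).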
|
[
"nmcli connection add type ethernet con-name Provider-A ifname enp7s0 ipv4.method manual ipv4.addresses 198.51.100.1/30 ipv4.gateway 198.51.100.2 ipv4.dns 198.51.100.200 connection.zone external",
"nmcli connection add type ethernet con-name Provider-B ifname enp1s0 ipv4.method manual ipv4.addresses 192.0.2.1/30 ipv4.routes \"0.0.0.0/0 192.0.2.2 table=5000\" connection.zone external",
"nmcli connection add type ethernet con-name Internal-Workstations ifname enp8s0 ipv4.method manual ipv4.addresses 10.0.0.1/24 ipv4.routes \"10.0.0.0/24 table=5000\" ipv4.routing-rules \"priority 5 from 10.0.0.0/24 table 5000\" connection.zone trusted",
"nmcli connection add type ethernet con-name Servers ifname enp9s0 ipv4.method manual ipv4.addresses 203.0.113.1/24 connection.zone trusted",
"yum install traceroute",
"traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 10.0.0.1 (10.0.0.1) 0.337 ms 0.260 ms 0.223 ms 2 192.0.2.1 (192.0.2.1) 0.884 ms 1.066 ms 1.248 ms",
"yum install traceroute",
"traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 203.0.113.1 (203.0.113.1) 2.179 ms 2.073 ms 1.944 ms 2 198.51.100.2 (198.51.100.2) 1.868 ms 1.798 ms 1.549 ms",
"ip rule list 0: from all lookup local 5 : from 10.0.0.0/24 lookup 5000 32766: from all lookup main 32767: from all lookup default",
"ip route list table 5000 0.0.0.0/0 via 192.0.2.2 dev enp1s0 proto static metric 100 10.0.0.0/24 dev enp8s0 proto static scope link src 192.0.2.1 metric 102",
"firewall-cmd --get-active-zones external interfaces: enp1s0 enp7s0 trusted interfaces: enp8s0 enp9s0",
"firewall-cmd --info-zone=external external (active) target: default icmp-block-inversion: no interfaces: enp1s0 enp7s0 sources: services: ssh ports: protocols: masquerade: yes",
"--- - name: Configuring policy-based routing hosts: managed-node-01.example.com tasks: - name: Routing traffic from a specific subnet to a different default gateway ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: Provider-A interface_name: enp7s0 type: ethernet autoconnect: True ip: address: - 198.51.100.1/30 gateway4: 198.51.100.2 dns: - 198.51.100.200 state: up zone: external - name: Provider-B interface_name: enp1s0 type: ethernet autoconnect: True ip: address: - 192.0.2.1/30 route: - network: 0.0.0.0 prefix: 0 gateway: 192.0.2.2 table: 5000 state: up zone: external - name: Internal-Workstations interface_name: enp8s0 type: ethernet autoconnect: True ip: address: - 10.0.0.1/24 route: - network: 10.0.0.0 prefix: 24 table: 5000 routing_rule: - priority: 5 from: 10.0.0.0/24 table: 5000 state: up zone: trusted - name: Servers interface_name: enp9s0 type: ethernet autoconnect: True ip: address: - 203.0.113.1/24 state: up zone: trusted",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"yum install traceroute",
"traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 10.0.0.1 (10.0.0.1) 0.337 ms 0.260 ms 0.223 ms 2 192.0.2.1 (192.0.2.1) 0.884 ms 1.066 ms 1.248 ms",
"yum install traceroute",
"traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 203.0.113.1 (203.0.113.1) 2.179 ms 2.073 ms 1.944 ms 2 198.51.100.2 (198.51.100.2) 1.868 ms 1.798 ms 1.549 ms",
"ip rule list 0: from all lookup local 5 : from 10.0.0.0/24 lookup 5000 32766: from all lookup main 32767: from all lookup default",
"ip route list table 5000 0.0.0.0/0 via 192.0.2.2 dev enp1s0 proto static metric 100 10.0.0.0/24 dev enp8s0 proto static scope link src 192.0.2.1 metric 102",
"firewall-cmd --get-active-zones external interfaces: enp1s0 enp7s0 trusted interfaces: enp8s0 enp9s0",
"firewall-cmd --info-zone=external external (active) target: default icmp-block-inversion: no interfaces: enp1s0 enp7s0 sources: services: ssh ports: protocols: masquerade: yes",
"192.0.2.0/24 via 198.51.100.1 table 1 203.0.113.0/24 via 198.51.100.2 table 2",
"from 192.0.2.0/24 lookup 1 from 203.0.113.0/24 lookup 2",
"1 Provider_A 2 Provider_B",
"TYPE=Ethernet IPADDR=198.51.100.1 PREFIX=30 GATEWAY=198.51.100.2 DNS1=198.51.100.200 DEFROUTE=yes NAME=1_Provider-A DEVICE=enp7s0 ONBOOT=yes ZONE=external",
"TYPE=Ethernet IPADDR=192.0.2.1 PREFIX=30 DEFROUTE=no NAME=2_Provider-B DEVICE=enp1s0 ONBOOT=yes ZONE=external",
"0.0.0.0/0 via 192.0.2.2 table 5000",
"TYPE=Ethernet IPADDR=10.0.0.1 PREFIX=24 DEFROUTE=no NAME=3_Internal-Workstations DEVICE=enp8s0 ONBOOT=yes ZONE=internal",
"pri 5 from 10.0.0.0/24 table 5000",
"10.0.0.0/24 via 192.0.2.1 table 5000",
"TYPE=Ethernet IPADDR=203.0.113.1 PREFIX=24 DEFROUTE=no NAME=4_Servers DEVICE=enp9s0 ONBOOT=yes ZONE=internal",
"systemctl restart network",
"yum install traceroute",
"traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 10.0.0.1 (10.0.0.1) 0.337 ms 0.260 ms 0.223 ms 2 192.0.2.1 (192.0.2.1) 0.884 ms 1.066 ms 1.248 ms",
"yum install traceroute",
"traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 203.0.113.1 (203.0.113.1) 2.179 ms 2.073 ms 1.944 ms 2 198.51.100.2 (198.51.100.2) 1.868 ms 1.798 ms 1.549 ms",
"ip rule list 0: from all lookup local 5 : from 10.0.0.0/24 lookup 5000 32766: from all lookup main 32767: from all lookup default",
"ip route list table 5000 default via 192.0.2.2 dev enp1s0 10.0.0.0/24 via 192.0.2.1 dev enp1s0",
"firewall-cmd --get-active-zones external interfaces: enp1s0 enp7s0 internal interfaces: enp8s0 enp9s0",
"firewall-cmd --info-zone=external external (active) target: default icmp-block-inversion: no interfaces: enp1s0 enp7s0 sources: services: ssh ports: protocols: masquerade: yes"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/configuring-policy-based-routing-to-define-alternative-routes_configuring-and-managing-networking
|
22.3. Connecting to a Samba Share
|
22.3. Connecting to a Samba Share You can use Nautilus to view available Samba shares on your network. Select Main Menu Button (on the Panel) => Network Servers to view a list of Samba workgroups on your network. You can also type smb: in the Location: bar of Nautilus to view the workgroups. As shown in Figure 22.6, "SMB Workgroups in Nautilus" , an icon appears for each available SMB workgroup on the network. Figure 22.6. SMB Workgroups in Nautilus Double-click one of the workgroup icons to view a list of computers within the workgroup. Figure 22.7. SMB Machines in Nautilus As you can see from Figure 22.7, "SMB Machines in Nautilus" , there is an icon for each machine within the workgroup. Double-click on an icon to view the Samba shares on the machine. If a username and password combination is required, you are prompted for them. Alternatively, you can specify the Samba server and sharename in the Location: bar of Nautilus using the following syntax (replace <servername> and <sharename> with the appropriate values): 22.3.1. Command Line To query the network for Samba servers, use the findsmb command. For each server found, it displays its IP address, NetBIOS name, workgroup name, operating system, and SMB server version. To connect to a Samba share from a shell prompt, type the following command: Replace <hostname> with the hostname or IP address of the Samba server you want to connect to, <sharename> with the name of the shared directory you want to browse, and <username> with the Samba username for the system. Enter the correct password or press Enter if no password is required for the user. If you see the smb:\> prompt, you have successfully logged in. Once you are logged in, type help for a list of commands. If you wish to browse the contents of your home directory, replace sharename with your username. If the -U switch is not used, the username of the current user is passed to the Samba server. To exit smbclient , type exit at the smb:\> prompt.
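To illustrate what a typical smbclient session looks like, the following example assumes a server named server1, a share named jsmith, and a file named report.txt; all three names are placeholders:

smbclient //server1/jsmith -U jsmith
smb:\> ls
smb:\> get report.txt
smb:\> exit

The ls, get, and help commands are available at the smb:\> prompt; get copies the named file from the share into your current local directory.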
|
[
"smb:// <servername> / <sharename> /",
"smbclient // <hostname> / <sharename> -U <username>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/samba-connecting_to_a_samba_share
|
4.6. Configuring Fence Devices
|
4.6. Configuring Fence Devices Configuring fence devices consists of creating, updating, and deleting fence devices for the cluster. You must configure the fence devices in a cluster before you can configure fencing for the nodes in the cluster. Creating a fence device consists of selecting a fence device type and entering parameters for that fence device (for example, name, IP address, login, and password). Updating a fence device consists of selecting an existing fence device and changing parameters for that fence device. Deleting a fence device consists of selecting an existing fence device and deleting it. Note It is recommended that you configure multiple fencing mechanisms for each node. A fencing device can fail due to network split, a power outage, or a problem in the fencing device itself. Configuring multiple fencing mechanisms can reduce the likelihood that the failure of a fencing device will have fatal results. This section provides procedures for the following tasks: Creating fence devices - Refer to Section 4.6.1, "Creating a Fence Device" . Once you have created and named a fence device, you can configure the fence devices for each node in the cluster, as described in Section 4.7, "Configuring Fencing for Cluster Members" . Updating fence devices - Refer to Section 4.6.2, "Modifying a Fence Device" . Deleting fence devices - Refer to Section 4.6.3, "Deleting a Fence Device" . From the cluster-specific page, you can configure fence devices for that cluster by clicking on Fence Devices along the top of the cluster display. This displays the fence devices for the cluster and displays the menu items for fence device configuration: Add and Delete . This is the starting point of each procedure described in the following sections. Note If this is an initial cluster configuration, no fence devices have been created, and therefore none are displayed. Figure 4.6, "luci fence devices configuration page" shows the fence devices configuration screen before any fence devices have been created. Figure 4.6. luci fence devices configuration page 4.6.1. Creating a Fence Device To create a fence device, follow these steps: From the Fence Devices configuration page, click Add . Clicking Add displays the Add Fence Device (Instance) dialog box. From this dialog box, select the type of fence device to configure. Specify the information in the Add Fence Device (Instance) dialog box according to the type of fence device. Refer to Appendix A, Fence Device Parameters for more information about fence device parameters. In some cases you will need to specify additional node-specific parameters for the fence device when you configure fencing for the individual nodes, as described in Section 4.7, "Configuring Fencing for Cluster Members" . Click Submit . After the fence device has been added, it appears on the Fence Devices configuration page.
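If you administer the cluster from a shell instead of the luci interface, the ccs utility can create a fence device in a comparable way. The following is only a sketch: the device name, fence agent, IP address, and credentials are placeholder values, and the options that apply depend on the fence agent you select (refer to Appendix A, Fence Device Parameters):

ccs -h node1.example.com --addfencedev myapc agent=fence_apc ipaddr=192.168.1.100 login=admin passwd=password

You can then list the configured fence devices with ccs -h node1.example.com --lsfencedev to confirm that the device was added.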
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-fence-devices-conga-CA
|
Automation execution API overview
|
Automation execution API overview Red Hat Ansible Automation Platform 2.5 Developer overview for the automation controller API Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/automation_execution_api_overview/index
|
Storage APIs
|
Storage APIs OpenShift Container Platform 4.15 Reference guide for storage APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/storage_apis/index
|
Chapter 6. Resource Constraints
|
Chapter 6. Resource Constraints You can determine the behavior of a resource in a cluster by configuring constraints for that resource. You can configure the following categories of constraints: location constraints - A location constraint determines which nodes a resource can run on. Location constraints are described in Section 6.1, "Location Constraints" . order constraints - An order constraint determines the order in which the resources run. Order constraints are described in Section 6.2, "Order Constraints" . colocation constraints - A colocation constraint determines where resources will be placed relative to other resources. Colocation constraints are described in Section 6.3, "Colocation of Resources" . As a shorthand for configuring a set of constraints that will locate a set of resources together and ensure that the resources start sequentially and stop in reverse order, Pacemaker supports the concept of resource groups. For information on resource groups, see Section 5.5, "Resource Groups" . 6.1. Location Constraints Location constraints determine which nodes a resource can run on. You can configure location constraints to determine whether a resource will prefer or avoid a specified node. Table 6.1, "Location Constraint Options" summarizes the options for configuring location constraints. Table 6.1. Location Constraint Options Field Description id A unique name for the constraint. This is set by the system when you configure a location constraint with pcs . rsc A resource name node A node's name score Value to indicate the preference for whether a resource should run on or avoid a node. A value of INFINITY changes "should" to "must"; INFINITY is the default score value for a resource location constraint. resource-discovery Value to indicate the preference for whether Pacemaker should perform resource discovery on this node for the specified resource. Limiting resource discovery to a subset of nodes the resource is physically capable of running on can significantly boost performance when a large set of nodes is present. When pacemaker_remote is in use to expand the node count into the hundreds of nodes range, this option should be considered. Possible values include: always : Always perform resource discovery for the specified resource on this node. never : Never perform resource discovery for the specified resource on this node. exclusive : Perform resource discovery for the specified resource only on this node (and other nodes similarly marked as exclusive ). Multiple location constraints using exclusive discovery for the same resource across different nodes create a subset of nodes that resource-discovery is exclusive to. If a resource is marked for exclusive discovery on one or more nodes, that resource is only allowed to be placed within that subset of nodes. Note that setting this option to never or exclusive allows the possibility for the resource to be active in those locations without the cluster's knowledge. This can lead to the resource being active in more than one location if the service is started outside the cluster's control (for example, by systemd or by an administrator). This can also occur if the resource-discovery property is changed while part of the cluster is down or suffering split-brain, or if the resource-discovery property is changed for a resource and node while the resource is active on that node.
For this reason, using this option is appropriate only when you have more than eight nodes and there is a way to guarantee that the resource can run only in a particular location (for example, when the required software is not installed anywhere else). always is the default resource-discovery value for a resource location constraint. The following command creates a location constraint for a resource to prefer the specified node or nodes. The following command creates a location constraint for a resource to avoid the specified node or nodes. There are two alternative strategies for specifying which nodes a resource can run on: Opt-In Clusters - Configure a cluster in which, by default, no resource can run anywhere and then selectively enable allowed nodes for specific resources. The procedure for configuring an opt-in cluster is described in Section 6.1.1, "Configuring an "Opt-In" Cluster" . Opt-Out Clusters - Configure a cluster in which, by default, all resources can run anywhere and then create location constraints for resources that are not allowed to run on specific nodes. The procedure for configuring an opt-out cluster is described in Section 6.1.2, "Configuring an "Opt-Out" Cluster" . Whether you should choose to configure an opt-in or opt-out cluster depends both on your personal preference and the make-up of your cluster. If most of your resources can run on most of the nodes, then an opt-out arrangement is likely to result in a simpler configuration. On the other hand, if most resources can run only on a small subset of nodes, an opt-in configuration might be simpler. 6.1.1. Configuring an "Opt-In" Cluster To create an opt-in cluster, set the symmetric-cluster cluster property to false to prevent resources from running anywhere by default. Enable nodes for individual resources. The following commands configure location constraints so that the resource Webserver prefers node example-1 , the resource Database prefers node example-2 , and both resources can fail over to node example-3 if their preferred node fails. 6.1.2. Configuring an "Opt-Out" Cluster To create an opt-out cluster, set the symmetric-cluster cluster property to true to allow resources to run everywhere by default. The following commands will then yield a configuration that is equivalent to the example in Section 6.1.1, "Configuring an "Opt-In" Cluster" . Both resources can fail over to node example-3 if their preferred node fails, since every node has an implicit score of 0. Note that it is not necessary to specify a score of INFINITY in these commands, since that is the default value for the score.
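As a brief illustration of these commands, the following lines create a preference constraint that also limits resource discovery, and then list the recorded location constraints; WebserverDiscovery is only a placeholder constraint ID, and the output format can differ between pcs versions:

pcs constraint location add WebserverDiscovery Webserver example-1 INFINITY resource-discovery=exclusive
pcs constraint location show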
|
[
"pcs constraint location rsc prefers node [= score ]",
"pcs constraint location rsc avoids node [= score ]",
"pcs property set symmetric-cluster=false",
"pcs constraint location Webserver prefers example-1=200 pcs constraint location Webserver prefers example-3=0 pcs constraint location Database prefers example-2=200 pcs constraint location Database prefers example-3=0",
"pcs property set symmetric-cluster=true",
"pcs constraint location Webserver prefers example-1=200 pcs constraint location Webserver avoids example-2=INFINITY pcs constraint location Database avoids example-1=INFINITY pcs constraint location Database prefers example-2=200"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/ch-resourceconstraints-HAAR
|
Deploy Red Hat Quay for proof-of-concept (non-production) purposes
|
Deploy Red Hat Quay for proof-of-concept (non-production) purposes Red Hat Quay 3.9 Deploy Red Hat Quay Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/deploy_red_hat_quay_for_proof-of-concept_non-production_purposes/index
|
Chapter 19. Service [v1]
|
Chapter 19. Service [v1] Description Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy. Type object 19.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ServiceSpec describes the attributes that a user creates on a service. status object ServiceStatus represents the current status of a service. 19.1.1. .spec Description ServiceSpec describes the attributes that a user creates on a service. Type object Property Type Description allocateLoadBalancerNodePorts boolean allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is "true". It may be set to "false" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. clusterIP string clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies clusterIPs array (string) ClusterIPs is a list of IP addresses assigned to this service, and are usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. 
This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be empty) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as describe above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. If this field is not specified, it will be initialized from the clusterIP field. If this field is specified, clients must ensure that clusterIPs[0] and clusterIP have the same value. This field may hold a maximum of two entries (dual-stack IPs, in either order). These IPs must correspond to the values of the ipFamilies field. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies externalIPs array (string) externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service. These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP. A common example is external load-balancers that are not part of the Kubernetes system. externalName string externalName is the external reference that discovery mechanisms will return as an alias for this service (e.g. a DNS CNAME record). No proxying will be involved. Must be a lowercase RFC-1123 hostname ( https://tools.ietf.org/html/rfc1123 ) and requires type to be "ExternalName". externalTrafficPolicy string externalTrafficPolicy describes how nodes distribute service traffic they receive on one of the Service's "externally-facing" addresses (NodePorts, ExternalIPs, and LoadBalancer IPs). If set to "Local", the proxy will configure the service in a way that assumes that external load balancers will take care of balancing the service traffic between nodes, and so each node will deliver traffic only to the node-local endpoints of the service, without masquerading the client source IP. (Traffic mistakenly sent to a node with no endpoints will be dropped.) The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Note that traffic sent to an External IP or LoadBalancer IP from within the cluster will always get "Cluster" semantics, but clients sending to a NodePort from within the cluster may need to take traffic policy into account when picking a node. Possible enum values: - "Cluster" - "Cluster" routes traffic to all endpoints. - "Local" - "Local" preserves the source IP of the traffic by routing only to endpoints on the same node as the traffic was received on (dropping the traffic if there are no local endpoints). healthCheckNodePort integer healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. External systems (e.g. 
load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type). This field cannot be updated once set. internalTrafficPolicy string InternalTrafficPolicy describes how nodes distribute service traffic they receive on the ClusterIP. If set to "Local", the proxy will assume that pods only want to talk to endpoints of the service on the same node as the pod, dropping the traffic if there are no local endpoints. The default value, "Cluster", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Possible enum values: - "Cluster" routes traffic to all endpoints. - "Local" routes traffic only to endpoints on the same node as the client pod (dropping the traffic if there are no local endpoints). ipFamilies array (string) IPFamilies is a list of IP families (e.g. IPv4, IPv6) assigned to this service. This field is usually assigned automatically based on cluster configuration and the ipFamilyPolicy field. If this field is specified manually, the requested family is available in the cluster, and ipFamilyPolicy allows it, it will be used; otherwise creation of the service will fail. This field is conditionally mutable: it allows for adding or removing a secondary IP family, but it does not allow changing the primary IP family of the Service. Valid values are "IPv4" and "IPv6". This field only applies to Services of types ClusterIP, NodePort, and LoadBalancer, and does apply to "headless" services. This field will be wiped when updating a Service to type ExternalName. This field may hold a maximum of two entries (dual-stack families, in either order). These families must correspond to the values of the clusterIPs field, if specified. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field. ipFamilyPolicy string IPFamilyPolicy represents the dual-stack-ness requested or required by this Service. If there is no value provided, then this field will be set to SingleStack. Services can be "SingleStack" (a single IP family), "PreferDualStack" (two IP families on dual-stack configured clusters or a single IP family on single-stack clusters), or "RequireDualStack" (two IP families on dual-stack configured clusters, otherwise fail). The ipFamilies and clusterIPs fields depend on the value of this field. This field will be wiped when updating a service to type ExternalName. Possible enum values: - "PreferDualStack" indicates that this service prefers dual-stack when the cluster is configured for dual-stack. If the cluster is not configured for dual-stack the service will be assigned a single IPFamily. If the IPFamily is not set in service.spec.ipFamilies then the service will be assigned the default IPFamily configured on the cluster - "RequireDualStack" indicates that this service requires dual-stack. Using IPFamilyPolicyRequireDualStack on a single stack cluster will result in validation errors. The IPFamilies (and their order) assigned to this service is based on service.spec.ipFamilies. If service.spec.ipFamilies was not provided then it will be assigned according to how they are configured on the cluster. If service.spec.ipFamilies has only one entry then the alternative IPFamily will be added by apiserver - "SingleStack" indicates that this service is required to have a single IPFamily. 
The IPFamily assigned is based on the default IPFamily used by the cluster or as identified by service.spec.ipFamilies field loadBalancerClass string loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. "internal-vip" or "example.com/internal-vip". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type. loadBalancerIP string Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations. Using it is non-portable and it may not support dual-stack. Users are encouraged to use implementation-specific annotations when available. loadBalancerSourceRanges array (string) If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature." More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/ ports array The list of ports that are exposed by this service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies ports[] object ServicePort contains information on service's port. publishNotReadyAddresses boolean publishNotReadyAddresses indicates that any agent which deals with endpoints for this Service should disregard any indications of ready/not-ready. The primary use case for setting this field is for a StatefulSet's Headless Service to propagate SRV DNS records for its Pods for the purpose of peer discovery. The Kubernetes controllers that generate Endpoints and EndpointSlice resources for Services interpret this to mean that all endpoints are considered "ready" even if the Pods themselves are not. Agents which consume only Kubernetes generated endpoints through the Endpoints or EndpointSlice resources can safely assume this behavior. selector object (string) Route service traffic to pods with label keys and values matching this selector. If empty or not present, the service is assumed to have an external process managing its endpoints, which Kubernetes will not modify. Only applies to types ClusterIP, NodePort, and LoadBalancer. Ignored if type is ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/ sessionAffinity string Supports "ClientIP" and "None". Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. 
More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies Possible enum values: - "ClientIP" is the Client IP based. - "None" - no session affinity. sessionAffinityConfig object SessionAffinityConfig represents the configurations of session affinity. type string type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer. "ClusterIP" allocates a cluster-internal IP address for load-balancing to endpoints. Endpoints are determined by the selector or if that is not specified, by manual construction of an Endpoints object or EndpointSlice objects. If clusterIP is "None", no virtual IP is allocated and the endpoints are published as a set of endpoints rather than a virtual IP. "NodePort" builds on ClusterIP and allocates a port on every node which routes to the same endpoints as the clusterIP. "LoadBalancer" builds on NodePort and creates an external load-balancer (if supported in the current cloud) which routes to the same endpoints as the clusterIP. "ExternalName" aliases this service to the specified externalName. Several other fields do not apply to ExternalName services. More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types Possible enum values: - "ClusterIP" means a service will only be accessible inside the cluster, via the cluster IP. - "ExternalName" means a service consists of only a reference to an external name that kubedns or equivalent will return as a CNAME record, with no exposing or proxying of any pods involved. - "LoadBalancer" means a service will be exposed via an external load balancer (if the cloud provider supports it), in addition to 'NodePort' type. - "NodePort" means a service will be exposed on one port of every node, in addition to 'ClusterIP' type. 19.1.2. .spec.ports Description The list of ports that are exposed by this service. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies Type array 19.1.3. .spec.ports[] Description ServicePort contains information on service's port. Type object Required port Property Type Description appProtocol string The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. Valid values are either: * Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names ). * Kubernetes-defined prefixed names: * 'kubernetes.io/h2c' - HTTP/2 over cleartext as described in https://www.rfc-editor.org/rfc/rfc7540 * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455 * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455 * Other protocols should use implementation-defined prefixed names such as mycompany.com/my-custom-protocol. name string The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service. nodePort integer The port on each node on which this service is exposed when type is NodePort or LoadBalancer. Usually assigned by the system. 
If a value is specified, in-range, and not in use it will be used, otherwise the operation will fail. If not specified, a port will be allocated if this Service requires one. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type from NodePort to ClusterIP). More info: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport port integer The port that will be exposed by this service. protocol string The IP protocol for this port. Supports "TCP", "UDP", and "SCTP". Default is TCP. Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. targetPort IntOrString Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service 19.1.4. .spec.sessionAffinityConfig Description SessionAffinityConfig represents the configurations of session affinity. Type object Property Type Description clientIP object ClientIPConfig represents the configurations of Client IP based session affinity. 19.1.5. .spec.sessionAffinityConfig.clientIP Description ClientIPConfig represents the configurations of Client IP based session affinity. Type object Property Type Description timeoutSeconds integer timeoutSeconds specifies the seconds of ClientIP type session sticky time. The value must be >0 && ⇐86400(for 1 day) if ServiceAffinity == "ClientIP". Default value is 10800(for 3 hours). 19.1.6. .status Description ServiceStatus represents the current status of a service. Type object Property Type Description conditions array (Condition) Current service state loadBalancer object LoadBalancerStatus represents the status of a load-balancer. 19.1.7. .status.loadBalancer Description LoadBalancerStatus represents the status of a load-balancer. Type object Property Type Description ingress array Ingress is a list containing ingress points for the load-balancer. Traffic intended for the service should be sent to these ingress points. ingress[] object LoadBalancerIngress represents the status of a load-balancer ingress point: traffic intended for the service should be sent to an ingress point. 19.1.8. .status.loadBalancer.ingress Description Ingress is a list containing ingress points for the load-balancer. Traffic intended for the service should be sent to these ingress points. Type array 19.1.9. .status.loadBalancer.ingress[] Description LoadBalancerIngress represents the status of a load-balancer ingress point: traffic intended for the service should be sent to an ingress point. Type object Property Type Description hostname string Hostname is set for load-balancer ingress points that are DNS based (typically AWS load-balancers) ip string IP is set for load-balancer ingress points that are IP based (typically GCE or OpenStack load-balancers) ports array Ports is a list of records of service ports If used, every port defined in the service should have an entry in it ports[] object 19.1.10. 
.status.loadBalancer.ingress[].ports Description Ports is a list of records of service ports If used, every port defined in the service should have an entry in it Type array 19.1.11. .status.loadBalancer.ingress[].ports[] Description Type object Required port protocol Property Type Description error string Error is to record the problem with the service port The format of the error shall comply with the following rules: - built-in error values shall be specified in this file and those shall use CamelCase names - cloud provider specific error values must have names that comply with the format foo.example.com/CamelCase. port integer Port is the port number of the service port of which status is recorded here protocol string Protocol is the protocol of the service port of which status is recorded here The supported values are: "TCP", "UDP", "SCTP" Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 19.2. API endpoints The following API endpoints are available: /api/v1/services GET : list or watch objects of kind Service /api/v1/watch/services GET : watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/services DELETE : delete collection of Service GET : list or watch objects of kind Service POST : create a Service /api/v1/watch/namespaces/{namespace}/services GET : watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/services/{name} DELETE : delete a Service GET : read the specified Service PATCH : partially update the specified Service PUT : replace the specified Service /api/v1/watch/namespaces/{namespace}/services/{name} GET : watch changes to an object of kind Service. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{namespace}/services/{name}/status GET : read status of the specified Service PATCH : partially update status of the specified Service PUT : replace status of the specified Service 19.2.1. /api/v1/services HTTP method GET Description list or watch objects of kind Service Table 19.1. HTTP responses HTTP code Reponse body 200 - OK ServiceList schema 401 - Unauthorized Empty 19.2.2. /api/v1/watch/services HTTP method GET Description watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. Table 19.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 19.2.3. /api/v1/namespaces/{namespace}/services HTTP method DELETE Description delete collection of Service Table 19.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 19.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Service Table 19.5. HTTP responses HTTP code Reponse body 200 - OK ServiceList schema 401 - Unauthorized Empty HTTP method POST Description create a Service Table 19.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.7. Body parameters Parameter Type Description body Service schema Table 19.8. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 202 - Accepted Service schema 401 - Unauthorized Empty 19.2.4. /api/v1/watch/namespaces/{namespace}/services HTTP method GET Description watch individual changes to a list of Service. deprecated: use the 'watch' parameter with a list operation instead. Table 19.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 19.2.5. /api/v1/namespaces/{namespace}/services/{name} Table 19.10. Global path parameters Parameter Type Description name string name of the Service HTTP method DELETE Description delete a Service Table 19.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 19.12. HTTP responses HTTP code Reponse body 200 - OK Service schema 202 - Accepted Service schema 401 - Unauthorized Empty HTTP method GET Description read the specified Service Table 19.13. HTTP responses HTTP code Reponse body 200 - OK Service schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Service Table 19.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.15. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Service Table 19.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.17. Body parameters Parameter Type Description body Service schema Table 19.18. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty 19.2.6. /api/v1/watch/namespaces/{namespace}/services/{name} Table 19.19. Global path parameters Parameter Type Description name string name of the Service HTTP method GET Description watch changes to an object of kind Service. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 19.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 19.2.7. /api/v1/namespaces/{namespace}/services/{name}/status Table 19.21. Global path parameters Parameter Type Description name string name of the Service HTTP method GET Description read status of the specified Service Table 19.22. HTTP responses HTTP code Reponse body 200 - OK Service schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Service Table 19.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.24. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Service Table 19.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.26. Body parameters Parameter Type Description body Service schema Table 19.27. HTTP responses HTTP code Reponse body 200 - OK Service schema 201 - Created Service schema 401 - Unauthorized Empty
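To tie the spec fields above together, the following is a minimal sketch of a Service manifest and a status read-back; the namespace, name, selector, and port values are hypothetical examples rather than values taken from this reference:
# Create a NodePort Service with ClientIP session affinity (example values only):
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: example-svc
  namespace: example-ns
spec:
  type: NodePort
  selector:
    app: example
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
EOF
# Read back the object, including status.loadBalancer ingress points if any:
oc get service example-svc -n example-ns -o yaml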
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_apis/service-v1
|
10.4. Using KVM virtio Drivers for Existing Devices
|
10.4. Using KVM virtio Drivers for Existing Devices You can modify an existing hard disk device attached to the guest to use the virtio driver instead of the virtualized IDE driver. The example shown in this section edits libvirt configuration files. Note that the guest virtual machine does not need to be shut down to perform these steps; however, the change will not be applied until the guest is completely shut down and rebooted. Procedure 10.4. Using KVM virtio drivers for existing devices Ensure that you have installed the appropriate driver ( viostor ), as described in Section 10.1, "Installing the KVM Windows virtio Drivers" , before continuing with this procedure. Run the virsh edit <guestname> command as root to edit the XML configuration file for your device. For example, virsh edit guest1 . The configuration files are located in /etc/libvirt/qemu . Below is a file-based block device using the virtualized IDE driver. This is a typical entry for a virtual machine not using the virtio drivers. Change the entry to use the virtio device by modifying the bus= entry to virtio . Note that if the disk was previously IDE, it will have a target similar to hda, hdb, hdc, and so on. When changing to bus=virtio , the target needs to be changed to vda, vdb, or vdc accordingly. Remove the address tag inside the disk tags. This must be done for this procedure to work. Libvirt will regenerate the address tag appropriately the next time the virtual machine is started. Alternatively, virt-manager , virsh attach-disk , or virsh attach-interface can add a new device using the virtio drivers. Refer to the libvirt website for more details on using Virtio: http://www.linux-kvm.org/page/Virtio
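For reference, a minimal sketch of the edit-and-restart workflow follows; the guest name and image path are the hypothetical examples used above, so adjust them to your environment:
# Edit the guest XML: change bus='ide' to bus='virtio', change the target dev
# from hda to vda, and remove the <address .../> element inside the <disk> block.
virsh edit guest1
# The change takes effect only after a full shutdown and restart of the guest:
virsh shutdown guest1
virsh start guest1
# Alternatively, attach an additional disk that uses virtio from the start:
virsh attach-disk guest1 /var/lib/libvirt/images/disk2.img vdb --persistent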
|
[
"<disk type='file' device='disk'> <source file='/var/lib/libvirt/images/disk1.img'/> <target dev='hda' bus='ide'/> </disk>",
"<disk type='file' device='disk'> <source file='/var/lib/libvirt/images/disk1.img'/> <target dev='vda' bus='virtio'/> </disk>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/form-virtualization_host_configuration_and_guest_installation_guide-para_virtualized_drivers-using_kvm_para_virtualized_drivers_for_existing_devices
|
Getting Started with Streams for Apache Kafka on OpenShift
|
Getting Started with Streams for Apache Kafka on OpenShift Red Hat Streams for Apache Kafka 2.7 Get started using Streams for Apache Kafka 2.7 on OpenShift Container Platform
|
[
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: amq-streams-kafka spec: kafka: # listeners: # - name: listener1 port: 9094 type: route tls: true",
"get routes my-cluster-kafka-listener1-bootstrap -o=jsonpath='{.status.ingress[0].host}{\"\\n\"}'",
"extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt",
"keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt",
"run kafka-producer -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic",
"run kafka-consumer -ti --image=registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning",
"kafka-console-producer.sh --bootstrap-server my-cluster-kafka-listener1-bootstrap-amq-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 --producer-property security.protocol=SSL --producer-property ssl.truststore.password=password --producer-property ssl.truststore.location=client.truststore.jks --topic my-topic",
"kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-listener1-bootstrap-amq-streams-kafka.apps.ci-ln-50kcyvt-72292.origin-ci-int-gce.dev.rhcloud.com:443 --consumer-property security.protocol=SSL --consumer-property ssl.truststore.password=password --consumer-property ssl.truststore.location=client.truststore.jks --topic my-topic --from-beginning",
"dnf install <package_name>",
"dnf install <path_to_download_package>"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html-single/getting_started_with_streams_for_apache_kafka_on_openshift/index
|
Chapter 7. Working with heat templates
|
Chapter 7. Working with heat templates Use heat templates and environment files to define certain aspects of the overcloud. The structure of a heat template has three main sections: Parameters Parameters are settings passed to heat. Use these parameters to define and customize both default and non-default values. Define these parameters in the parameters section of a template. Resources Resources are the specific objects that you want to create and configure as part of a stack. RHOSP contains a set of core resources that span across all components. Output These are values passed from heat after the stack creation. You can access these values either through the heat API or through the client tools. Define these values in the output section of a template. When heat processes a template, it creates a stack for the template and a set of child stacks for resource templates. This hierarchy of stacks descends from the main stack that you define with your template. You can view the stack hierarchy with the following command: 7.1. Core heat templates Red Hat OpenStack Platform (RHOSP) contains a collection of core heat templates for the overcloud. You can find this collection in the /usr/share/openstack-tripleo-heat-templates directory. There are many heat templates and environment files in this collection. You can use the main files and directories to customize your deployment. overcloud.j2.yaml This template file creates the overcloud environment. It uses Jinja2 syntax and iterates over certain sections in the template to create custom roles. During the overcloud deployment, director renders the Jinja2 formatting into YAML. overcloud-resource-registry-puppet.j2.yaml This environment file creates the overcloud environment. It contains a set of configurations for Puppet modules on the overcloud image. After director writes the overcloud image to each node, Heat starts the Puppet configuration for each node by using the resources that are registered in this environment file. This file uses Jinja2 syntax and iterates over certain sections in the template to create custom roles. During the overcloud deployment, director renders the Jinja2 formatting into YAML. roles_data.yaml This file contains definitions of the roles in an overcloud, and maps services to each role. network_data.yaml This file contains definitions of the networks in an overcloud and their properties, including subnets, allocation pools, and VIP status. The default network_data.yaml file contains only the default networks: External, Internal Api, Storage, Storage Management, Tenant, and Management. You can create a custom network_data.yaml file and include it in the openstack overcloud deploy command with the -n option. plan-environment.yaml This file contains definitions of the metadata for your overcloud plan, including the plan name, the main template that you want to use, and environment files that you want to apply to the overcloud. capabilities-map.yaml This file contains a mapping of environment files for an overcloud plan. Use this file to describe and enable environment files in the director web UI. If you include custom environment files in the environments directory but do not define these files in the capabilities-map.yaml file, you can find these environment files in the Other sub-tab of the Overall Settings page on the web UI. environments This directory contains additional heat environment files that you can use with your overcloud creation. These environment files enable extra functions for your RHOSP environment. 
For example, you can use the cinder-netapp-config.yaml environment file to enable a 3rd-party back end storage option for the Block Storage service (cinder). If you include custom environment files in the environments directory but do not define these files in the capabilities-map.yaml file, you can find these environment files in the Other sub-tab of the Overall Settings page on the web UI. network This directory contains a set of heat templates that you can use to create isolated networks and ports. puppet This directory contains puppet templates. The overcloud-resource-registry-puppet.j2.yaml environment file uses the files in the puppet directory to drive the application of the Puppet configuration on each node. puppet/services This directory contains heat templates for all services in the composable service architecture. extraconfig This directory contains templates that you can use to enable extra functionality. For example, you can use the extraconfig/pre_deploy/rhel-registration directory to register your nodes with the Red Hat Content Delivery network, or with your own Red Hat Satellite server.
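For illustration, the following is a minimal sketch of a heat template showing the parameters, resources, and outputs sections described at the start of this chapter; the resource type, image, and flavor values are generic placeholders, not values required by the core templates:
# Write a small template and create a stack from it (values are placeholders):
cat > example-template.yaml <<'EOF'
heat_template_version: rocky

parameters:
  flavor:
    type: string
    default: m1.small

resources:
  example_server:
    type: OS::Nova::Server
    properties:
      image: example-image
      flavor: {get_param: flavor}

outputs:
  server_ip:
    description: First IP address of the created server
    value: {get_attr: [example_server, first_address]}
EOF
openstack stack create -t example-template.yaml example-stack
# List the stack and any nested stacks it creates:
openstack stack list --nested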
|
[
"heat stack-list --show-nested"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/standalone_deployment_guide/working-with-heat-templates
|
Chapter 4. Selecting a system-wide archive Red Hat build of OpenJDK version
|
Chapter 4. Selecting a system-wide archive Red Hat build of OpenJDK version If you have multiple versions of Red Hat build of OpenJDK installed with the archive on RHEL, you can select a specific Red Hat build of OpenJDK version to use system-wide. Prerequisites Know the locations of the Red Hat build of OpenJDK versions installed using the archive. Procedure To specify the Red Hat build of OpenJDK version to use for a single session: Configure JAVA_HOME with the path to the Red Hat build of OpenJDK version you want used system-wide. $ export JAVA_HOME=/opt/jdk/openjdk-17.0.0.0.35 Add $JAVA_HOME/bin to the PATH environment variable. $ export PATH="$JAVA_HOME/bin:$PATH" To specify the Red Hat build of OpenJDK version to use permanently for a single user, add these commands into ~/.bashrc : To specify the Red Hat build of OpenJDK version to use permanently for all users, add these commands into /etc/bashrc : Note If you do not want to redefine JAVA_HOME , add only the PATH command to bashrc , specifying the path to the Java binary. For example, export PATH="/opt/jdk/openjdk-17.0.0.0.35/bin:$PATH" . Additional resources Be aware of the exact meaning of JAVA_HOME . For more information, see Changes/Decouple system java setting from java command setting .
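A quick way to verify the selection for the current session is shown below; the install path is the example location used above, so substitute your actual archive directory:
export JAVA_HOME=/opt/jdk/openjdk-17.0.0.0.35
export PATH="$JAVA_HOME/bin:$PATH"
# Confirm which binary is resolved and its version:
which java
java -version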
|
[
"export JAVA_HOME=/opt/jdk/openjdk-17.0.0.0.35 export PATH=\"USDJAVA_HOME/bin:USDPATH\"",
"export JAVA_HOME=/opt/jdk/openjdk-17.0.0.0.35 export PATH=\"USDJAVA_HOME/bin:USDPATH\""
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/configuring_red_hat_build_of_openjdk_21_on_rhel/selecting-systemwide-archive-openjdk-version
|
8.120. libselinux
|
8.120. libselinux 8.120.1. RHBA-2014:1469 - libselinux bug fix update Updated libselinux packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The libselinux packages contain the core library of an SELinux system. The libselinux library provides an API for SELinux applications to get and set process and file security contexts, and to obtain security policy decisions. It is required for any applications that use the SELinux API, and used by all applications that are SELinux-aware. Bug Fixes BZ# 753675 When attempting to run the virt-manager utility over SSH X11 forwarding, SELinux prevented the D-Bus system from performing actions even if SELinux was in permissive mode. As a consequence, such attempts failed and an AVC denial message was logged. With this update, a patch has been provided to fix this bug, and SELinux in permissive mode no longer blocks D-Bus in the described scenario. BZ# 1011109 Prior to this update, the selinux(8) manual page contained outdated information. This manual page has been updated, and SELinux is now documented correctly. BZ# 1025507 The Name Server Caching Daemon (nscd) uses SELinux permissions to check if a connecting user is allowed to query the cache. However, two permissions, NSCD__GETNETGRP and NSCD__SHMEMNETGRP, were missing from the SELinux list of permissions. Consequently, the netgroup caching worked only when SELinux was running in permissive mode. The missing permissions have been added to the list, and the netgroup caching now works as expected. BZ# 1091857 Previously, the matchpathcon utility did not handle non-existent files or directories properly; the "matchpathcon -V" command verified the files of directories instead of specifying that they did not exist. The underlying source code has been modified to fix this bug, and matchpathcon now correctly recognizes non-existent files or directories. As a result, an error message is returned when a file or directory do not exist. BZ# 1096816 It was not possible to add a new user inside a Docker container because SELinux in enforcing or permissive mode incorrectly blocked an attempt to modify the /etc/passwd file. With this update, when the /selinux/ or /sys/fs/selinux/ directories are mounted as read-only, the libselinux library acts as if SELinux is disabled. This behavior stops SELinux-aware applications from attempting to perform SELinux actions inside a container, and /etc/passwd can now be modified as expected. Users of libselinux are advised to upgrade to these updated packages, which fix these bugs.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/libselinux
|
Chapter 2. Tutorial: ROSA with HCP activation and account linking
|
Chapter 2. Tutorial: ROSA with HCP activation and account linking This tutorial describes the process for activating Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) and linking to an AWS account, before deploying the first cluster. Important If you have received a private offer for the product, make sure to proceed according to the instructions provided with the private offer before following this tutorial. The private offer is designed either for a case when the product is already activated, which replaces an active subscription, or for first time activations. 2.1. Prerequisites Make sure to log in to the Red Hat account that you plan to associate with the AWS account where you have activated ROSA with HCP in steps. The AWS account used for service billing can only be associated with a single Red Hat account. Typically an AWS payer account is the one that is used to subscribe to ROSA and used for account linking and billing. All team members belonging to the same Red Hat organization can use the linked AWS account for service billing while creating ROSA with HCP clusters. 2.2. Subscription enablement and AWS account setup Activate the ROSA with HCP product at the AWS console page by clicking the Get started button: Figure 2.1. Get started If you have activated ROSA before but did not complete the process, you can click the button and complete the account linking as described in the following steps. Confirm that you want your contact information to be shared with Red Hat and enable the service: Figure 2.2. Enable ROSA You will not be charged by enabling the service in this step. The connection is made for billing and metering that will take place only after you deploy your first cluster. This could take a few minutes. After the process is completed, you will see a confirmation: Figure 2.3. ROSA enablement confirmation Other sections on this verification page show the status of additional prerequisites. In case any of these prerequisites are not met, a corresponding message is shown. Here is an example of insufficient quotas in the selected region: Figure 2.4. Service quotas Click the Increase service quotas button or use the Learn more link to get more information about the about how to manage service quotas. In the case of insufficient quotas, note that quotas are region-specific. You can use the region switcher in the upper right corner of the web console to re-run the quota check for any region you are interested in and then submit service quota increase requests as needed. If all the prerequisites are met, the page will look like this: Figure 2.5. Verify ROSA prerequisites The ELB service-linked role is created for you automatically. You can click any of the small Info blue links to get contextual help and resources. 2.3. AWS and Red Hat account and subscription linking Click the orange Continue to Red Hat button to proceed with account linking: Figure 2.6. Continue to Red Hat If you are not already logged in to your Red Hat account in your current browser's session, you will be asked to log in to your account: Note Your AWS account must be linked to a single Red Hat organization. Figure 2.7. Log in to your Red Hat account You can also register for a new Red Hat account or reset your password on this page. Make sure to log in to the Red Hat account that you plan to associate with the AWS account where you have activated ROSA with HCP in steps. The AWS account used for service billing can only be associated with a single Red Hat account. 
Typically an AWS payer account is the one that is used to subscribe to ROSA and used for account linking and billing. All team members belonging to the same Red Hat organization can use the linked AWS account for service billing while creating ROSA with HCP clusters. Complete the Red Hat account linking after reviewing the terms and conditions: Note This step is available only if the AWS account was not linked to any Red Hat account before. This step is skipped if the AWS account is already linked to the user's logged in Red Hat account. If the AWS account is linked to a different Red Hat account, an error will be displayed. See Correcting Billing Account Information for HCP clusters for troubleshooting. Figure 2.8. Complete your account connection Both the Red Hat and AWS account numbers are shown on this screen. Click the Connect accounts button if you agree with the service terms. If this is the first time you are using the Red Hat Hybrid Cloud Console, you will be asked to agree with the general managed services terms and conditions before being able to create the first ROSA cluster: Figure 2.9. Terms and conditions Additional terms that need to be reviewed and accepted are shown after clicking the View Terms and Conditions button: Figure 2.10. Red Hat terms and conditions Submit your agreement once you have reviewed any additional terms when prompted at this time. The Hybrid Cloud Console provides a confirmation that AWS account setup was completed and lists the prerequisites for cluster deployment: Figure 2.11. Complete ROSA prerequisites The last section of this page shows cluster deployment options, either using the rosa CLI or through the web console: Figure 2.12. Deploy the cluster and set up access 2.4. Selecting the AWS billing account for ROSA with HCP during cluster deployment using the CLI Important Make sure that you have the most recent ROSA command line interface (CLI) and AWS CLI installed and have completed the ROSA prerequisites covered in the section. See Help with ROSA CLI setup and Instructions to install the AWS CLI for more information. Initiate the cluster deployment using the rosa create cluster command. You can click the copy button on the Set up Red Hat OpenShift Service on AWS (ROSA) console page and paste the command in your terminal. This launches the cluster creation process in interactive mode: Figure 2.13. Deploy the cluster and set up access To use a custom AWS profile, one of the non-default profiles specified in your ~/.aws/credentials , you can add the -profile <profile_name> selector to the rosa create cluster command so that the command looks like rosa create cluster -profile stage . If no AWS CLI profile is specified using this option, the default AWS CLI profile will determine the AWS infrastructure profile into which the cluster is deployed. The billing AWS profile is selected in one of the following steps. When deploying a ROSA with HCP cluster, the billing AWS account needs to be specified: Figure 2.14. Specify the Billing Account Only AWS accounts that are linked to the user's logged in Red Hat account are shown. The specified AWS account is charged for using the ROSA service. An indicator shows if the ROSA contract is enabled or not enabled for a given AWS billing account. If you select an AWS billing account that shows the Contract enabled label, on-demand consumption rates are charged only after the capacity of your pre-paid contract is consumed. 
AWS accounts without the Contract enabled label are charged the applicable on-demand consumption rates. Additional resources The detailed cluster deployment steps are beyond the scope of this tutorial. See Creating ROSA with HCP clusters using the default options for more details about how to complete the ROSA with HCP cluster deployment using the CLI. 2.5. Selecting the AWS billing account for ROSA with HCP during cluster deployment using the web console A cluster can be created using the web console by selecting the second option in the bottom section of the introductory Set up ROSA page: Figure 2.15. Deploy with web interface Note Complete the prerequisites before starting the web console deployment process. The rosa CLI is required for certain tasks, such as creating the account roles. If you are deploying ROSA for the first time, follow the CLI steps until you run the rosa whoami command, before starting the web console deployment steps. The first step when creating a ROSA cluster using the web console is the control plane selection. Make sure the Hosted option is selected before clicking the button: Figure 2.16. Select hosted option The step Accounts and roles allows you to specify the infrastructure AWS account, into which the ROSA cluster is deployed and where the resources are consumed and managed: Figure 2.17. AWS infrastructure account If you do not see the account into which you want to deploy the ROSA cluster, click How to associate a new AWS account for detailed information on how to create or link account roles for this association. The rosa CLI is used for this. If you are using multiple AWS accounts and have their profiles configured for the AWS CLI, you can use the --profile selector to specify the AWS profile when working with the rosa CLI commands. The billing AWS account is selected in the immediately following section: Figure 2.18. AWS billing account Only AWS accounts that are linked to the user's logged in Red Hat account are shown. The specified AWS account is charged for using the ROSA service. An indicator shows if the ROSA contract is enabled or not enabled for a given AWS billing account. If you select an AWS billing account that shows the Contract enabled label, on-demand consumption rates are charged only after the capacity of your pre-paid contract is consumed. AWS accounts without the Contract enabled label are charged the applicable on-demand consumption rates. The following steps past the billing AWS account selection are beyond the scope of this tutorial. Additional resources For information on using the CLI to create a cluster, see Creating a ROSA with HCP cluster using the CLI . See this learning path for more details on how to complete ROSA cluster deployment using the web console.
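For reference, a minimal sketch of the CLI flow described in the previous sections is shown below; the AWS profile name is a hypothetical example, and interactive mode prompts for the billing account:
rosa login                          # authenticate with your Red Hat account token
rosa whoami                         # confirm the linked Red Hat and AWS accounts
rosa create cluster --profile stage # start interactive cluster creation with a named AWS CLI profile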
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/tutorials/rosa-with-hcp-activation-and-account-linking
|
14.12.4. Listing Volume Information
|
14.12.4. Listing Volume Information The vol-info --pool pool-or-uuid vol-name-or-key-or-path command lists basic information about the given storage volume. --pool pool-or-uuid is the name or UUID of the storage pool the volume is in, and vol-name-or-key-or-path is the name, key, or path of the volume to return information for. The vol-list --pool pool-or-uuid --details command lists all of the volumes in the specified storage pool. This command requires --pool pool-or-uuid , which is the name or UUID of the storage pool. The --details option instructs virsh to additionally display volume type and capacity-related information where available.
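For example, with a hypothetical pool named guest_images and a volume named volume1, the commands look like this:
virsh vol-info --pool guest_images volume1
virsh vol-list --pool guest_images --details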
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-storage_volume_commands-listing_volume_information
|
2.2. GRUB
|
2.2. GRUB The GNU GRand Unified Boot loader (GRUB) is a program which enables the selection of the installed operating system or kernel to be loaded at system boot time. It also allows the user to pass arguments to the kernel. 2.2.1. GRUB and the x86 Boot Process This section discusses the specific role GRUB plays when booting an x86 system. For a look at the overall boot process, refer to Section 1.2, "A Detailed Look at the Boot Process" . GRUB loads itself into memory in the following stages: The Stage 1 or primary boot loader is read into memory by the BIOS from the MBR [4] . The primary boot loader exists on less than 512 bytes of disk space within the MBR and is capable of loading either the Stage 1.5 or Stage 2 boot loader. The Stage 1.5 boot loader is read into memory by the Stage 1 boot loader, if necessary. Some hardware requires an intermediate step to get to the Stage 2 boot loader. This is sometimes true when the /boot/ partition is above the 1024 cylinder head of the hard drive or when using LBA mode. The Stage 1.5 boot loader is found either on the /boot/ partition or on a small part of the MBR and the /boot/ partition. The Stage 2 or secondary boot loader is read into memory. The secondary boot loader displays the GRUB menu and command environment. This interface allows the user to select which kernel or operating system to boot, pass arguments to the kernel, or look at system parameters. The secondary boot loader reads the operating system or kernel as well as the contents of /boot/sysroot/ into memory. Once GRUB determines which operating system or kernel to start, it loads it into memory and transfers control of the machine to that operating system. The method used to boot Red Hat Enterprise Linux is called direct loading because the boot loader loads the operating system directly. There is no intermediary between the boot loader and the kernel. The boot process used by other operating systems may differ. For example, the Microsoft (R) Windows (R) operating system, as well as other operating systems, are loaded using chain loading . Under this method, the MBR points to the first sector of the partition holding the operating system, where it finds the files necessary to actually boot that operating system. GRUB supports both direct and chain loading boot methods, allowing it to boot almost any operating system. Warning During installation, Microsoft's DOS and Windows installation programs completely overwrite the MBR, destroying any existing boot loaders. If creating a dual-boot system, it is best to install the Microsoft operating system first. [4] For more on the system BIOS and the MBR, refer to Section 1.2.1, "The BIOS" .
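To illustrate the difference between direct loading and chain loading, the following are hypothetical /boot/grub/grub.conf entries; the device names, kernel version, and volume group are examples only:
# Direct loading: GRUB loads the kernel and initrd itself.
title Red Hat Enterprise Linux
        root (hd0,0)
        kernel /vmlinuz-2.6.9-5.EL ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.9-5.EL.img
# Chain loading: GRUB hands control to the boot sector of another partition.
title Microsoft Windows
        rootnoverify (hd0,1)
        chainloader +1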
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-grub-whatis
|
Chapter 1. Upgrading the Red Hat Developer Hub Operator
|
Chapter 1. Upgrading the Red Hat Developer Hub Operator If you use the Operator to deploy your Red Hat Developer Hub instance, then an administrator can use the OpenShift Container Platform web console to upgrade the Operator to a later version. OpenShift Container Platform is currently supported from version 4.14 to 4.17. See also the Red Hat Developer Hub Life Cycle . Prerequisites You are logged in as an administrator on the OpenShift Container Platform web console. You have installed the Red Hat Developer Hub Operator. You have configured the appropriate roles and permissions within your project to create or access an application. For more information, see the Red Hat OpenShift Container Platform documentation on Building applications . Procedure In the Administrator perspective of the OpenShift Container Platform web console, click Operators > Installed Operators . On the Installed Operators page, click Red Hat Developer Hub Operator . On the Red Hat Developer Hub Operator page, click the Subscription tab. From the Upgrade status field on the Subscription details page, click Upgrade available . Note If there is no upgrade available, the Upgrade status field value is Up to date . On the InstallPlan details page, click Preview InstallPlan > Approve . Verification The Upgrade status field value on the Subscription details page is Up to date . Additional resources Installing Red Hat Developer Hub on OpenShift Container Platform with the Operator . Installing from OperatorHub using the web console
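If you prefer to check the same information from the command line, the following oc commands show the Operator's Subscription, InstallPlan, and ClusterServiceVersion; the namespace shown is a hypothetical example, so use the project where the Operator is installed:
oc get subscription -n rhdh-operator
oc get installplan -n rhdh-operator
oc get csv -n rhdh-operator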
| null |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/upgrading_red_hat_developer_hub/proc-upgrade-rhdh-operator_title-upgrade-rhdh
|
Part II. Updating an integration
|
Part II. Updating an integration You can pause, resume, or remove your integrations in Red Hat Hybrid Cloud Console .
| null |
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_amazon_web_services_aws_data_into_cost_management/assembly-updating-int
|
Chapter 3. Translator Development
|
Chapter 3. Translator Development 3.1. Environment Set-Up To create a new custom translator: Create a new (or reuse an existing) resource adapter for the data source, to be used with this translator. Decide whether to use the Teiid archetype template to create your initial custom translator project and classes or manually create your environment. Create an ExecutionFactory by: extending the org.teiid.translator.ExecutionFactory class or extending the org.teiid.translator.jdbc.JDBCExecutionFactory class . Package the translator. Deploy your translator. Deploy a Virtual Database (VDB) that uses your translator. Execute queries via the Teiid engine. For sample translator code, refer to the teiid/connectors directory of the Red Hat JBoss Data Virtualization 6.4 Source Code ZIP file which can be downloaded from the Red Hat Customer Portal at https://access.redhat.com . To set up the environment for developing a custom translator, you can either manually configure the build environment, structure and framework classes and resources or use the Teiid Translator Archetype template to generate the initial project. To create the build environment in Red Hat JBoss Developer Studio without any Maven integration, create a Java project and add dependencies to "teiid-common-core", "teiid-api" and JEE "connector-api" jars. However, if you wish to use Maven, add these dependencies: <dependencies> <dependency> <groupId>org.jboss.teiid</groupId> <artifactId>teiid-api</artifactId> <version>${teiid-version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.jboss.teiid</groupId> <artifactId>teiid-common-core</artifactId> <version>${teiid-version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>javax.resource</groupId> <artifactId>connector-api</artifactId> <version>${version.connector.api}</version> <scope>provided</scope> </dependency> </dependencies> In this case, the ${teiid-version} property should be set to the expected version, such as 8.12.0.Final. You can find Teiid artifacts in the JBoss maven repository . The first way to create a translator project is by using JBoss Developer Studio: Procedure 3.1. Create a Project in JBDS Open the Java perspective. From the menu select File - New - Other. In the tree, expand Maven and select Maven Project. Click Next . On the "Select project name and Location" window, you can accept the defaults, so click Next . On the "Select an Archetype" window, click the Configure button. Add the remote catalog found at https://repository.jboss.org/nexus/content/repositories/releases/ then click OK to return. Uncheck Show the last version of Archetype only and enter "teiid" in the filter to see the Teiid archetypes. Select the translator-archetype 8.12.x and then click Next . Enter all the information (such as Group ID and Artifact ID) needed to generate the project. Click Finish. The other method involves using the command line. You can create a project using the Teiid archetype template. When the project is created from the template, it will contain the essential classes (in other words, the ExecutionFactory) and resources for you to begin adding your custom logic. Additionally, the maven dependencies are defined in the pom.xml file so that you can begin compiling the classes. Procedure 3.2.
Create a Project Using the Command Line Issue the following template command: This is what the instructions mean: -DarchetypeGroupId - this is the group ID for the archetype to use to generate. -DarchetypeArtifactId - this is the artifact ID for the archetype to use to generate. -DarchetypeVersion - this is the version for the archetype to use to generate. -DgroupId - this is a (user defined) group ID for the new translator project pom.xml. -DartifactId - this is a (user defined) artifact ID for the new translator project pom.xml. -Dpackage - this is the (user defined) package structure where the java and resource files will be created. -Dversion - this is the (user defined) version of the new connector project pom.xml. -Dtranslator-name - this is the (user defined) name (type) of the new translator project, used to create the java class names. -Dteiid-version - the Teiid version upon which the connector will depend. -Dtranslator-type - This specifies the identifier used for the translator in EAP configuration files. Here is a sample command: After you execute it, you will be asked to confirm the properties: Type Y (for Yes) and press Enter. Upon creation, a directory based on the artifactId will be created that will contain the project. Navigate to that directory. Execute a test build to confirm the project was created correctly: mvn clean package It should build successfully. If so, you are now ready to start adding your custom code.
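To make the next step concrete, the following is a minimal sketch of the ExecutionFactory class that the archetype generates for the MyType example above, written here as a shell heredoc so the expected path under src/main/java is visible; the class body is a bare skeleton and an assumption about the generated code, not an exact copy of it:
mkdir -p src/main/java/org/teiid/translator/myType
cat > src/main/java/org/teiid/translator/myType/MyTypeExecutionFactory.java <<'EOF'
package org.teiid.translator.myType;

import org.teiid.translator.ExecutionFactory;
import org.teiid.translator.Translator;
import org.teiid.translator.TranslatorException;

// The @Translator name becomes the translator type referenced in the VDB.
@Translator(name = "MyType", description = "Custom MyType translator")
public class MyTypeExecutionFactory extends ExecutionFactory<Object, Object> {

    @Override
    public void start() throws TranslatorException {
        super.start();
        // Initialize any translator-wide state here.
    }
}
EOF
# Rebuild to confirm the class compiles:
mvn clean package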
|
[
"<dependencies> <dependency> <groupId>org.jboss.teiid</groupId> <artifactId>teiid-api</artifactId> <version>USD{teiid-version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.jboss.teiid</groupId> <artifactId>teiid-common-core</artifactId> <version>USD{teiid-version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>javax.resource</groupId> <artifactId>connector-api</artifactId> <version>USD{version.connector.api}</version> <scope>provided</scope> </dependency> </dependencies>",
"mvn archetype:generate \\ -DarchetypeGroupId=org.jboss.teiid.arche-types \\ -DarchetypeArtifactId=translator-archetype \\ -DarchetypeVersion=8.12.0 \\ -DgroupId=USD{groupId} \\ -DartifactId=translator-USD{translator-name} \\ -Dpackage=org.teiid.translator.USD{translator-name} \\ -Dversion=USD{version} \\ -Dtranslator-name=USD{translator-name} \\ -Dtranslator-type=USD{translator-type} -Dteiid-version=USD{teiid-version}",
"mvn archetype:generate \\ -DarchetypeGroupId=org.jboss.teiid.arche-types \\ -DarchetypeArtifactId=translator-archetype \\ -DarchetypeVersion=8.12.0 \\ -DgroupId=org.jboss.teiid.connector \\ -DartifactId=translator-myType -Dpackage=org.teiid.translator.myType \\ -Dversion=0.0.1-SNAPSHOT \\ -Dtranslator-name=MyType \\ -Dtranslator-type=MyType -Dteiid-version=8.12.0.Final",
"Confirm properties configuration: groupId: org.jboss.teiid.connector artifactId: translator-myType version: 0.0.1-SNAPSHOT package: org.teiid.translator.myType teiid-version: 8.12.0.Final translator-name: MyType Y: :"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/chap-translator_development
|
Chapter 26. Adding the IdM CA service to an IdM server in a deployment without a CA
|
Chapter 26. Adding the IdM CA service to an IdM server in a deployment without a CA If you previously installed an Identity Management (IdM) domain without the certificate authority (CA) component, you can add the IdM CA service to the domain by using the ipa-ca-install command. Depending on your requirements, you can select one of the following options: Note For details on the supported CA configurations, see Planning your CA services . 26.1. Installing the first IdM CA as the root CA into an existing IdM domain If you previously installed Identity Management (IdM) without the certificate authority (CA) component, you can install the CA on an IdM server subsequently. Follow this procedure to install, on the idmserver server, an IdM CA that is not subordinate to any external root CA. Prerequisites You have root permissions on idmserver . The IdM server is installed on idmserver . Your IdM deployment has no CA installed. You know the IdM Directory Manager password. Procedure On idmserver , install the IdM Certificate Server CA: On each IdM host in the topology, run the ipa-certupdate utility to update the host with the information about the new certificate from the IdM LDAP. Important If you do not run ipa-certupdate after generating the IdM CA certificate, the certificate will not be distributed to the other IdM machines. 26.2. Installing the first IdM CA with an external CA as the root CA into an existing IdM domain If you previously installed Identity Management (IdM) without the certificate authority (CA) component, you can install the CA on an IdM server subsequently. Follow this procedure to install, on the idmserver server, an IdM CA that is subordinate to an external root CA, with zero or several intermediate CAs in between. Prerequisites You have root permissions on idmserver . The IdM server is installed on idmserver . Your IdM deployment has no CA installed. You know the IdM Directory Manager password. Procedure Start the installation: Wait till the command line informs you that a certificate signing request (CSR) has been saved. Submit the CSR to the external CA. Copy the issued certificate to the IdM server. Continue the installation by adding the certificates and full path to the external CA files to ipa-ca-install : On each IdM host in the topology, run the ipa-certupdate utility to update the host with the information about the new certificate from the IdM LDAP. Important Failing to run ipa-certupdate after generating the IdM CA certificate means that the certificate will not be distributed to the other IdM machines.
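For reference, the post-installation step looks like this when run as root on each IdM server and client in the topology:
# Refresh the local certificate databases with the new CA certificate from the IdM LDAP:
ipa-certupdate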
|
[
"[root@idmserver ~] ipa-ca-install",
"[root@idmserver ~] ipa-ca-install --external-ca",
"ipa-ca-install --external-cert-file=/root/master.crt --external-cert-file=/root/ca.crt"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/adding-the-idm-ca-service-to-an-idm-server-in-a-deployment-without-a-ca_installing-identity-management
|
8.4.6. Creating a Yum Repository
|
8.4.6. Creating a Yum Repository To set up a Yum repository, follow these steps: Install the createrepo package. To do so, type the following at a shell prompt as root : Copy all packages that you want to have in your repository into one directory, such as /mnt/local_repo/ . Change to this directory and run the following command: This creates the necessary metadata for your Yum repository, as well as the sqlite database for speeding up yum operations. Important Compared to Red Hat Enterprise Linux 5, RPM packages for Red Hat Enterprise Linux 6 are compressed with the XZ lossless data compression format and can be signed with newer hash algorithms like SHA-256. Consequently, it is not recommended to use the createrepo command on Red Hat Enterprise Linux 5 to create the package metadata for Red Hat Enterprise Linux 6.
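To let yum clients consume the new repository, a repository definition similar to the following can be added; the repository ID, name, and baseurl follow the /mnt/local_repo example above and should be adjusted to your layout:
cat > /etc/yum.repos.d/local.repo <<'EOF'
[local_repo]
name=Local repository
baseurl=file:///mnt/local_repo
enabled=1
gpgcheck=0
EOF
# Refresh the metadata and confirm the repository is visible:
yum clean all
yum repolist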
|
[
"install createrepo",
"createrepo --database /mnt/local_repo"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Yum_Repository
|
6.5. Setting the Database Cache Size
|
6.5. Setting the Database Cache Size The database cache contains the Berkeley database index files for the database, meaning all of the *.db and other files used for attribute indexing by the database. This value is passed to the Berkeley DB API function set_cachesize() . This cache size has less of an impact on Directory Server performance than the entry cache size, but if there is available RAM after the entry cache size is set, increase the amount of memory allocated to the database cache. The operating system also has a file system cache which may compete with the database cache for RAM usage. Refer to the operating system documentation to find information on file system cache settings and monitoring the file system cache. Note Instead of manually setting the entry cache size Red Hat recommends the auto-sizing feature for optimized settings based on the hardware resources. For details, see Section 6.1.1, "Manually Re-enabling the Database and Entry Cache Auto-sizing" . 6.5.1. Manually Setting the Database Cache Size Using the Command Line To manually set the database cache size using the command line: Disable automatic cache tuning: Manually set the database cache size: This command sets the database cache to 256 megabytes. Restart the Directory Service instance: 6.5.2. Manually Setting the Database Cache Size Using the Web Console To manually set the database cache size using the Web Console: Open the Directory Server user interface in the web console. For details, see Logging Into Directory Server Using the Web Console section in the Red Hat Directory Server Administration Guide . Select the instance. On the Database tab, select Global Database Configuration . Disable Automatic Cache Tuning . Click Save Configuration . Set the Database Cache Size (bytes) field to the database cache size. Click Save Configuration . Click the Actions button, and select Restart Instance . 6.5.3. Storing the Database Cache on a RAM Disk If your system running the Directory Server instance has enough free RAM, you can optionally store the database cache on a RAM disk for further performance improvements: Create a directory for the database cache and metadata on the RAM disk: Set the following permissions on the directory: Stop the Directory Server instance: Edit the /etc/dirsrv/slapd- instance_name /dse.ldif file and set the new path in the nsslapd-db-home-directory attribute in the cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config entry: If the nsslapd-db-home-directory attribute does not exist, add it with the new value to the cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config entry. Start the Directory Server instance: Note When the database cache is stored on a RAM disk, Directory Server needs to recreate it after each reboot. As a consequence, the service start and initial operations are slower until the cache is recreated.
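To confirm the values after the restart, the backend configuration can be read back; this sketch assumes the get subcommand is available in your dsconf version and reuses the server URL from the examples in this section:
dsconf -D "cn=Directory Manager" ldap://server.example.com backend config get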
|
[
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend config set --cache-autosize=0",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend config set --dbcachesize=268435456",
"dsctl instance_name restart",
"mkdir -p /dev/shm/slapd- instance_name /",
"chown dirsrv:dirsrv /dev/shm/slapd- instance_name / chmod 770 /dev/shm/slapd- instance_name /",
"systemctl stop dirsrv@ instance_name",
"dn: cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config nsslapd-db-home-directory: /dev/shm/slapd- instance_name /",
"systemctl start dirsrv@ instance_name"
] |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/tuning-db-cache
|
Chapter 15. Troubleshooting
|
Chapter 15. Troubleshooting There are cases where the Assisted Installer cannot begin the installation or the cluster fails to install properly. In these events, it is helpful to understand the likely failure modes as well as how to troubleshoot the failure. 15.1. Prerequisites You have created an infrastructure environment using the API or have created a cluster using the UI. 15.2. Troubleshooting discovery ISO issues The Assisted Installer uses an ISO image to run an agent that registers the host to the cluster and performs hardware and network validations before attempting to install OpenShift. You can follow these procedures to troubleshoot problems related to the host discovery. Once you start the host with the discovery ISO image, the Assisted Installer discovers the host and presents it in the Assisted Service UI. See Configuring the discovery image for additional details. 15.3. Minimal ISO Image The minimal ISO image should be used when bandwidth over the virtual media connection is limited. It includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The resulting ISO image is about 100MB in size compared to 1GB for the full ISO image. 15.3.1. Troubleshooting minimal ISO boot failures If your environment requires static network configuration to access the Assisted Installer service, any issues with that configuration may prevent the Minimal ISO from booting properly. If the boot screen shows that the host has failed to download the root file system image, verify that any additional network configuration is correct. Switching to a Full ISO image will also allow for easier debugging. Example rootfs download failure 15.4. Verify the discovery agent is running Prerequisites You have created an Infrastructure Environment by using the API or have created a cluster by using the UI. You booted a host with the Infrastructure Environment discovery ISO and the host failed to register. You have ssh access to the host. You provided an SSH public key in the "Add hosts" dialog before generating the Discovery ISO so that you can SSH into your machine without a password. Procedure Verify that your host machine is powered on. If you selected DHCP networking , check that the DHCP server is enabled. If you selected Static IP, bridges and bonds networking, check that your configurations are correct. Verify that you can access your host machine using SSH, a console such as the BMC, or a virtual machine console: USD ssh core@<host_ip_address> You can specify private key file using the -i parameter if it isn't stored in the default directory. USD ssh -i <ssh_private_key_file> core@<host_ip_address> If you fail to ssh to the host, the host failed during boot or it failed to configure the network. Upon login you should see this message: Example login If you are not seeing this message it means that the host didn't boot with the assisted-installer ISO. Make sure you configured the boot order properly (The host should boot once from the live-ISO). Check the agent service logs: USD sudo journalctl -u agent.service In the following example, the errors indicate there is a network issue: Example agent service log screenshot of agent service log If there is an error pulling the agent image, check the proxy settings. Verify that the host is connected to the network. You can use nmcli to get additional information about your network configuration. 15.5. 
Verify the agent can access the assisted-service Prerequisites You have created an Infrastructure Environment by using the API or have created a cluster by using the UI. You booted a host with the Infrastructure Environment discovery ISO and the host failed to register. You verified the discovery agent is running. Procedure Check the agent logs to verify the agent can access the Assisted Service: USD sudo journalctl TAG=agent The errors in the following example indicate that the agent failed to access the Assisted Service. Example agent log Check the proxy settings you configured for the cluster. If configured, the proxy must allow access to the Assisted Service URL. 15.6. Correcting a host's boot order Once the installation that runs as part of the Discovery Image completes, the Assisted Installer reboots the host. The host must boot from its installation disk to continue forming the cluster. If you have not correctly configured the host's boot order, it will boot from another disk instead, interrupting the installation. If the host boots the discovery image again, the Assisted Installer will immediately detect this event and set the host's status to Installing Pending User Action . Alternatively, if the Assisted Installer does not detect that the host has booted the correct disk within the allotted time, it will also set this host status. Procedure Reboot the host and set its boot order to boot from the installation disk. If you didn't select an installation disk, the Assisted Installer selected one for you. To view the selected installation disk, click to expand the host's information in the host inventory, and check which disk has the "Installation disk" role. 15.7. Rectifying partially-successful installations There are cases where the Assisted Installer declares an installation to be successful even though it encountered errors: If you requested to install OLM operators and one or more failed to install, log into the cluster's console to remediate the failures. If you requested to install more than two worker nodes and at least one failed to install, but at least two succeeded, add the failed workers to the installed cluster.
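A few quick checks from the discovery host can narrow the problem down. This is a minimal sketch, assuming the host booted from the discovery ISO and uses the hosted Assisted Installer service; the service URL shown is an assumption, so substitute your own endpoint if you run an on-premises instance:

# Confirm that interfaces and connections came up as expected
nmcli device status
nmcli connection show

# Scan the agent logs for obvious failures
sudo journalctl TAG=agent | grep -iE 'error|failed'

# Check basic reachability of the Assisted Service endpoint (any HTTP status code,
# even 401, proves the host can reach the service through any configured proxy)
curl -sS -o /dev/null -w '%{http_code}\n' https://api.openshift.com/api/assisted-install/v2/infra-envs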
|
[
"ssh core@<host_ip_address>",
"ssh -i <ssh_private_key_file> core@<host_ip_address>",
"sudo journalctl -u agent.service",
"sudo journalctl TAG=agent"
] |
https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2023/html/assisted_installer_for_openshift_container_platform/assembly_troubleshooting
|
Chapter 12. Services and Daemons
|
Chapter 12. Services and Daemons Maintaining security on your system is extremely important, and one approach for this task is to manage access to system services carefully. Your system may need to provide open access to particular services (for example, httpd if you are running a web server). However, if you do not need to provide a service, you should turn it off to minimize your exposure to possible bug exploits. This chapter explains the concept of runlevels, and describes how to set the default one. It also covers the setup of the services to be run in each of these runlevels, and provides information on how to start, stop, and restart the services on the command line using the service command. Important When you allow access for new services, always remember that both the firewall and SELinux need to be configured as well. One of the most common mistakes committed when configuring a new service is neglecting to implement the necessary firewall configuration and SELinux policies to allow access for it. For more information, see the Red Hat Enterprise Linux 6 Security Guide . 12.1. Configuring the Default Runlevel A runlevel is a state, or mode , defined by services that are meant to be run when this runlevel is selected. Seven numbered runlevels exist (indexed from 0 ): Table 12.1. Runlevels in Red Hat Enterprise Linux Runlevel Description 0 Used to halt the system. This runlevel is reserved and cannot be changed. 1 Used to run in a single-user mode. This runlevel is reserved and cannot be changed. 2 Not used by default. You are free to define it yourself. 3 Used to run in a full multi-user mode with a command-line user interface. 4 Not used by default. You are free to define it yourself. 5 Used to run in a full multi-user mode with a graphical user interface. 6 Used to reboot the system. This runlevel is reserved and cannot be changed. To check in which runlevel you are operating, type the following: The runlevel command displays the previous and current runlevel. In this case, it is number 5 , which means the system is running in a full multi-user mode with a graphical user interface. The default runlevel can be changed by modifying the /etc/inittab file, which contains a line near the end of the file similar to the following: To do so, edit this file as root and change the number on this line to the desired value. The change will take effect the next time you reboot the system.
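For example, to make the system boot into the full multi-user text mode by default and switch to it immediately, you could run the following as root; this is a sketch, so adjust the runlevel number to your needs:

# Change the initdefault entry in /etc/inittab from the current runlevel to runlevel 3
sed -i 's/^id:[0-6]:initdefault:/id:3:initdefault:/' /etc/inittab

# Switch to runlevel 3 right away without rebooting
telinit 3

# Confirm the previous and current runlevel
runlevel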
|
[
"~]USD runlevel N 5",
"id:5:initdefault:"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-services_and_daemons
|
Chapter 341. Swagger Java Component
|
Chapter 341. Swagger Java Component Available as of Camel 2.16 The Rest DSL can be integrated with the camel-swagger-java module which is used for exposing the REST services and their APIs using Swagger . Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-swagger-java</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> The camel-swagger-java module can be used from the REST components (without the need for servlet) 341.1. Using Swagger in rest-dsl You can enable the swagger api from the rest-dsl by configuring the apiContextPath dsl as shown below: public class UserRouteBuilder extends RouteBuilder { @Override public void configure() throws Exception { // configure we want to use servlet as the component for the rest DSL // and we enable json binding mode restConfiguration().component("netty4-http").bindingMode(RestBindingMode.json) // and output using pretty print .dataFormatProperty("prettyPrint", "true") // setup context path and port number that netty will use .contextPath("/").port(8080) // add swagger api-doc out of the box .apiContextPath("/api-doc") .apiProperty("api.title", "User API").apiProperty("api.version", "1.2.3") // and enable CORS .apiProperty("cors", "true"); // this user REST service is json only rest("/user").description("User rest service") .consumes("application/json").produces("application/json") .get("/{id}").description("Find user by id").outType(User.class) .param().name("id").type(path).description("The id of the user to get").dataType("int").endParam() .to("bean:userService?method=getUser(USD{header.id})") .put().description("Updates or create a user").type(User.class) .param().name("body").type(body).description("The user to update or create").endParam() .to("bean:userService?method=updateUser") .get("/findAll").description("Find all users").outTypeList(User.class) .to("bean:userService?method=listUsers"); } } 341.2. Options The swagger module can be configured using the following options. To configure using a servlet you use the init-param as shown above. When configuring directly in the rest-dsl, you use the appropriate method, such as enableCORS , host,contextPath , dsl. The options with api.xxx is configured using apiProperty dsl. Option Type Description cors Boolean Whether to enable CORS. Notice this only enables CORS for the api browser, and not the actual access to the REST services. Is default false. swagger.version String Swagger spec version. The default 2.0. host String To setup the hostname. If not configured camel-swagger-java will calculate the name as localhost based. schemas String The protocol schemes to use. Multiple values can be separated by comma such as "http,https". The default value is "http". This option is deprecated from Camel 2.17 onwards due it should have been named schemes. schemes String Camel 2.17: The protocol schemes to use. Multiple values can be separated by comma such as "http,https". The default value is "http". base.path String Required : To setup the base path where the REST services is available. The path is relative (eg do not start with http/https) and camel-swagger-java will calculate the absolute base path at runtime, which will be protocol://host:port/context-path/base.path api.path String To setup the path where the API is available (eg /api-docs). 
The path is relative (eg do not start with http/https) and camel-swagger-java will calculate the absolute base path at runtime, which will be protocol://host:port/context-path/api.path So using relative paths is much easier. See above for an example. api.version String The version of the api. Is default 0.0.0. api.title String The title of the application. api.description String A short description of the application. api.termsOfService String A URL to the Terms of Service of the API. api.contact.name String Name of person or organization to contact api.contact.email String An email to be used for API-related correspondence. api.contact.url String A URL to a website for more contact information. api.license.name String The license name used for the API. api.license.url String A URL to the license used for the API. apiContextIdListing boolean Whether to allow listing all the CamelContext names in the JVM that has REST services. When enabled then the root path of the api-doc will list all the contexts. When disabled then no context ids is listed and the root path of the api-doc lists the current CamelContext. Is default false. apiContextIdPattern String A pattern that allows to filter which CamelContext names is shown in the context listing. The pattern is using regular expression and * as wildcard. Its the same pattern matching as used by Intercept 341.3. Adding Security Definitions in API doc Available as of Camel 2.22.0 The Rest DSL now supports declaring Swagger securityDefinitions in the generated API document. For example as shown below: rest("/user").tag("dude").description("User rest service") // setup security definitions .securityDefinitions() .oauth2("petstore_auth").authorizationUrl("http://petstore.swagger.io/oauth/dialog").end() .apiKey("api_key").withHeader("myHeader").end() .end() .consumes("application/json").produces("application/json") Here we have setup two security definitions OAuth2 - with implicit authorization with the provided url Api Key - using an api key that comes from HTTP header named myHeader Then you need to specify on the rest operations which security to use by referring to their key (petstore_auth or api_key). .get("/{id}/{date}").description("Find user by id and date").outType(User.class) .security("api_key") ... .put().description("Updates or create a user").type(User.class) .security("petstore_auth", "write:pets,read:pets") Here the get operation is using the Api Key security and the put operation is using OAuth security with permitted scopes of read and write pets. 341.4. ContextIdListing enabled When contextIdListing is enabled then its detecting all the running CamelContexts in the same JVM. These contexts are listed in the root path, eg /api-docs as a simple list of names in json format. To access the swagger documentation then the context-path must be appended with the Camel context id, such as api-docs/myCamel . The option apiContextIdPattern can be used to filter the names in this list. 341.5. JSon or Yaml Available as of Camel 2.17 The camel-swagger-java module supports both JSon and Yaml out of the box. You can specify in the request url what you want returned by using /swagger.json or /swagger.yaml for either one. If none is specified then the HTTP Accept header is used to detect if json or yaml can be accepted. If either both is accepted or none was set as accepted then json is returned as the default format. 341.6. 
Examples In the Apache Camel distribution we ship the camel-example-swagger-cdi and camel-example-swagger-java examples, which demonstrate how to use this Swagger component.
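As a further illustration of the options listed above, the following rest-dsl fragment (placed inside a RouteBuilder's configure() method) sets several api.xxx properties through apiProperty; the titles, versions, and contact details are placeholders rather than values required by camel-swagger-java:

restConfiguration().component("netty4-http").bindingMode(RestBindingMode.json)
    .contextPath("/").port(8080)
    // expose the swagger api-doc
    .apiContextPath("/api-doc")
    // api.xxx options are configured with apiProperty; these values are examples only
    .apiProperty("api.title", "Order API")
    .apiProperty("api.version", "1.0.0")
    .apiProperty("api.description", "Example API used to illustrate the swagger options")
    .apiProperty("api.contact.name", "API Team")
    .apiProperty("cors", "true");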
|
[
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-swagger-java</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"public class UserRouteBuilder extends RouteBuilder { @Override public void configure() throws Exception { // configure we want to use servlet as the component for the rest DSL // and we enable json binding mode restConfiguration().component(\"netty4-http\").bindingMode(RestBindingMode.json) // and output using pretty print .dataFormatProperty(\"prettyPrint\", \"true\") // setup context path and port number that netty will use .contextPath(\"/\").port(8080) // add swagger api-doc out of the box .apiContextPath(\"/api-doc\") .apiProperty(\"api.title\", \"User API\").apiProperty(\"api.version\", \"1.2.3\") // and enable CORS .apiProperty(\"cors\", \"true\"); // this user REST service is json only rest(\"/user\").description(\"User rest service\") .consumes(\"application/json\").produces(\"application/json\") .get(\"/{id}\").description(\"Find user by id\").outType(User.class) .param().name(\"id\").type(path).description(\"The id of the user to get\").dataType(\"int\").endParam() .to(\"bean:userService?method=getUser(USD{header.id})\") .put().description(\"Updates or create a user\").type(User.class) .param().name(\"body\").type(body).description(\"The user to update or create\").endParam() .to(\"bean:userService?method=updateUser\") .get(\"/findAll\").description(\"Find all users\").outTypeList(User.class) .to(\"bean:userService?method=listUsers\"); } }",
"rest(\"/user\").tag(\"dude\").description(\"User rest service\") // setup security definitions .securityDefinitions() .oauth2(\"petstore_auth\").authorizationUrl(\"http://petstore.swagger.io/oauth/dialog\").end() .apiKey(\"api_key\").withHeader(\"myHeader\").end() .end() .consumes(\"application/json\").produces(\"application/json\")",
".get(\"/{id}/{date}\").description(\"Find user by id and date\").outType(User.class) .security(\"api_key\") .put().description(\"Updates or create a user\").type(User.class) .security(\"petstore_auth\", \"write:pets,read:pets\")"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/swagger_java_component
|
Chapter 9. Configuring your Logging deployment
|
Chapter 9. Configuring your Logging deployment 9.1. Configuring CPU and memory limits for logging components You can configure both the CPU and memory limits for each of the logging components as needed. 9.1.1. Configuring CPU and memory limits The logging components allow for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd 1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value. 2 3 Specify the CPU and memory limits and requests for the log visualizer as needed. 4 Specify the CPU and memory limits and requests for the log collector as needed. 9.2. Configuring systemd-journald and Fluentd Because Fluentd reads from the journal, and the journal default settings are very low, journal entries can be lost because the journal cannot keep up with the logging rate from system services. We recommend setting RateLimitIntervalSec=30s and RateLimitBurst=10000 (or even higher if necessary) to prevent the journal from losing entries. 9.2.1. Configuring systemd-journald for OpenShift Logging As you scale up your project, the default logging environment might need some adjustments. For example, if you are missing logs, you might have to increase the rate limits for journald. You can adjust the number of messages to retain for a specified period of time to ensure that OpenShift Logging does not use excessive resources without dropping logs. You can also determine if you want the logs compressed, how long to retain logs, how or if the logs are stored, and other settings. Procedure Create a Butane config file, 40-worker-custom-journald.bu , that includes an /etc/systemd/journald.conf file with the required settings. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.14.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: "worker" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10 1 Set the permissions for the journald.conf file. It is recommended to set 0644 permissions. 2 Specify whether you want logs compressed before they are written to the file system. Specify yes to compress the message or no to not compress. The default is yes . 3 Configure whether to forward log messages. Defaults to no for each. Specify: ForwardToConsole to forward logs to the system console. ForwardToKMsg to forward logs to the kernel log buffer. 
ForwardToSyslog to forward to a syslog daemon. ForwardToWall to forward messages as wall messages to all logged-in users. 4 Specify the maximum time to store journal entries. Enter a number to specify seconds. Or include a unit: "year", "month", "week", "day", "h" or "m". Enter 0 to disable. The default is 1month . 5 Configure rate limiting. If more logs are received than what is specified in RateLimitBurst during the time interval defined by RateLimitIntervalSec , all further messages within the interval are dropped until the interval is over. It is recommended to set RateLimitIntervalSec=30s and RateLimitBurst=10000 , which are the defaults. 6 Specify how logs are stored. The default is persistent : volatile to store logs in memory in /run/log/journal/ . These logs are lost after rebooting. persistent to store logs to disk in /var/log/journal/ . systemd creates the directory if it does not exist. auto to store logs in /var/log/journal/ if the directory exists. If it does not exist, systemd temporarily stores logs in /run/systemd/journal . none to not store logs. systemd drops all logs. 7 Specify the timeout before synchronizing journal files to disk for ERR , WARNING , NOTICE , INFO , and DEBUG logs. systemd immediately syncs after receiving a CRIT , ALERT , or EMERG log. The default is 1s . 8 Specify the maximum size the journal can use. The default is 8G . 9 Specify how much disk space systemd must leave free. The default is 20% . 10 Specify the maximum size for individual journal files stored persistently in /var/log/journal . The default is 10M . Note If you are removing the rate limit, you might see increased CPU utilization on the system logging daemons as it processes any messages that would have previously been throttled. For more information on systemd settings, see https://www.freedesktop.org/software/systemd/man/journald.conf.html . The default settings listed on that page might not apply to OpenShift Container Platform. Use Butane to generate a MachineConfig object file, 40-worker-custom-journald.yaml , containing the configuration to be delivered to the nodes: USD butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml Apply the machine config. For example: USD oc apply -f 40-worker-custom-journald.yaml The controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version. Monitor the status of the rollout of the new rendered configuration to each node: USD oc describe machineconfigpool/worker Example output Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool ... Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e
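After the worker pool reports that it is updated, you can spot-check a node to confirm that the rendered configuration contains the expected journald settings. A minimal check, where <node_name> is a placeholder for one of your worker nodes:

oc get machineconfigpool worker -o jsonpath='{.status.conditions[?(@.type=="Updated")].status}'
oc debug node/<node_name> -- chroot /host cat /etc/systemd/journald.conf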
|
[
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd",
"variant: openshift version: 4.14.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: \"worker\" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10",
"butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml",
"oc apply -f 40-worker-custom-journald.yaml",
"oc describe machineconfigpool/worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/logging/configuring-your-logging-deployment
|
function::d_name
|
function::d_name Name function::d_name - get the dirent name Synopsis Arguments dentry Pointer to dentry. Description Returns the dirent name (path basename).
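For example, a small script that prints the name of each file created through the VFS layer might look like the following; it assumes a kernel in which vfs_create still takes a dentry argument named dentry, so treat it as a sketch rather than a portable probe:

# print the basename of every file created through vfs_create
probe kernel.function("vfs_create") {
  printf("created: %s\n", d_name($dentry))
}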
|
[
"d_name:string(dentry:long)"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-d-name
|
4.7. Installing Extensions
|
4.7. Installing Extensions Once you have developed an extension to the JDBC translator, you must install it into the Red Hat JBoss Data Virtualization server. The process of packaging or deploying the extended JDBC translators is exactly the same as for any other translator. Since the RDBMS is already accessible through its JDBC driver, there is no need to develop a resource adapter for this source, as JBoss EAP provides a wrapper JCA connector (DataSource) for any JDBC driver.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/installing_extensions
|
probe::kprocess.create
|
probe::kprocess.create Name probe::kprocess.create - Fires whenever a new process or thread is successfully created Synopsis kprocess.create Values new_tid The TID of the newly created task new_pid The PID of the newly created process Context Parent of the created process. Description Fires whenever a new process is successfully created, either as a result of fork (or one of its syscall variants), or a new kernel thread.
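A minimal example script that logs every new process or thread, using the values listed above; the output format is arbitrary:

# report each fork/clone along with the parent that performed it
probe kprocess.create {
  printf("%s (pid %d) created pid=%d tid=%d\n", execname(), pid(), new_pid, new_tid)
}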
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-kprocess-create
|
14.3. DHCP Relay Agent
|
14.3. DHCP Relay Agent The DHCP Relay Agent ( dhcrelay ) enables the relay of DHCP and BOOTP requests from a subnet with no DHCP server on it to one or more DHCP servers on other subnets. When a DHCP client requests information, the DHCP Relay Agent forwards the request to the list of DHCP servers specified when the DHCP Relay Agent is started. When a DHCP server returns a reply, the reply is broadcast or unicast on the network that sent the original request. The DHCP Relay Agent for IPv4 , dhcrelay , listens for DHCPv4 and BOOTP requests on all interfaces unless the interfaces are specified in /etc/sysconfig/dhcrelay with the INTERFACES directive. See Section 14.3.1, "Configure dhcrelay as a DHCPv4 and BOOTP relay agent" . The DHCP Relay Agent for IPv6 , dhcrelay6 , does not have this default behavior, and the interfaces on which to listen for DHCPv6 requests must be specified. See Section 14.3.2, "Configure dhcrelay as a DHCPv6 relay agent" . dhcrelay can either be run as a DHCPv4 and BOOTP relay agent (by default) or as a DHCPv6 relay agent (with the -6 argument). To see the usage message, issue the command dhcrelay -h . 14.3.1. Configure dhcrelay as a DHCPv4 and BOOTP relay agent To run dhcrelay in DHCPv4 and BOOTP mode, specify the servers to which the requests should be forwarded. Copy and then edit the dhcrelay.service file as the root user: Edit the ExecStart option under the [Service] section and add one or more server IPv4 addresses to the end of the line, for example: ExecStart=/usr/sbin/dhcrelay -d --no-pid 192.168.1.1 If you also want to specify interfaces where the DHCP Relay Agent listens for DHCP requests, add them to the ExecStart option with the -i argument (otherwise it will listen on all interfaces), for example: ExecStart=/usr/sbin/dhcrelay -d --no-pid 192.168.1.1 -i em1 For other options, see the dhcrelay(8) man page. To activate the changes made, as the root user, restart the service: 14.3.2. Configure dhcrelay as a DHCPv6 relay agent To run dhcrelay in DHCPv6 mode, add the -6 argument and specify the "lower interface" (on which queries will be received from clients or from other relay agents) and the "upper interface" (to which queries from clients and other relay agents should be forwarded). Copy dhcrelay.service to dhcrelay6.service and edit it as the root user: Edit the ExecStart option under the [Service] section, add the -6 argument, and add the "lower interface" and "upper interface", for example: ExecStart=/usr/sbin/dhcrelay -d --no-pid -6 -l em1 -u em2 For other options, see the dhcrelay(8) man page. To activate the changes made, as the root user, restart the service:
|
[
"~]# cp /lib/systemd/system/dhcrelay.service /etc/systemd/system/ ~]# vi /etc/systemd/system/dhcrelay.service",
"~]# systemctl --system daemon-reload ~]# systemctl restart dhcrelay",
"~]# cp /lib/systemd/system/dhcrelay.service /etc/systemd/system/dhcrelay6.service ~]# vi /etc/systemd/system/dhcrelay6.service",
"~]# systemctl --system daemon-reload ~]# systemctl restart dhcrelay6"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/dhcp-relay-agent
|
Operating
|
Operating Red Hat Advanced Cluster Security for Kubernetes 4.5 Operating Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team
|
[
"apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1",
"oc create -f sscan.yaml -n openshift-compliance",
"scansettingbinding.compliance.openshift.io/cis-compliance created",
"oc get ValidatingWebhookConfiguration 1",
"NAME CREATED AT stackrox 2019-09-24T06:07:34Z",
"unzip -d sensor sensor-<cluster_name>.zip",
"./sensor/sensor.sh",
"oc get pod -n stackrox -w",
"kubectl get pod -n stackrox -w",
"oc delete ValidatingWebhookConfiguration/stackrox",
"kubectl delete ValidatingWebhookConfiguration/stackrox",
"oc delete ValidatingWebhookConfiguration/stackrox",
"kubectl delete ValidatingWebhookConfiguration/stackrox",
"oc -n stackrox scale deploy/admission-control --replicas=<number_of_replicas> 1",
"oc create -f \"<generated_file>.yml\" 1",
"oc delete -f \"<generated_file>.yml\" 1",
"oc get ns -o jsonpath='{.items[*].metadata.name}' | xargs -n 1 oc delete networkpolicies -l 'network-policy-generator.stackrox.io/generated=true' -n 1",
"oc -n stackrox set env deploy/central \\ 1 ROX_NETWORK_BASELINE_OBSERVATION_PERIOD=<value> 2",
"oc -n stackrox set env deploy/central \\ 1 ROX_BASELINE_GENERATION_DURATION=<value> 2",
"(http(s)?://)?<svc>(.<ns>(.svc.cluster.local)?)?(:<portNum>)? 1",
"roxctl netpol generate -h",
"roxctl netpol generate <folder_path> [flags] 1",
"roxctl netpol connectivity map <folder_path> [flags] 1",
"dot -Tsvg connlist_output.dot > connlist_output_graph.svg",
"roxctl netpol connectivity diff --dir1= <folder_path_1> --dir2= <folder_path_2> [flags] 1",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: creationTimestamp: null name: backend-netpol spec: ingress: - from: - podSelector: matchLabels: app: frontend ports: - port: 9090 protocol: TCP podSelector: matchLabels: app: backendservice policyTypes: - Ingress - Egress status: {}",
"diff netpol-diff-example-minimal/netpols.yaml netpol-analysis-example-minimal/netpols.yaml",
"12c12 < - ports: --- > ports:",
"roxctl netpol connectivity diff --dir1=roxctl/netpol/connectivity/diff/testdata/netpol-analysis-example-minimal/ --dir2=roxctl/netpol/connectivity/diff/testdata/netpol-diff-example-minimal",
"Connectivity diff: diff-type: changed, source: default/frontend[Deployment], destination: default/backend[Deployment], dir1: TCP 9090, dir2: TCP 9090,UDP 53 diff-type: added, source: 0.0.0.0-255.255.255.255, destination: default/backend[Deployment], dir1: No Connections, dir2: TCP 9090",
"oc -n stackrox set env deploy/central ROX_SCANNER_VULN_UPDATE_INTERVAL=<value> 1",
"oc -n stackrox set env deploy/scanner \\ 1 ROX_LANGUAGE_VULNS=false 2",
"{\"result\": {\"deployment\": {...}, \"images\": [...]}} {\"result\": {\"deployment\": {...}, \"images\": [...]}}",
"curl -H \"Authorization: Bearer USDROX_API_TOKEN\" USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads",
"curl -H \"Authorization: Bearer USDROX_API_TOKEN\" USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads?timeout=60",
"curl -H \"Authorization: Bearer USDROX_API_TOKEN\" USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads?query=Deployment%3Aapp%2BNamespace%3Adefault",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\"disabled\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\":9091\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"volumes\":[{\"name\":\"tmp-volume\",\"emptyDir\":{}},{\"name\":\"cache-volume\",\"emptyDir\":{\"sizeLimit\":\"200Mi\"}}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"command\":[\"/scanner\",\"--nodeinventory\",\"--config=\",\"\"],\"env\":[{\"name\":\"ROX_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"apiVersion\":\"v1\",\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"ROX_CLAIR_V4_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_COMPLIANCE_OPERATOR_INTEGRATION\",\"value\":\"true\"},{\"name\":\"ROX_CSV_EXPORT\",\"value\":\"false\"},{\"name\":\"ROX_DECLARATIVE_CONFIGURATION\",\"value\":\"false\"},{\"name\":\"ROX_INTEGRATIONS_AS_CONFIG\",\"value\":\"false\"},{\"name\":\"ROX_NETPOL_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_DETECTION_BASELINE_SIMULATION\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_GRAPH_PATTERNFLY\",\"value\":\"true\"},{\"name\":\"ROX_NODE_SCANNING_CACHE_TIME\",\"value\":\"3h36m\"},{\"name\":\"ROX_NODE_SCANNING_INITIAL_BACKOFF\",\"value\":\"30s\"},{\"name\":\"ROX_NODE_SCANNING_MAX_BACKOFF\",\"value\":\"5m\"},{\"name\":\"ROX_PROCESSES_LISTENING_ON_PORT\",\"value\":\"false\"},{\"name\":\"ROX_QUAY_ROBOT_ACCOUNTS\",\"value\":\"true\"},{\"name\":\"ROX_ROXCTL_NETPOL_GENERATE\",\"value\":\"true\"},{\"name\":\"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS\",\"value\":\"false\"},{\"name\":\"ROX_SYSLOG_EXTRA_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_SYSTEM_HEALTH_PF\",\"value\":\"false\"},{\"name\":\"ROX_VULN_MGMT_WORKLOAD_CVES\",\"value\":\"false\"}],\"image\":\"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.5.6\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"node-inventory\",\"ports\":[{\"containerPort\":8444,\"name\":\"grpc\",\"protocol\":\"TCP\"}],\"volumeMounts\":[{\"mountPath\":\"/host\",\"name\":\"host-root-ro\",\"readOnly\":true},{\"mountPath\":\"/tmp/\",\"name\":\"tmp-volume\"},{\"mountPath\":\"/cache\",\"name\":\"cache-volume\"}]}]}}}}'",
"Failed to create collections for scope _scope-name_: Unsupported operator NOT_IN in scope's label selectors. Only operator 'IN' is supported. The scope is attached to the following report configurations: [list of report configs]; Please manually create an equivalent collection and edit the listed report configurations to use this collection. Note that reports will not function correctly until a collection is attached.",
"roxctl -e <hostname>:<port_number> central userpki create -c <ca_certificate_file> -r <default_role_name> <provider_name>",
"oc -n stackrox set env deploy/central ROX_ENABLE_OPENSHIFT_AUTH=true",
"CENTRAL_ADDITIONAL_ROUTES=' spec: central: exposure: loadBalancer: enabled: false port: 443 nodePort: enabled: false route: enabled: true persistence: persistentVolumeClaim: claimName: stackrox-db customize: annotations: serviceaccounts.openshift.io/oauth-redirecturi.main: sso/providers/openshift/callback 1 serviceaccounts.openshift.io/oauth-redirectreference.main: \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"central\\\"}}\" 2 serviceaccounts.openshift.io/oauth-redirecturi.second: sso/providers/openshift/callback 3 serviceaccounts.openshift.io/oauth-redirectreference.second: \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"second-central\\\"}}\" 4 '",
"oc patch centrals.platform.stackrox.io -n <namespace> \\ 1 <custom-resource> \\ 2 --patch \"USDCENTRAL_ADDITIONAL_ROUTES\" --type=merge",
"customize: central: annotations: serviceaccounts.openshift.io/oauth-redirecturi.main: sso/providers/openshift/callback 1 serviceaccounts.openshift.io/oauth-redirectreference.main: \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"central\\\"}}\" 2 serviceaccounts.openshift.io/oauth-redirecturi.second: sso/providers/openshift/callback 3 serviceaccounts.openshift.io/oauth-redirectreference.second: \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"second-central\\\"}}\" 4",
"helm upgrade -n stackrox stackrox-central-services rhacs/central-services -f <path_to_values_public.yaml> 1",
"{ \"idToken\": \"<id_token>\" }",
"{ \"accessToken\": \"<access_token>\" }"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html-single/operating/index
|
Chapter 6. Monitoring and tuning Data Grid queries
|
Chapter 6. Monitoring and tuning Data Grid queries Data Grid exposes statistics for queries and provides attributes that you can adjust to improve query performance. 6.1. Getting query statistics Collect statistics to gather information about performance of your indexes and queries, including information such as the types of indexes and average time for queries to complete. Procedure Do one of the following: Invoke the getSearchStatistics() or getClusteredSearchStatistics() methods for embedded caches. Use GET requests to obtain statistics for remote caches from the REST API. Embedded caches // Statistics for the local cluster member SearchStatistics statistics = Search.getSearchStatistics(cache); // Consolidated statistics for the whole cluster CompletionStage<SearchStatisticsSnapshot> statistics = Search.getClusteredSearchStatistics(cache) Remote caches 6.2. Tuning query performance Use the following guidelines to help you improve the performance of indexing operations and queries. Checking index usage statistics Queries against partially indexed caches return slower results. For instance, if some fields in a schema are not annotated then the resulting index does not include those fields. Start tuning query performance by checking the time it takes for each type of query to run. If your queries seem to be slow, you should make sure that queries are using the indexes for caches and that all entities and field mappings are indexed. Adjusting the commit interval for indexes Indexing can degrade write throughput for Data Grid clusters. The commit-interval attribute defines the interval, in milliseconds, between which index changes that are buffered in memory are flushed to the index storage and a commit is performed. This operation is costly so you should avoid configuring an interval that is too small. The default is 1000 ms (1 second). Adjusting the refresh interval for queries The refresh-interval attribute defines the interval, in milliseconds, between which the index reader is refreshed. The default value is 0 , which returns data in queries as soon as it is written to a cache. A value greater than 0 results in some stale query results but substantially increases throughput, especially in write-heavy scenarios. If you do not need data returned in queries as soon as it is written, you should adjust the refresh interval to improve query performance.
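As a sketch of where these attributes are set, a cache declaration might tune them as follows; the cache name, storage type, and interval values are placeholders to adapt to your deployment:

<distributed-cache name="books">
  <indexing storage="filesystem">
    <!-- flush buffered index changes every 2 seconds instead of every second -->
    <index-writer commit-interval="2000"/>
    <!-- accept up to 5 seconds of staleness in exchange for higher write throughput -->
    <index-reader refresh-interval="5000"/>
  </indexing>
</distributed-cache>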
|
[
"// Statistics for the local cluster member SearchStatistics statistics = Search.getSearchStatistics(cache); // Consolidated statistics for the whole cluster CompletionStage<SearchStatisticsSnapshot> statistics = Search.getClusteredSearchStatistics(cache)",
"GET /rest/v2/caches/{cacheName}/search/stats"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/querying_data_grid_caches/query-monitoring-tuning
|
Chapter 4. Disconnected installation
|
Chapter 4. Disconnected installation 4.1. Ansible Automation Platform installation on disconnected RHEL Install Ansible Automation Platform automation controller and a private automation hub, with an installer-managed database located on the automation controller without an Internet connection. 4.1.1. Prerequisites To install Ansible Automation Platform on a disconnected network, complete the following prerequisites: Create a subscription manifest. Download the Ansible Automation Platform setup bundle. Create DNS records for automation controller and private automation hub servers. Note The setup bundle includes additional components that make installing Ansible Automation Platform easier in a disconnected environment. These include the Ansible Automation Platform RPMs and the default execution environment (EE) images. 4.1.2. System Requirements Hardware requirements are documented in the Automation Platform Installation Guide. Reference the "Red Hat Ansible Automation Platform Installation Guide" in the Ansible Automation Platform Product Documentation for your version of Ansible Automation Platform. 4.1.3. RPM Source RPM dependencies for Ansible Automation Platform that come from the BaseOS and AppStream repositories are not included in the setup bundle. To add these dependencies, you must obtain access to BaseOS and AppStream repositories. Satellite is the recommended method from Red Hat to synchronize repositories reposync - Makes full copies of the required RPM repositories and hosts them on the disconnected network RHEL Binary DVD - Use the RPMs available on the RHEL 8 Binary DVD Note The RHEL Binary DVD method requires the DVD for supported versions of RHEL 8.4 or higher. See Red Hat Enterprise Linux Life Cycle for information on which versions of RHEL are currently supported. 4.2. Synchronizing RPM repositories by using reposync To perform a reposync you need a RHEL host that has access to the Internet. After the repositories are synced, you can move the repositories to the disconnected network hosted from a web server. Procedure Attach the BaseOS and AppStream required repositories: # subscription-manager repos \ --enable rhel-8-for-x86_64-baseos-rpms \ --enable rhel-8-for-x86_64-appstream-rpms Perform the reposync: # dnf install yum-utils # reposync -m --download-metadata --gpgcheck \ -p /path/to/download Make certain that you use reposync with --download-metadata and without --newest-only . See [RHEL 8] Reposync. If not using --newest-only the repos downloaded will be ~90GB. If using --newest-only the repos downloaded will be ~14GB. If you plan to use Red Hat Single Sign-On (RHSSO) you must also sync these repositories. jb-eap-7.3-for-rhel-8-x86_64-rpms rh-sso-7.4-for-rhel-8-x86_64-rpms After the reposync is completed your repositories are ready to use with a web server. Move the repositories to your disconnected network. 4.3. Creating a new web server to host repositories If you do not have an existing web server to host your repositories, create one with the synced repositories. Procedure Use the following steps if creating a new web server. 
Install prerequisites: USD sudo dnf install httpd Configure httpd to serve the repo directory: /etc/httpd/conf.d/repository.conf DocumentRoot '/path/to/repos' <LocationMatch "^/+USD"> Options -Indexes ErrorDocument 403 /.noindex.html </LocationMatch> <Directory '/path/to/repos'> Options All Indexes FollowSymLinks AllowOverride None Require all granted </Directory> Ensure that the directory is readable by an apache user: USD sudo chown -R apache /path/to/repos Configure SELinux: USD sudo semanage fcontext -a -t httpd_sys_content_t "/path/to/repos(/.*)?" USD sudo restorecon -ir /path/to/repos Enable httpd: USD sudo systemctl enable --now httpd.service Open firewall: USD sudo firewall-cmd --zone=public --add-service=http -add-service=https --permanent USD sudo firewall-cmd --reload On automation controller and automation hub, add a repo file at /etc/yum.repos.d/local.repo , add the optional repos if needed: [Local-BaseOS] name=Local BaseOS baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-baseos-rpms enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release [Local-AppStream] name=Local AppStream baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-appstream-rpms enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release 4.4. Accessing RPM Repositories for Locally Mounted DVD If you are going to access the repositories from the DVD, it is necessary to set up a local repository. This section shows how to do that. Procedure Mount DVD or ISO DVD # mkdir /media/rheldvd && mount /dev/sr0 /media/rheldvd ISO # mkdir /media/rheldvd && mount -o loop rhrhel-8.6-x86_64-dvd.iso /media/rheldvd Create yum repo file at /etc/yum.repos.d/dvd.repo [dvd-BaseOS] name=DVD for RHEL - BaseOS baseurl=file:///media/rheldvd/BaseOS enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release [dvd-AppStream] name=DVD for RHEL - AppStream baseurl=file:///media/rheldvd/AppStream enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release Import the gpg key # rpm --import /media/rheldvd/RPM-GPG-KEY-redhat-release Note If the key is not imported you will see an error similar to # Curl error (6): Couldn't resolve host name for https://www.redhat.com/security/data/fd431d51.txt [Could not resolve host: www.redhat.com] In order to set up a repository see Need to set up yum repository for locally-mounted DVD on Red Hat Enterprise Linux 8 . 4.5. Adding a Subscription Manifest to Ansible Automation Platform without an Internet connection To add a subscription to Ansible Automation Platform without an Internet connection, create and import a subscription manifest. Procedure Login to access.redhat.com . Navigate to Subscriptions Subscriptions . Click Subscription Allocations . Click Create New subscription allocation . Name the new subscription allocation. Select Satellite 6.14 Satellite 6.14 as the type. Click Create . The Details tab will open for your subscription allocation. Click Subscriptions tab. Click Add Subscription . Find your Ansible Automation Platform subscription, in the Entitlements box add the number of entitlements you want to assign to your environment. A single entitlement is needed for each node that is managed by Ansible Automation Platform: server, network device, etc. Click Submit . Click Export Manifest . This downloads a file manifest_<allocation name>_<date>.zip that be imported with automation controller after installation. 4.6. 
Installing the Ansible Automation Platform Setup Bundle The "bundle" version is strongly recommended for disconnected installations as it comes with the RPM content for Ansible Automation Platform as well as the default execution environment images that are uploaded to your private automation hub during the installation process. 4.6.1. Downloading the Setup Bundle Procedure Download the Ansible Automation Platform setup bundle package by navigating to https://access.redhat.com/downloads/content/480 and click Download Now for the Ansible Automation Platform 2.3 Setup Bundle. 4.6.1.1. Installing the Setup Bundle The download and installation of the setup bundle needs to be located on automation controller. From automation controller, untar the bundle, edit the inventory file, and run the setup. Untar the bundle USD tar xvf \ ansible-automation-platform-setup-bundle-2.3-1.2.tar.gz USD cd ansible-automation-platform-setup-bundle-2.3-1.2 Edit the inventory file to include the required options automationcontroller group automationhub group admin_password pg_password automationhub_admin_password automationhub_pg_host, automationhub_pg_port automationhub_pg_password Example Inventory [automationcontroller] automationcontroller.example.org ansible_connection=local [automationcontroller:vars] peers=execution_nodes [automationhub] automationhub.example.org [all:vars] admin_password='password123' pg_database='awx' pg_username='awx' pg_password='dbpassword123' receptor_listener_port=27199 automationhub_admin_password='hubpassword123' automationhub_pg_host='automationcontroller.example.org' automationhub_pg_port='5432' automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password='dbpassword123' automationhub_pg_sslmode='prefer' Note The inventory should be kept intact after installation since it is used for backup, restore, and upgrade functions. Consider keeping a backup copy in a secure location, given that the inventory file contains passwords. Run the AAP setup bundle executable as the root user USD sudo -i # cd /path/to/ansible-automation-platform-setup-bundle-2.3-1.2 # ./setup.sh Once installation is complete, navigate to the Fully Qualified Domain Name (FQDN) for the automation controller node that was specified in the installation inventory file. Log in with the administrator credentials specified in the installation inventory file. 4.7. Completing Post Installation Tasks 4.7.1. Adding an automation controller Subscription Procedure Navigate to the FQDN of the Automation controller. Login with admin and the password you specified as admin_password in your inventory file. Click Browse and select the manifest.zip you created earlier. Click . Uncheck User analytics and Automation analytics . These rely on an Internet connection and should be turned off. Click . Read the End User License Agreement and click Submit if you agree. 4.7.2. Updating the CA trust store 4.7.2.1. Self-Signed Certificates By default, automation hub and automation controller are installed using self signed certificates. This creates an issue where automation controller does not trust automation hub's certificate and does not download the execution environments from automation hub. The solution is to import automation hub's CA cert as a trusted cert on automation controller. You can use SCP or directly copy and paste from one file into another to perform this action. The following steps are copied from a KB article found at https://access.redhat.com/solutions/6707451 . 4.7.2.2. 
Copying the root certificate on the private automation hub to the automation controller using secure copy (SCP) If SSH is available as the root user between automation controller and private automation hub, use SCP to copy the root certificate on private automation hub to automation controller and run update-ca-trust on automation controller to update the CA trust store. On the Automation controller USD sudo -i # scp <hub_fqdn>:/etc/pulp/certs/root.crt /etc/pki/ca-trust/source/anchors/automationhub-root.crt # update-ca-trust 4.7.2.3. Copying and Pasting If SSH is unavailable as root between private automation hub and automation controller, copy the contents of the file /etc/pulp/certs/root.crt on private automation hub and paste it into a new file on automation controller called /etc/pki/ca-trust/source/anchors/automationhub-root.crt . After the new file is created, run the command update-ca-trust to update the CA trust store with the new certificate. On the Private automation hub USD sudo -i # cat /etc/pulp/certs/root.crt (copy the contents of the file, including the lines with 'BEGIN CERTIFICATE' and 'END CERTIFICATE') On automation controller USD sudo -i # vi /etc/pki/ca-trust/source/anchors/automationhub-root.crt (paste the contents of the root.crt file from the {PrivateHubName} into the new file and write to disk) # update-ca-trust 4.8. Importing Collections into Private Automation Hub You can download collection tarball files from the following sources: Red Hat certified collections are found on Red Hat Automation Hub . Community collections are found on Ansible Galaxy . 4.8.1. Downloading collection from Red Hat Automation Hub This section gives instructions on how to download a collection from Red Hat Automation Hub. If the collection has dependencies, they will also need to be downloaded and installed. Procedure Navigate to https://console.redhat.com/ansible/automation-hub/ and login with your Red Hat credentials. Click on the collection you wish to download. Click Download tarball To verify if a collection has dependencies, click the Dependencies tab. Download any dependencies needed for this collection. 4.9. Creating Collection Namespace The namespace of the collection must exist for the import to be successful. You can find the namespace name by looking at the first part of the collection tarball filename. For example the namespace of the collection ansible-netcommon-3.0.0.tar.gz is ansible . Procedure Login to private automation hub web console. Navigate to Collections Namespaces . Click Create . Provide the namespace name. Click Create . 4.9.1. Importing the collection tarball with GUI Login to private automation hub web console. Navigate to Collections Namespaces . Click on View collections of the namespace you will be importing the collection into. Click Upload collection . Click the folder icon and select the tarball of the collection. Click Upload . This opens the 'My Imports' page. You can see the status of the import and various details of the files and modules that have been imported. 4.9.1.1. Importing the collection tarball using ansible-galaxy via CLI You can import collections into the private automation hub by using the command-line interface rather than the GUI. Copy the collection tarballs to the private automation hub. Log in to the private automation hub server through SSH. Add the self-signed root CA cert to the trust store on the automation hub. 
# cp /etc/pulp/certs/root.crt \ /etc/pki/ca-trust/source/anchors/automationhub-root.crt # update-ca-trust Update the /etc/ansible/ansible.cfg file with your hub configuration. Use either a token or a username and password for authentication. [galaxy] server_list = private_hub [galaxy_server.private_hub] url=https://<hub_fqdn>/api/galaxy/ token=<token_from_private_hub> Import the collection using the ansible-galaxy command. USD ansible-galaxy collection publish <collection_tarball> Note Create the namespace that the collection belongs to in advance or publishing the collection will fail. 4.10. Approving the Imported Collection After you have imported collections with either the GUI or the CLI method, you must approve them by using the GUI. After they are approved, they are available for use. Procedure Log in to private automation hub web console. Go to Collections Approval . Click Approve for the collection you wish to approve. The collection is now available for use in your private automation hub. Note The collection is added to the "Published" repository regardless of its source. Import any dependency for the collection using these same steps. Recommended collections depend on your use case. Ansible and Red Hat provide these collections. 4.10.1. Custom Execution Environments Use the ansible-builder program to create custom execution environment images. For disconnected environments, custom EE images can be built in the following ways: Build an EE image on an internet-facing system and import it to the disconnected environment Build an EE image entirely on the disconnected environment with some modifications to the normal process of using ansible-builder Create a minimal base container image that includes all of the necessary modifications for a disconnected environment, then build custom EE images from the base container image 4.10.1.1. Transferring a Custom EE Images Across a Disconnected Boundary A custom execution environment image can be built on an internet-facing machine using the existing documentation. Once an execution environment has been created it is available in the local Podman image cache. You can then transfer the custom EE image across a disconnected boundary. To transfer the custom EE image across a disconnected boundary, first save the image: Save the image: USD podman image save localhost/custom-ee:latest | gzip -c custom-ee-latest.tar.gz Transfer the file across the disconnected boundary by using an existing mechanism such as sneakernet, one-way diode, etc.. After the image is available on the disconnected side, import it into the local podman cache, tag it, and push it to the disconnected hub: USD podman image load -i custom-ee-latest.tar.gz USD podman image tag localhost/custom-ee <hub_fqdn>/custom-ee:latest USD podman login <hub_fqdn> --tls-verify=false USD podman push <hub_fqdn>/custom-ee:latest 4.11. Building an Execution Environment in a Disconnected Environment When building a custom execution environment, the ansible-builder tool defaults to downloading the following requirements from the internet: Ansible Galaxy (galaxy.ansible.com) or Automation Hub (cloud.redhat.com) for any collections added to the EE image. PyPI (pypi.org) for any python packages required as collection dependencies. The UBI repositories (cdn.redhat.com) for updating any UBI-based EE images. The RHEL repositories might also be needed to meet certain collection requirements. registry.redhat.io for access to the ansible-builder-rhel8 container image. 
Building an EE image in a disconnected environment requires a subset of all of these mirrored, or otherwise made available on the disconnected network. See Importing Collections into Private Automation Hub for information about importing collections from Galaxy or Automation Hub into a private automation hub. Mirrored PyPI content once transferred into the high-side network can be made available using a web server or an artifact repository like Nexus. The UBI repositories can be mirrored on the low-side using a tool like reposync , imported to the disconnected environment, and made available from Satellite or a simple web server (since the content is freely redistributable). The ansible-builder-rhel8 container image can be imported into a private automation hub in the same way a custom EE can be imported. See Transferring a Custom EE Images Across a Disconnected Boundary for details substituting localhost/custom-ee for registry.redhat.io/ansible-automation-platform-21/ansible-builder-rhel8 . This will make the ansible-builder-rhel8 image available in the private automation hub registry along with the default EE images. Once all of the prerequisites are available on the high-side network, ansible-builder and Podman can be used to create a custom execution environment image. 4.12. Installing the ansible-builder RPM Procedure On a RHEL system, install the ansible-builder RPM. This can be done in one of several ways: Subscribe the RHEL box to a Satellite on the disconnected network. Attach the Ansible Automation Platform subscription and enable the Ansible Automation Platform repository. Install the ansible-builder RPM. Note This is preferred if a Satellite exists because the execution environment images can use RHEL content from the Satellite if the underlying build host is registered. Unarchive the Ansible Automation Platform setup bundle. Install the ansible-builder RPM and its dependencies from the included content: USD tar -xzvf ansible-automation-platform-setup-bundle-2.3-1.2.tar.gz USD cd ansible-automation-platform-setup-bundle-2.3-1.2/bundle/el8/repos/ USD sudo yum install ansible-builder-1.2.0-1.el9ap.noarch.rpm python38-requirements-parser-0.2.0-4.el9ap.noarch.rpm Create a directory for your custom EE build artifacts. USD mkdir custom-ee USD cd custom-ee/ Create an execution-environment.yml file that defines the requirements for your custom EE following the documentation at https://ansible-builder.readthedocs.io/en/stable/definition/ . Override the EE_BASE_IMAGE and EE_BUILDER_IMAGE variables to point to the EEs available in your private automation hub. USD cat execution-environment.yml --- version: 1 build_arg_defaults: EE_BASE_IMAGE: '<hub_fqdn>/ee-supported-rhel8:latest' EE_BUILDER_IMAGE: '<hub_fqdn>/ansible-builder-rhel8:latest' dependencies: python: requirements.txt galaxy: requirements.yml Create an ansible.cfg file that points to your private automation hub and contains credentials that allow uploading, such as an admin user token. USD cat ansible.cfg [galaxy] server_list = private_hub [galaxy_server.private_hub] url=https://<hub_fqdn>/api/galaxy/ token=<admin_token> Create a ubi.repo file that points to your disconnected UBI repo mirror (this could be your Satellite if the UBI content is hosted there). This is an example output where reposync was used to mirror the UBI repos. 
USD cat ubi.repo [ubi-8-baseos] name = Red Hat Universal Base Image 8 (RPMs) - BaseOS baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-baseos enabled = 1 gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release gpgcheck = 1 [ubi-8-appstream] name = Red Hat Universal Base Image 8 (RPMs) - AppStream baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-appstream enabled = 1 gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release gpgcheck = 1 Add the CA certificate used to sign the private automation hub web server certificate. For self-signed certificates (the installer default), make a copy of the file /etc/pulp/certs/root.crt from your private automation hub and name it hub-root.crt . If an internal certificate authority was used to request and sign the private automation hub web server certificate, make a copy of that CA certificate called hub-root.crt . Create your python requirements.txt and ansible collection requirements.yml with the content needed for your custom EE image. Note that any collections you require should already be uploaded into your private automation hub. Use ansible-builder to create the context directory used to build the EE image. USD ansible-builder create Complete! The build context can be found at: /home/cloud-user/custom-ee/context USD ls -1F ansible.cfg context/ execution-environment.yml hub-root.crt pip.conf requirements.txt requirements.yml ubi.repo Copy the files used to override the internet-facing defaults into the context directory. USD cp ansible.cfg hub-root.crt pip.conf ubi.repo context/ Edit the file context/Containerfile and add the following modifications. In the first EE_BASE_IMAGE build section, add the ansible.cfg and hub-root.crt files and run the update-ca-trust command. In the EE_BUILDER_IMAGE build section, add the ubi.repo and pip.conf files. In the final EE_BASE_IMAGE build section, add the ubi.repo and pip.conf files. USD cat context/Containerfile ARG EE_BASE_IMAGE=<hub_fqdn>/ee-supported-rhel8:latest ARG EE_BUILDER_IMAGE=<hub_fqdn>/ansible-builder-rhel8:latest FROM USDEE_BASE_IMAGE as galaxy ARG ANSIBLE_GALAXY_CLI_COLLECTION_OPTS= USER root ADD _build /build WORKDIR /build # this section added ADD ansible.cfg /etc/ansible/ansible.cfg ADD hub-root.crt /etc/pki/ca-trust/source/anchors/hub-root.crt RUN update-ca-trust # end additions RUN ansible-galaxy role install -r requirements.yml \ --roles-path /usr/share/ansible/roles RUN ansible-galaxy collection install \ USDANSIBLE_GALAXY_CLI_COLLECTION_OPTS -r requirements.yml \ --collections-path /usr/share/ansible/collections FROM USDEE_BUILDER_IMAGE as builder COPY --from=galaxy /usr/share/ansible /usr/share/ansible ADD _build/requirements.txt requirements.txt RUN ansible-builder introspect --sanitize \ --user-pip=requirements.txt \ --write-bindep=/tmp/src/bindep.txt \ --write-pip=/tmp/src/requirements.txt # this section added ADD ubi.repo /etc/yum.repos.d/ubi.repo ADD pip.conf /etc/pip.conf # end additions RUN assemble FROM USDEE_BASE_IMAGE USER root COPY --from=galaxy /usr/share/ansible /usr/share/ansible # this section added ADD ubi.repo /etc/yum.repos.d/ubi.repo ADD pip.conf /etc/pip.conf # end additions COPY --from=builder /output/ /output/ RUN /output/install-from-bindep && rm -rf /output/wheels Create the EE image in the local podman cache using the podman command. USD podman build -f context/Containerfile \ -t <hub_fqdn>/custom-ee:latest Once the custom EE image builds successfully, push it to the private automation hub. USD podman push <hub_fqdn>/custom-ee:latest 4.12.1. 
Workflow for upgrading between minor Ansible Automation Platform releases To upgrade between minor releases of Ansible Automation Platform 2, use this general workflow. Procedure Download and unarchive the latest Ansible Automation Platform 2 setup bundle. Take a backup of the existing installation. Copy the existing installation inventory file into the new setup bundle directory. Run ./setup.sh to upgrade the installation. For example, to upgrade from version 2.2.0-7 to 2.3-1.2, make sure that both setup bundles are on the initial controller node where the installation occurred: USD ls -1F ansible-automation-platform-setup-bundle-2.2.0-7/ ansible-automation-platform-setup-bundle-2.2.0-7.tar.gz ansible-automation-platform-setup-bundle-2.3-1.2/ ansible-automation-platform-setup-bundle-2.3-1.2.tar.gz Back up the 2.2.0-7 installation: USD cd ansible-automation-platform-setup-bundle-2.2.0-7 USD sudo ./setup.sh -b USD cd .. Copy the 2.2.0-7 inventory file into the 2.3-1.2 bundle directory: USD cd ansible-automation-platform-setup-bundle-2.2.0-7 USD cp inventory ../ansible-automation-platform-setup-bundle-2.3-1.2/ USD cd .. Upgrade from 2.2.0-7 to 2.3-1.2 with the setup.sh script: USD cd ansible-automation-platform-setup-bundle-2.3-1.2 USD sudo ./setup.sh
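As noted earlier, the ansible-builder-rhel8 image can be moved across the disconnected boundary with the same podman workflow that is used for a custom EE. A sketch of that substitution, assuming the image has first been pulled on an internet-facing host that is logged in to registry.redhat.io, is:

# On the internet-facing side
podman pull registry.redhat.io/ansible-automation-platform-21/ansible-builder-rhel8:latest
podman image save registry.redhat.io/ansible-automation-platform-21/ansible-builder-rhel8:latest | gzip -c > ansible-builder-rhel8.tar.gz
# Transfer the archive across the boundary, then on the disconnected side
podman image load -i ansible-builder-rhel8.tar.gz
podman image tag registry.redhat.io/ansible-automation-platform-21/ansible-builder-rhel8:latest <hub_fqdn>/ansible-builder-rhel8:latest
podman login <hub_fqdn> --tls-verify=false
podman push <hub_fqdn>/ansible-builder-rhel8:latest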
|
[
"subscription-manager repos --enable rhel-8-for-x86_64-baseos-rpms --enable rhel-8-for-x86_64-appstream-rpms",
"dnf install yum-utils reposync -m --download-metadata --gpgcheck -p /path/to/download",
"sudo dnf install httpd",
"/etc/httpd/conf.d/repository.conf DocumentRoot '/path/to/repos' <LocationMatch \"^/+USD\"> Options -Indexes ErrorDocument 403 /.noindex.html </LocationMatch> <Directory '/path/to/repos'> Options All Indexes FollowSymLinks AllowOverride None Require all granted </Directory>",
"sudo chown -R apache /path/to/repos",
"sudo semanage fcontext -a -t httpd_sys_content_t \"/path/to/repos(/.*)?\" sudo restorecon -ir /path/to/repos",
"sudo systemctl enable --now httpd.service",
"sudo firewall-cmd --zone=public --add-service=http -add-service=https --permanent sudo firewall-cmd --reload",
"[Local-BaseOS] name=Local BaseOS baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-baseos-rpms enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release [Local-AppStream] name=Local AppStream baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-appstream-rpms enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release",
"mkdir /media/rheldvd && mount /dev/sr0 /media/rheldvd",
"mkdir /media/rheldvd && mount -o loop rhrhel-8.6-x86_64-dvd.iso /media/rheldvd",
"[dvd-BaseOS] name=DVD for RHEL - BaseOS baseurl=file:///media/rheldvd/BaseOS enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release [dvd-AppStream] name=DVD for RHEL - AppStream baseurl=file:///media/rheldvd/AppStream enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release",
"rpm --import /media/rheldvd/RPM-GPG-KEY-redhat-release",
"Curl error (6): Couldn't resolve host name for https://www.redhat.com/security/data/fd431d51.txt [Could not resolve host: www.redhat.com]",
"tar xvf ansible-automation-platform-setup-bundle-2.3-1.2.tar.gz cd ansible-automation-platform-setup-bundle-2.3-1.2",
"[automationcontroller] automationcontroller.example.org ansible_connection=local [automationcontroller:vars] peers=execution_nodes [automationhub] automationhub.example.org [all:vars] admin_password='password123' pg_database='awx' pg_username='awx' pg_password='dbpassword123' receptor_listener_port=27199 automationhub_admin_password='hubpassword123' automationhub_pg_host='automationcontroller.example.org' automationhub_pg_port='5432' automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password='dbpassword123' automationhub_pg_sslmode='prefer'",
"sudo -i cd /path/to/ansible-automation-platform-setup-bundle-2.3-1.2 ./setup.sh",
"sudo -i scp <hub_fqdn>:/etc/pulp/certs/root.crt /etc/pki/ca-trust/source/anchors/automationhub-root.crt update-ca-trust",
"sudo -i cat /etc/pulp/certs/root.crt (copy the contents of the file, including the lines with 'BEGIN CERTIFICATE' and 'END CERTIFICATE')",
"sudo -i vi /etc/pki/ca-trust/source/anchors/automationhub-root.crt (paste the contents of the root.crt file from the {PrivateHubName} into the new file and write to disk) update-ca-trust",
"cp /etc/pulp/certs/root.crt /etc/pki/ca-trust/source/anchors/automationhub-root.crt update-ca-trust",
"[galaxy] server_list = private_hub [galaxy_server.private_hub] url=https://<hub_fqdn>/api/galaxy/ token=<token_from_private_hub>",
"ansible-galaxy collection publish <collection_tarball>",
"podman image save localhost/custom-ee:latest | gzip -c custom-ee-latest.tar.gz",
"podman image load -i custom-ee-latest.tar.gz podman image tag localhost/custom-ee <hub_fqdn>/custom-ee:latest podman login <hub_fqdn> --tls-verify=false podman push <hub_fqdn>/custom-ee:latest",
"tar -xzvf ansible-automation-platform-setup-bundle-2.3-1.2.tar.gz cd ansible-automation-platform-setup-bundle-2.3-1.2/bundle/el8/repos/ sudo yum install ansible-builder-1.2.0-1.el9ap.noarch.rpm python38-requirements-parser-0.2.0-4.el9ap.noarch.rpm",
"mkdir custom-ee cd custom-ee/",
"cat execution-environment.yml --- version: 1 build_arg_defaults: EE_BASE_IMAGE: '<hub_fqdn>/ee-supported-rhel8:latest' EE_BUILDER_IMAGE: '<hub_fqdn>/ansible-builder-rhel8:latest' dependencies: python: requirements.txt galaxy: requirements.yml",
"cat ansible.cfg [galaxy] server_list = private_hub [galaxy_server.private_hub] url=https://<hub_fqdn>/api/galaxy/ token=<admin_token>",
"cat ubi.repo [ubi-8-baseos] name = Red Hat Universal Base Image 8 (RPMs) - BaseOS baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-baseos enabled = 1 gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release gpgcheck = 1 [ubi-8-appstream] name = Red Hat Universal Base Image 8 (RPMs) - AppStream baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-appstream enabled = 1 gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release gpgcheck = 1",
"ansible-builder create Complete! The build context can be found at: /home/cloud-user/custom-ee/context ls -1F ansible.cfg context/ execution-environment.yml hub-root.crt pip.conf requirements.txt requirements.yml ubi.repo",
"cp ansible.cfg hub-root.crt pip.conf ubi.repo context/",
"cat context/Containerfile ARG EE_BASE_IMAGE=<hub_fqdn>/ee-supported-rhel8:latest ARG EE_BUILDER_IMAGE=<hub_fqdn>/ansible-builder-rhel8:latest FROM USDEE_BASE_IMAGE as galaxy ARG ANSIBLE_GALAXY_CLI_COLLECTION_OPTS= USER root ADD _build /build WORKDIR /build this section added ADD ansible.cfg /etc/ansible/ansible.cfg ADD hub-root.crt /etc/pki/ca-trust/source/anchors/hub-root.crt RUN update-ca-trust end additions RUN ansible-galaxy role install -r requirements.yml --roles-path /usr/share/ansible/roles RUN ansible-galaxy collection install USDANSIBLE_GALAXY_CLI_COLLECTION_OPTS -r requirements.yml --collections-path /usr/share/ansible/collections FROM USDEE_BUILDER_IMAGE as builder COPY --from=galaxy /usr/share/ansible /usr/share/ansible ADD _build/requirements.txt requirements.txt RUN ansible-builder introspect --sanitize --user-pip=requirements.txt --write-bindep=/tmp/src/bindep.txt --write-pip=/tmp/src/requirements.txt this section added ADD ubi.repo /etc/yum.repos.d/ubi.repo ADD pip.conf /etc/pip.conf end additions RUN assemble FROM USDEE_BASE_IMAGE USER root COPY --from=galaxy /usr/share/ansible /usr/share/ansible this section added ADD ubi.repo /etc/yum.repos.d/ubi.repo ADD pip.conf /etc/pip.conf end additions COPY --from=builder /output/ /output/ RUN /output/install-from-bindep && rm -rf /output/wheels",
"podman build -f context/Containerfile -t <hub_fqdn>/custom-ee:latest",
"podman push <hub_fqdn>/custom-ee:latest",
"ls -1F ansible-automation-platform-setup-bundle-2.2.0-7/ ansible-automation-platform-setup-bundle-2.2.0-7.tar.gz ansible-automation-platform-setup-bundle-2.3-1.2/ ansible-automation-platform-setup-bundle-2.3-1.2.tar.gz",
"cd ansible-automation-platform-setup-bundle-2.2.0-7 sudo ./setup.sh -b cd ..",
"cd ansible-automation-platform-setup-bundle-2.2.0-7 cp inventory ../ansible-automation-platform-setup-bundle-2.3-1.2/ cd ..",
"cd ansible-automation-platform-setup-bundle-2.3-1.2 sudo ./setup.sh"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_installation_guide/disconnected-installation
|
3.4. Removing Control Groups
|
3.4. Removing Control Groups Remove cgroups with the cgdelete command, which has syntax similar to that of cgcreate . Enter the following command as root : where: controllers is a comma-separated list of controllers. path is the path to the cgroup relative to the root of the hierarchy. For example: cgdelete can also remove all subgroups recursively when the -r option is specified. Note that when you delete a cgroup, all of its processes move to its parent group.
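For example, to remove a group together with all of its subgroups in a single step (the controllers and group name here are only illustrative):

~]# cgdelete -r cpu,memory:/test-group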
|
[
"~]# cgdelete controllers : path",
"~]# cgdelete net_prio:/test-subgroup"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/resource_management_guide/sec-removing_cgroups-libcgroup
|
Chapter 3. Installing a cluster on OpenStack with customizations
|
Chapter 3. Installing a cluster on OpenStack with customizations In OpenShift Container Platform version 4.16, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP). To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.16 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . You have the metadata service enabled in RHOSP. 3.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 3.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 3.2.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 3.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. 
After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.2.4. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 3.2. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 3.3. 
Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 3.2.4.1. Example load balancer configuration for clusters that are deployed with user-managed load balancers This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 3.1. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 
3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.4. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 3.5. 
Configuring an image registry with custom storage on clusters that run on RHOSP After you install a cluster on Red Hat OpenStack Platform (RHOSP), you can use a Cinder volume that is in a specific availability zone for registry storage. Procedure Create a YAML file that specifies the storage class and availability zone to use. For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name> Note OpenShift Container Platform does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration. From a command line, apply the configuration: USD oc apply -f <storage_class_file_name> Example output storageclass.storage.k8s.io/custom-csi-storageclass created Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry namespace. For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: "true" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3 1 Enter the namespace openshift-image-registry . This namespace allows the Cluster Image Registry Operator to consume the PVC. 2 Optional: Adjust the volume size. 3 Enter the name of the storage class that you created. From a command line, apply the configuration: USD oc apply -f <pvc_file_name> Example output persistentvolumeclaim/csi-pvc-imageregistry created Replace the original persistent volume claim in the image registry configuration with the new claim: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]' Example output config.imageregistry.operator.openshift.io/cluster patched Over the several minutes, the configuration is updated. Verification To confirm that the registry is using the resources that you defined: Verify that the PVC claim value is identical to the name that you provided in your PVC definition: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output ... status: ... managementState: Managed pvc: claim: csi-pvc-imageregistry ... Verify that the status of the PVC is Bound : USD oc get pvc -n openshift-image-registry csi-pvc-imageregistry Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m 3.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). 
Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 3.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 3.8. 
Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case. For example: #... [LoadBalancer] lb-provider = "amphora" 1 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #... 1 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 2 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 3 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 4 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 5 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 3.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. 
Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. 
Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for OpenStack 3.10.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.10.2. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 3.10.3. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the install-config.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. 
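Before you edit the file, you can list the flavors available to your project to identify the bare metal flavor names. A sketch with the RHOSP CLI (the flavor name is a placeholder and the output varies by deployment):

openstack flavor list
openstack flavor show <bare_metal_flavor>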
Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . If your cluster runs on an RHOSP version that is more than 16.1.6 and less than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes. The RHOSP network supports both VM and bare metal server attachment. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an install-config.yaml file as part of the OpenShift Container Platform installation process. Procedure In the install-config.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of controlPlane.platform.openstack.type to a bare metal flavor. Change the value of compute.platform.openstack.type to a bare metal flavor. If you want to deploy your machines on a pre-existing network, change the value of platform.openstack.machinesSubnet to the RHOSP subnet UUID of the network. Control plane and compute machines must use the same subnet. An example bare metal install-config.yaml file controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 ... compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 ... platform: openstack: machinesSubnet: <subnet_UUID> 3 ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. 3 If you want to use a pre-existing network, change this value to the UUID of the RHOSP subnet. Use the updated install-config.yaml file to complete the installation process. The compute machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 3.10.4. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. 
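If you are unsure how an existing provider network is mapped to the physical network, an administrator can inspect its provider attributes (a sketch; the network name is a placeholder):

openstack network show <provider_network> -c "provider:network_type" -c "provider:physical_network" -c "provider:segmentation_id"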
Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 3.10.4.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 3.10.4.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... 
networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 3.10.5. Sample customized install-config.yaml file for RHOSP The following example install-config.yaml files demonstrate all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. Example 3.2. Example single stack install-config.yaml file apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Example 3.3. Example dual stack install-config.yaml file apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 3.10.6. Configuring a cluster with dual-stack networking You can create a dual-stack cluster on RHOSP. However, the dual-stack configuration is enabled only if you are using an RHOSP network with IPv4 and IPv6 subnets. Note RHOSP does not support the conversion of an IPv4 single-stack cluster to a dual-stack cluster network. 3.10.6.1. Deploying the dual-stack cluster For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. Prerequisites You enabled Dynamic Host Configuration Protocol (DHCP) on the subnets. Procedure Create a network with IPv4 and IPv6 subnets. The available address modes for the ipv6-ra-mode and ipv6-address-mode fields are: dhcpv6-stateful , dhcpv6-stateless , and slaac . 
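A sketch of this step with the RHOSP CLI, reusing the placeholder names and address ranges from the sample install-config.yaml files in this chapter, is:

openstack network create dualstack
openstack subnet create --network dualstack --dhcp --subnet-range 192.168.25.0/24 subnet-v4
openstack subnet create --network dualstack --dhcp --ip-version 6 --subnet-range fd2e:6f44:5dd8:c956::/64 --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful subnet-v6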
Note The dual-stack network MTU must accommodate both the minimum MTU for IPv6, which is 1280 , and the OVN-Kubernetes encapsulation overhead, which is 100 . Create the API and Ingress VIPs ports. Add the IPv6 subnet to the router to enable router advertisements. If you are using a provider network, you can enable router advertisements by adding the network as an external gateway, which also enables external connectivity. Choose one of the following install-config.yaml configurations: For an IPv4/IPv6 dual-stack cluster where you set IPv4 as the primary endpoint for your cluster nodes, edit the install-config.yaml file in a similar way to the following example: apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: "192.168.25.0/24" - cidr: "fd2e:6f44:5dd8:c956::/64" clusterNetwork: 2 - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: 3 - 172.30.0.0/16 - fd02::/112 platform: openstack: ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad'] 4 apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v4 id: subnet-v4-id - subnet: 9 name: subnet-v6 id: subnet-v6-id network: 10 name: dualstack id: network-id 1 2 3 You must specify an IP address range for both the IPv4 and IPv6 address families. 4 Specify the virtual IP (VIP) address endpoints for the Ingress VIP services to provide an interface to the cluster. 5 Specify the virtual IP (VIP) address endpoints for the API VIP services to provide an interface to the cluster. 6 Specify the dual-stack network details that all of the nodes across the cluster use for their networking needs. 7 The Classless Inter-Domain Routing (CIDR) of any subnet specified in this field must match the CIDRs listed on networks.machineNetwork . 8 9 You can specify a value for either name or id , or both. 10 Specifying the network under the ControlPlanePort field is optional. For an IPv6/IPv4 dual-stack cluster where you set IPv6 as the primary endpoint for your cluster nodes, edit the install-config.yaml file in a similar way to the following example: apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: "fd2e:6f44:5dd8:c956::/64" - cidr: "192.168.25.0/24" clusterNetwork: 2 - cidr: fd01::/48 hostPrefix: 64 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 3 - fd02::/112 - 172.30.0.0/16 platform: openstack: ingressVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad', '192.168.25.79'] 4 apiVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36', '192.168.25.199'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v6 id: subnet-v6-id - subnet: 9 name: subnet-v4 id: subnet-v4-id network: 10 name: dualstack id: network-id 1 2 3 You must specify an IP address range for both the IPv4 and IPv6 address families. 4 Specify the virtual IP (VIP) address endpoints for the Ingress VIP services to provide an interface to the cluster. 5 Specify the virtual IP (VIP) address endpoints for the API VIP services to provide an interface to the cluster. 6 Specify the dual-stack network details that all the nodes across the cluster use for their networking needs. 
7 The CIDR of any subnet specified in this field must match the CIDRs listed on networks.machineNetwork . 8 9 You can specify a value for either name or id , or both. 10 Specifying the network under the ControlPlanePort field is optional. Optional: When you use an installation host in an isolated dual-stack network, the IPv6 address might not be reassigned correctly upon reboot. To resolve this problem on Red Hat Enterprise Linux (RHEL) 8, complete the following steps: Create a file called /etc/NetworkManager/system-connections/required-rhel8-ipv6.conf that includes the following configuration: [connection] type=ethernet [ipv6] addr-gen-mode=eui64 method=auto Reboot the installation host. Optional: When you use an installation host in an isolated dual-stack network, the IPv6 address might not be reassigned correctly upon reboot. To resolve this problem on Red Hat Enterprise Linux (RHEL) 9, complete the following steps: Create a file called /etc/NetworkManager/conf.d/required-rhel9-ipv6.conf that includes the following configuration: [connection] ipv6.addr-gen-mode=0 Reboot the installation host. Note The ip=dhcp,dhcp6 kernel argument, which is set on all of the nodes, results in a single Network Manager connection profile that is activated on multiple interfaces simultaneously. Because of this behavior, any additional network has the same connection enforced with an identical UUID. If you need an interface-specific configuration, create a new connection profile for that interface so that the default connection is no longer enforced on it. 3.10.7. Installation configuration for a cluster on OpenStack with a user-managed load balancer The following example install-config.yaml file demonstrates how to configure a cluster that uses an external, user-managed load balancer rather than the default internal load balancer. apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2 1 Regardless of which load balancer you use, the load balancer is deployed to this subnet. 2 The UserManaged value indicates that you are using an user-managed load balancer. 3.11. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. 
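For reference, the gather command mentioned above is typically run from the installation directory after a failed bootstrap, in a form similar to the following; treat this as a sketch, because the exact subcommand and flags can vary by release and platform:
USD ./openshift-install gather bootstrap --dir <installation_directory>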
Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.12. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 3.12.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. 
IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 3.12.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 3.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. 
An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.14. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 3.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.17. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses .
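As a quick reference for the floating IP parameters described in "Enabling access with floating IP addresses", the corresponding install-config.yaml fragment might look like the following sketch; the placeholder values must be replaced with your external network name and the FIPs you created:
platform:
  openstack:
    externalNetwork: <external_network>
    apiFloatingIP: <API_FIP>
    ingressFloatingIP: <apps_FIP>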
|
[
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"openstack role add --user <user> --project <project> swiftoperator",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>",
"oc apply -f <storage_class_file_name>",
"storageclass.storage.k8s.io/custom-csi-storageclass created",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3",
"oc apply -f <pvc_file_name>",
"persistentvolumeclaim/csi-pvc-imageregistry created",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'",
"config.imageregistry.operator.openshift.io/cluster patched",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"status: managementState: Managed pvc: claim: csi-pvc-imageregistry",
"oc get pvc -n openshift-image-registry csi-pvc-imageregistry",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3",
"./openshift-install wait-for install-complete --log-level debug",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: \"192.168.25.0/24\" - cidr: \"fd2e:6f44:5dd8:c956::/64\" clusterNetwork: 2 - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: 3 - 172.30.0.0/16 - fd02::/112 platform: openstack: ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad'] 4 apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v4 id: subnet-v4-id - subnet: 9 name: subnet-v6 id: subnet-v6-id network: 10 name: dualstack id: network-id",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: \"fd2e:6f44:5dd8:c956::/64\" - cidr: \"192.168.25.0/24\" clusterNetwork: 2 - cidr: fd01::/48 hostPrefix: 64 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 3 - fd02::/112 - 172.30.0.0/16 platform: openstack: ingressVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad', '192.168.25.79'] 4 apiVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36', '192.168.25.199'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v6 id: subnet-v6-id - subnet: 9 name: subnet-v4 id: subnet-v4-id network: 10 name: dualstack id: network-id",
"[connection] type=ethernet [ipv6] addr-gen-mode=eui64 method=auto",
"[connection] ipv6.addr-gen-mode=0",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_openstack/installing-openstack-installer-custom
|
Chapter 2. Creating Red Hat Ansible Automation Platform backup resources
|
Chapter 2. Creating Red Hat Ansible Automation Platform backup resources Backing up your Red Hat Ansible Automation Platform deployment involves creating backup resources for your deployed instances. Use the following procedures to create backup resources for your Red Hat Ansible Automation Platform deployment. We recommend taking backups before upgrading the Ansible Automation Platform Operator. Take a backup regularly in case you want to restore the platform to a state. 2.1. Backing up the Automation controller deployment Use this procedure to back up a deployment of the controller, including jobs, inventories, and credentials. Prerequisites You must be authenticated with an OpenShift cluster. You have installed Ansible Automation Platform Operator on the cluster. You have deployed automation controller using the Ansible Automation Platform Operator. Procedure Log in to Red Hat OpenShift Container Platform . Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Automation Controller Backup tab. Click Create AutomationControllerBackup . Enter a Name for the backup. In the Deployment name field, enter the name of the AutomationController custom resource object of the deployed Ansible Automation Platform instance being backed up. This name was created when you created your AutomationController object . If you want to use a custom, pre-created pvc: Optionally enter the name of the Backup persistant volume claim . Optionally enter the Backup PVC storage requirements , and Backup PVC storage class . Note If no pvc or storage class is provided, the cluster's default storage class is used to create the pvc. If you have a large database, specify your storage requests accordingly under Backup management pod resource requirements . Note You can check the size of the existing postgres database data directory by running the following command inside the postgres pod. USD df -h | grep "/var/lib/pgsql/data" Click Create . A backup tarball of the specified deployment is created and available for data recovery or deployment rollback. Future backups are stored in separate tar files on the same pvc. Verification Log in to Red Hat Red Hat OpenShift Container Platform Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator. Select the AutomationControllerBackup tab. Select the backup resource you want to verify. Scroll to Conditions and check that the Successful status is True . Note If the status is Failure , the backup has failed. Check the automation controller operator logs for the error to fix the issue. 2.2. Using YAML to back up the Automation controller deployment See the following procedure for how to back up a deployment of the automation controller using YAML. Prerequisites You must be authenticated with an OpenShift cluster. You have installed Ansible Automation Platform Operator on the cluster. You have deployed automation controller using the Ansible Automation Platform Operator. Procedure Create a file named "backup-awx.yml" with the following contents: --- apiVersion: automationcontroller.ansible.com/v1beta1 kind: AWXBackup metadata: name: awxbackup-2024-07-15 namespace: my-namespace spec: deployment_name: controller Note The "deployment_name" above is the name of the automation controller deployment you intend to backup from. The namespace above is the one containing the automation controller deployment you intend to back up. 
Use the oc apply command to create the backup object in your cluster: USD oc apply -f backup-awx.yml 2.3. Backing up the Automation hub deployment Use this procedure to back up a deployment of the hub, including all hosted Ansible content. Prerequisites You must be authenticated with an OpenShift cluster. You have installed Ansible Automation Platform Operator on the cluster. You have deployed automation hub using the Ansible Automation Platform Operator. Procedure Log in to Red Hat OpenShift Container Platform . Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Automation Hub Backup tab. Click Create AutomationHubBackup . Enter a Name for the backup. Enter the Deployment name of the deployed Ansible Automation Platform instance being backed up. For example, if your automation hub must be backed up and the deployment name is aap-hub , enter 'aap-hub' in the Deployment name field. If you want to use a custom, pre-created pvc: Optionally, enter the name of the Backup persistent volume claim , Backup persistent volume claim namespace , Backup PVC storage requirements , and Backup PVC storage class . Click Create . A backup of the specified deployment is created and available for data recovery or deployment rollback.
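If you created the controller backup from YAML as shown earlier, you can also follow its progress from the command line instead of the web console; a sketch, assuming the example resource name and namespace used above:
USD oc get awxbackup -n my-namespace
USD oc describe awxbackup awxbackup-2024-07-15 -n my-namespace
In the output, check that the Successful condition reports a status of True, which mirrors the verification steps described for the web console.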
|
[
"df -h | grep \"/var/lib/pgsql/data\"",
"--- apiVersion: automationcontroller.ansible.com/v1beta1 kind: AWXBackup metadata: name: awxbackup-2024-07-15 namespace: my-namespace spec: deployment_name: controller"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_operator_backup_and_recovery_guide/aap-backup
|
Chapter 14. Setting up a remote diskless system
|
Chapter 14. Setting up a remote diskless system In a network environment, you can setup multiple clients with the identical configuration by deploying a remote diskless system. By using current Red Hat Enterprise Linux server version, you can save the cost of hard drives for these clients as well as configure the gateway on a separate server. The following diagram describes the connection of a diskless client with the server through Dynamic Host Configuration Protocol (DHCP) and Trivial File Transfer Protocol (TFTP) services. Figure 14.1. Remote diskless system settings diagram 14.1. Preparing environments for the remote diskless system Prepare your environment to continue with remote diskless system implementation. The remote diskless system booting requires the following services: Trivial File Transfer Protocol (TFTP) service, which is provided by tftp-server. The system uses the tftp service to retrieve the kernel image and the initial RAM disk, initrd, over the network, through the Preboot Execution Environment (PXE) loader. Dynamic Host Configuration Protocol (DHCP) service, which is provided by dhcp. Prerequisites You have set up your network connection. Procedure Install the dracut-network package: Add the following line to the /etc/dracut.conf.d/network.conf file: Ensure correct functionality of the remote diskless system in your environment by configuring services in the following order: Configure a TFTP service. For more information, see Configuring a TFTP service for diskless clients . Configure a DHCP server. For more information, see Configuring a DHCP server for diskless clients . Configure the Network File System (NFS) and an exported file system. For more information, see Configuring an exported file system for diskless clients . 14.2. Configuring a TFTP service for diskless clients For the remote diskless system to function correctly in your environment, you need to first configure a Trivial File Transfer Protocol (TFTP) service for diskless clients. Note This configuration does not boot over the Unified Extensible Firmware Interface (UEFI). For UEFI based installation, see Configuring a TFTP server for UEFI-based clients . Prerequisites You have installed the following packages: tftp-server syslinux Procedure Enable the tftp service: Create a pxelinux directory in the tftp root directory: Copy the /usr/share/syslinux/pxelinux.0 file to the /var/lib/tftpboot/pxelinux/ directory: Copy /usr/share/syslinux/ldlinux.c32 to /var/lib/tftpboot/pxelinux/ : Create a pxelinux.cfg directory in the tftp root directory: Verification Check status of service tftp : 14.3. Configuring a DHCP server for diskless clients The remote diskless system requires several pre-installed services to enable correct functionality. Prerequisites Install the Trivial File Transfer Protocol (TFTP) service. You have installed the following package: dhcp-server You have configured the tftp service for diskless clients. For more information, see Configuring a TFTP service for diskless clients . Procedure Add the following configuration to the /etc/dhcp/dhcpd.conf file to setup a DHCP server and enable Preboot Execution Environment (PXE) for booting: Your DHCP configuration might be different depending on your environment, like setting lease time or fixed address. For details, see Providing DHCP services . Note While using libvirt virtual machine as a diskless client, the libvirt daemon provides the DHCP service, and the standalone DHCP server is not used. 
In this situation, network booting must be enabled with the bootp file=<filename> option in the libvirt network configuration, virsh net-edit . Enable dhcpd.service : Verification Check the status of service dhcpd.service : 14.4. Configuring an exported file system for diskless clients As a part of configuring a remote diskless system in your environment, you must configure an exported file system for diskless clients. Prerequisites You have configured the tftp service for diskless clients. See section Configuring a TFTP service for diskless clients . You have configured the Dynamic Host Configuration Protocol (DHCP) server. See section Configuring a DHCP server for diskless clients . Procedure Configure the Network File System (NFS) server to export the root directory by adding it to the /etc/exports directory. For the complete set of instructions see Deploying an NFS server Install a complete version of Red Hat Enterprise Linux to the root directory to accommodate completely diskless clients. To do that you can either install a new base system or clone an existing installation. Install Red Hat Enterprise Linux to the exported location by replacing exported-root-directory with the path to the exported file system: By setting the releasever option to / , releasever is detected from the host ( / ) system. Use the rsync utility to synchronize with a running system: Replace example.com with the hostname of the running system with which to synchronize via the rsync utility. Replace exported-root-directory with the path to the exported file system. Note, that for this option you must have a separate existing running system, which you will clone to the server by the command above. Configure the file system, which is ready for export, before you can use it with diskless clients: Copy the diskless client supported kernel ( vmlinuz-_kernel-version_pass:attributes ) to the tftp boot directory: Create the initramfs- kernel-version .img file locally and move it to the exported root directory with NFS support: For example: Example for creating initrd, using current running kernel version, and overwriting existing image: Change the file permissions for initrd to 0644 : Warning If you do not change the initrd file permissions, the pxelinux.0 boot loader fails with a "file not found" error. Copy the resulting initramfs- kernel-version .img file into the tftp boot directory: Add the following configuration in the /var/lib/tftpboot/pxelinux/pxelinux.cfg/default file to edit the default boot configuration for using the initrd and the kernel: This configuration instructs the diskless client root to mount the /exported-root-directory exported file system in a read/write format. Optional: Mount the file system in a read-only` format by editing the /var/lib/tftpboot/pxelinux/pxelinux.cfg/default file with the following configuration: Restart the NFS server: You can now export the NFS share to diskless clients. These clients can boot over the network via Preboot Execution Environment (PXE). 14.5. Re-configuring a remote diskless system If you want to install packages, restart services, or debug the issues, you can reconfigure the system. Prerequisites You have enabled the no_root_squash option in the exported file system. Procedure Change the user password: Change the command line to /exported/root/directory : Change the password for the user you want: Replace the <username> with a real user for whom you want to change the password. Exit the command line. 
Install software on a remote diskless system: Replace <package> with the actual package you want to install. Configure two separate exports to split a remote diskless system into a /usr and a /var . For more information, see Deploying an NFS server . 14.6. Troubleshooting common issues with loading a remote diskless system Based on the earlier configuration, some issues can occur while loading the remote diskless system. Following are some examples of the most common issues and ways to troubleshoot them on a Red Hat Enterprise Linux server. Example 14.1. The client does not get an IP address Check if the Dynamic Host Configuration Protocol (DHCP) service is enabled on the server. Check if the dhcp.service is running: If the dhcp.service is inactive, enable and start it: Reboot the diskless client. Check the DHCP configuration file /etc/dhcp/dhcpd.conf . For details, see Configuring a DHCP server for diskless clients . Check if the Firewall ports are opened. Check if the dhcp.service is listed in active services: If the dhcp.service is not listed in active services, add it to the list: Check if the nfs.service is listed in active services: If the nfs.service is not listed in active services, add it to the list: Example 14.2. The file is not available during the booting a remote diskless system Check if the file is in the /var/lib/tftpboot/ directory. If the file is in the directory, ensure if it has the following permissions: Check if the Firewall ports are opened. Example 14.3. System boot failed after loading kernel / initrd Check if the NFS service is enabled on a server. Check if nfs.service is running: If the nfs.service is inactive, you must start and enable it: Check if the parameters are correct in the /var/lib/tftpboot/pxelinux.cfg/ directory. For details, see Configuring an exported file system for diskless clients . Check if the Firewall ports are opened.
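A compact way to run the service and firewall checks from the examples above on the server is sketched below; the unit names match the earlier sections, and opening the tftp firewalld service is an assumption for cases where that port was not already opened:
systemctl status tftp dhcpd nfs-server
firewall-cmd --list-services
firewall-cmd --add-service=tftp --add-service=dhcp --add-service=nfs --permanent
firewall-cmd --reload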
|
[
"dnf install dracut-network",
"add_dracutmodules+=\" nfs \"",
"systemctl enable --now tftp",
"mkdir -p /var/lib/tftpboot/pxelinux/",
"cp /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot/pxelinux/",
"cp /usr/share/syslinux/ldlinux.c32 /var/lib/tftpboot/pxelinux/",
"mkdir -p /var/lib/tftpboot/pxelinux/pxelinux.cfg/",
"systemctl status tftp Active: active (running)",
"option space pxelinux; option pxelinux.magic code 208 = string; option pxelinux.configfile code 209 = text; option pxelinux.pathprefix code 210 = text; option pxelinux.reboottime code 211 = unsigned integer 32; option architecture-type code 93 = unsigned integer 16; subnet 192.168.205.0 netmask 255.255.255.0 { option routers 192.168.205.1; range 192.168.205.10 192.168.205.25; class \"pxeclients\" { match if substring (option vendor-class-identifier, 0, 9) = \"PXEClient\"; next-server 192.168.205.1; if option architecture-type = 00:07 { filename \"BOOTX64.efi\"; } else { filename \"pxelinux/pxelinux.0\"; } } }",
"systemctl enable --now dhcpd.service",
"systemctl status dhcpd.service Active: active (running)",
"dnf install @Base kernel dracut-network nfs-utils --installroot= exported-root-directory --releasever=/",
"rsync -a -e ssh --exclude='/proc/' --exclude='/sys/' example.com :/ exported-root-directory",
"cp / exported-root-directory /boot/vmlinuz-kernel-version /var/lib/tftpboot/pxelinux/",
"dracut --add nfs initramfs-kernel-version.img kernel-version",
"dracut --add nfs /exports/root/boot/initramfs-5.14.0-202.el9.x86_64.img 5.14.0-202.el9.x86_64",
"dracut -f --add nfs \"boot/initramfs-USD(uname -r).img\" \"USD(uname -r)\"",
"chmod 0644 / exported-root-directory /boot/initramfs- kernel-version .img",
"cp / exported-root-directory /boot/initramfs- kernel-version .img /var/lib/tftpboot/pxelinux/",
"default rhel9 label rhel9 kernel vmlinuz- kernel-version append initrd=initramfs- kernel-version .img root=nfs:_server-ip_:/ exported-root-directory rw",
"default rhel9 label rhel9 kernel vmlinuz- kernel-version append initrd=initramfs- kernel-version .img root=nfs: server-ip :/ exported-root-directory ro",
"systemctl restart nfs-server.service",
"chroot /exported/root/directory /bin/bash",
"passwd <username>",
"dnf install <package> --installroot= /exported/root/directory --releasever=/ --config /etc/dnf/dnf.conf --setopt=reposdir=/etc/yum.repos.d/",
"systemctl status dhcpd.service",
"systemctl enable dhcpd.service systemctl start dhcpd.service",
"firewall-cmd --get-active-zones firewall-cmd --info-zone=public",
"firewall-cmd --add-service=dhcp --permanent",
"firewall-cmd --get-active-zones firewall-cmd --info-zone=public",
"firewall-cmd --add-service=nfs --permanent",
"chmod 644 pxelinux.0",
"systemctl status nfs.service",
"systemctl start nfs.service systemctl enable nfs.service"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_storage_devices/setting-up-a-remote-diskless-system_managing-storage-devices
|
13.19. The Configuration Menu and Progress Screen
|
13.19. The Configuration Menu and Progress Screen Once you click Begin Installation at the Installation Summary screen, the progress screen appears. Red Hat Enterprise Linux reports the installation progress on the screen as it writes the selected packages to your system. Figure 13.37. Installing Packages For your reference, a complete log of your installation can be found in the /var/log/anaconda/anaconda.packaging.log file, once you reboot your system. If you chose to encrypt one or more partitions during partitioning setup, a dialog window with a progress bar will be displayed during the early stage of the installation process. This window informs that the installer is attempting to gather enough entropy (random data) to ensure that the encryption is secure. This window will disappear after 256 bits of entropy are gathered, or after 10 minutes. You can speed up the gathering process by moving your mouse or randomly typing on the keyboard. After the window disappears, the installation process will continue. Figure 13.38. Gathering Entropy for Encryption While the packages are being installed, more configuration is required. Above the installation progress bar are the Root Password and User Creation menu items. The Root Password screen is used to configure the system's root account. This account can be used to perform critical system management and administration tasks. The same tasks can also be performed with a user account with the wheel group membership; if such an user account is created during installation, setting up a root password is not mandatory. Creating a user account is optional and can be done after installation, but it is recommended to do it on this screen. A user account is used for normal work and to access the system. Best practice suggests that you always access the system through a user account, not the root account. It is possible to disable access to the Root Password or Create User screens. To do so, use a Kickstart file which includes the rootpw --lock or user --lock commands. See Section 27.3.1, "Kickstart Commands and Options" for more information these commands. 13.19.1. Set the Root Password Setting up a root account and password is an important step during your installation. The root account (also known as the superuser) is used to install packages, upgrade RPM packages, and perform most system maintenance. The root account gives you complete control over your system. For this reason, the root account is best used only to perform system maintenance or administration. See the Red Hat Enterprise Linux 7 System Administrator's Guide for more information about becoming root. Figure 13.39. Root Password Screen Note You must always set up at least one way to gain root privileges to the installed system: either using a root account, or by creating a user account with administrative privileges (member of the wheel group), or both. Click the Root Password menu item and enter your new password into the Root Password field. Red Hat Enterprise Linux displays the characters as asterisks for security. Type the same password into the Confirm field to ensure it is set correctly. After you set the root password, click Done to return to the User Settings screen. 
The following are the requirements and recommendations for creating a strong root password: must be at least eight characters long may contain numbers, letters (upper and lower case) and symbols is case-sensitive and should contain a mix of cases something you can remember but that is not easily guessed should not be a word, abbreviation, or number associated with you, your organization, or found in a dictionary (including foreign languages) should not be written down; if you must write it down keep it secure Note To change your root password after you have completed the installation, run the passwd command as root . If you forget the root password, see Section 32.1.3, "Resetting the Root Password" for instructions on how to use the rescue mode to set a new one. 13.19.2. Create a User Account To create a regular (non-root) user account during the installation, click User Settings on the progress screen. The Create User screen appears, allowing you to set up the regular user account and configure its parameters. Though recommended to do during installation, this step is optional and can be performed after the installation is complete. Note You must always set up at least one way to gain root privileges to the installed system: either using a root account, or by creating a user account with administrative privileges (member of the wheel group), or both. To leave the user creation screen after you have entered it, without creating a user, leave all the fields empty and click Done . Figure 13.40. User Account Configuration Screen Enter the full name and the user name in their respective fields. Note that the system user name must be shorter than 32 characters and cannot contain spaces. It is highly recommended to set up a password for the new account. When setting up a strong password even for a non-root user, follow the guidelines described in Section 13.19.1, "Set the Root Password" . Click the Advanced button to open a new dialog with additional settings. Figure 13.41. Advanced User Account Configuration By default, each user gets a home directory corresponding to their user name. In most scenarios, there is no need to change this setting. You can also manually define a system identification number for the new user and their default group by selecting the check boxes. The range for regular user IDs starts at the number 1000 . At the bottom of the dialog, you can enter the comma-separated list of additional groups, to which the new user shall belong. The new groups will be created in the system. To customize group IDs, specify the numbers in parenthesis. Note Consider setting IDs of regular users and their default groups at range starting at 5000 instead of 1000 . That is because the range reserved for system users and groups, 0 - 999 , might increase in the future and thus overlap with IDs of regular users. For creating users with custom IDs using kickstart, see user (optional) . For changing the minimum UID and GID limits after the installation, which ensures that your chosen UID and GID ranges are applied automatically on user creation, see the Users and Groups chapter of the System Administrator's Guide . Once you have customized the user account, click Save Changes to return to the User Settings screen.
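For unattended installations, the same account choices can be expressed in a Kickstart file. The following sketch combines the options referenced in this section (a locked root account, an administrative user in the wheel group, and IDs starting at 5000); the user name and password are placeholders:
rootpw --lock
user --name=sysadmin --groups=wheel --uid=5000 --gid=5000 --password=<password> --plaintext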
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-configuration-progress-menu-ppc
|
Chapter 31. Networking
|
Chapter 31. Networking SNMP response is no longer timed out Previously, all the Simple Network Management Protocol version 1 (SNMPv1) and SNMPv2c responses that followed an SNMPv3 message were checked against the last recorded SNMPv3 max message size property. As a consequence, an SNMPv3 request with a small max message size could lead to SNMPv1 and SNMPv2c bulk requests timing out. With this update, the session maximum message size is checked only for SNMPv3 requests, and the SNMPv1 and SNMPv2c response is no longer timed out. (BZ# 1324306 ) ICMP redirects no longer cause kernel to crash Previously, a socket failed to be locked between user space and the process of Internet Control Message Protocol (ICMP) redirect packets, creating a race condition. As a consequence, kernel terminated unexpectedly. The bug has been fixed by skipping the process of ICMP redirect packets when the socket is locked by user space and now the described problem no longer occurs. (BZ#1387485) The net.ipv4.ip_nonlocal_bind kernel parameter is set in name spaces Previously, using a floating IP address inside a network name space in some cases failed with the following error message: With this update, the kernel respects setting of the net.ipv4.ip_nonlocal_bind parameter to 1 in name spaces, and the floating IP address is now assigned as expected. (BZ#1363661) The netfilter REJECT rule now works on SCTP packets Previously, the conntrack tool did not check the CRC32c value for Stream Control Transmission Protocol (SCTP) packets. As a consequence, the netfilter REJECT rule was not applied as expected on SCTP packets. The bug has been fixed by setting CHECKSUM_UNNECESSARY on SCTP packets which have valid CRC32c . As a result, the netfilter REJECT is allowed to generate an Internet Control Message Protocol (ICMP) response. (BZ#1353218) NetworkManager no longer duplicates a connection with already-set DHCP_HOSTNAME Previously, after a restart of the NetworkManager service, a connection with an already-set DHCP_HOSTNAME property was duplicated. Consequently, a DHCP lease was not always renewed upon its expiry. With this update, the connection is no longer duplicated, and a DHCP lease is correctly renewed in this scenario. Note that the fix includes ignoring the already-set hostname properties in the matching process. To avoid possible problems, remove all unused connections with an incorrect ipv4.dhcp-hostname . For more information, see https://access.redhat.com/articles/2948041 . (BZ# 1393997 ) Improved SCTP congestion_window management Previously, small data chunks caused the Stream Control Transmission Protocol (SCTP) to account the receiver_window (rwnd) values incorrectly when recovering from a zero-window situation . As a consequence, window updates were not sent to the peer, and an artificial growth of rwnd could lead to packet drops. This update properly accounts such small data chunks and ignores the rwnd pressure values when reopening a window. As a result, window updates are now sent, and the announced rwnd reflects better the real state of the receive buffer. (BZ#1084802) Value of DCTCP alpha now drops to 0 and cwnd remains at values more than 137 Previously, the alpha value of Datacenter TCP (DCTCP) was shifted before subtraction, causing precision loss. As a consequence, the real alpha value did not fall below 15 and uncongested flows eventually dropped to a congestion_window ( cwnd ) value of 137. This bug has been fixed by canceling the shift operation when alpha is low. 
As a result, alpha drops to 0 and cwnd remains at values greater than 137 for uncongested flows. (BZ#1370638) ss now displays cwnd correctly Previously, the ss utility displayed Transmission Control Protocol congestion window (TCP cwnd) values from the kernel, performing a cast from an unsigned to a signed 32-bit integer. As a consequence, some values could overflow and be interpreted as negative values. With this update, the ss code has been fixed, and the utility no longer displays negative cwnd values. (BZ#1375215) Value of cwnd no longer increases when using DCTCP Previously, the congestion_window ( cwnd ) increased unexpectedly after a packet loss. As a consequence, the Data Center TCP (DCTCP) congestion control module became ineffective in avoiding congestion, because repeated problems occurred on the same flow. With this update, the cwnd value is saved on loss and the old one is restored on recovery. As a result, cwnd remains stable. (BZ#1386923) Negated range matches have been fixed Previously, using a range of values in a negated match would never evaluate as true. With this update, such matches work as expected. For example: now correctly drops packets smaller than 100 bytes or larger than 200 bytes. (BZ#1418967) The nmcli connection show command now displays the correct output for both empty and NULL values Previously, the output of the nmcli connection show command did not display empty and NULL values consistently among different properties. As a consequence, empty values were displayed as -- or without a value. With this update, the output of the nmcli connection show command displays -- for both empty and NULL values in normal or pretty modes. Note that in terse mode, values are printed only in their raw form and the empty and NULL values are not printed at all. (BZ#1391170) snmpd no longer rejects large packets from AgentX subagents Previously, the SNMP daemon (snmpd) limited the size of packets sent from AgentX subagents to 1472 bytes. This caused snmpd to refuse large packets from AgentX subagents. The packet size limit has been increased to 65535 bytes. As a result, snmpd no longer rejects large packets from AgentX subagents. (BZ#1286693) Macvlan can now be unregistered correctly Previously, attempts to unregister the Macvlan driver failed with broken sysfs links from or to devices in another namespace. With this update, the underlying bug in Macvlan has been fixed, and the driver can now be unregistered correctly. (BZ#1412898)
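As a rough illustration of the namespace-aware net.ipv4.ip_nonlocal_bind behavior fixed above, the parameter can be set and read back inside a network name space with commands along the following lines; the name space name example-ns is a placeholder:
ip netns add example-ns
ip netns exec example-ns sysctl -w net.ipv4.ip_nonlocal_bind=1
ip netns exec example-ns sysctl net.ipv4.ip_nonlocal_bind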
|
[
"bind: Cannot assign requested address.",
"nft add rule ip ip_table filter_chain_input ip length != 100-200 drop"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/bug_fixes_networking
|
Chapter 5. Adding TLS Certificates to the Red Hat Quay Container
|
Chapter 5. Adding TLS Certificates to the Red Hat Quay Container To add custom TLS certificates to Red Hat Quay, create a new directory named extra_ca_certs/ beneath the Red Hat Quay config directory. Copy any required site-specific TLS certificates to this new directory. 5.1. Add TLS certificates to Red Hat Quay View the certificate to be added to the container Create a certs directory and copy the certificate there Obtain the Quay container's CONTAINER ID with podman ps : Restart the container with that ID: Examine the certificate copied into the container namespace: 5.2. Adding custom SSL/TLS certificates when Red Hat Quay is deployed on Kubernetes When deployed on Kubernetes, Red Hat Quay mounts in a secret as a volume to store config assets. Currently, this breaks the upload certificate function of the superuser panel. As a temporary workaround, base64 encoded certificates can be added to the secret after Red Hat Quay has been deployed. Use the following procedure to add custom SSL/TLS certificates when Red Hat Quay is deployed on Kubernetes. Prerequisites Red Hat Quay has been deployed. You have a custom ca.crt file. Procedure Base64 encode the contents of an SSL/TLS certificate by entering the following command: USD cat ca.crt | base64 -w 0 Example output ...c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= Enter the following kubectl command to edit the quay-enterprise-config-secret file: USD kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret Add an entry for the certificate and paste the full base64 encoded string under the entry. For example: custom-cert.crt: c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= Use the kubectl delete command to remove all Red Hat Quay pods. For example: USD kubectl delete pod quay-operator.v3.7.1-6f9d859bd-p5ftc quayregistry-clair-postgres-7487f5bd86-xnxpr quayregistry-quay-app-upgrade-xq2v6 quayregistry-quay-database-859d5445ff-cqthr quayregistry-quay-redis-84f888776f-hhgms Afterwards, the Red Hat Quay deployment automatically schedules replacement pods with the new certificate data.
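As a quick sanity check after editing the secret, the new entry can be read back and decoded; this is only a sketch that assumes the custom-cert.crt key and the quay-enterprise namespace from the example above, and it should print the original PEM certificate if the entry was added correctly:
kubectl --namespace quay-enterprise get secret quay-enterprise-config-secret -o jsonpath='{.data.custom-cert\.crt}' | base64 -d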
|
[
"cat storage.crt -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV [...] -----END CERTIFICATE-----",
"mkdir -p quay/config/extra_ca_certs cp storage.crt quay/config/extra_ca_certs/ tree quay/config/ ├── config.yaml ├── extra_ca_certs │ ├── storage.crt",
"sudo podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS 5a3e82c4a75f <registry>/<repo>/quay:v3.12.8 \"/sbin/my_init\" 24 hours ago Up 18 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/tcp grave_keller",
"sudo podman restart 5a3e82c4a75f",
"sudo podman exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV",
"cat ca.crt | base64 -w 0",
"...c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=",
"kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret",
"custom-cert.crt: c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=",
"kubectl delete pod quay-operator.v3.7.1-6f9d859bd-p5ftc quayregistry-clair-postgres-7487f5bd86-xnxpr quayregistry-quay-app-upgrade-xq2v6 quayregistry-quay-database-859d5445ff-cqthr quayregistry-quay-redis-84f888776f-hhgms"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/manage_red_hat_quay/config-custom-ssl-certs-manual
|
Chapter 2. Failover, load-balancing, and high-availability in IdM
|
Chapter 2. Failover, load-balancing, and high-availability in IdM Identity Management (IdM) has built-in failover mechanisms for IdM clients, and load-balancing and high-availability features for IdM servers. Client-side failover capability By default, the SSSD service on an IdM client is configured to use DNS service (SRV) resource records so that the client can automatically determine the best IdM server to connect to. Primary and backup server configuration The server resolution behavior is controlled by the _srv_ option in the ipa_server parameter of the /etc/sssd/sssd.conf file: Example /etc/sssd/sssd.conf With the _srv_ option specified, SSSD retrieves a list of IdM servers ordered by preference. If a primary server goes offline, the SSSD service on the IdM client automatically connects to another available IdM server. Primary servers are specified in the ipa_server parameter. SSSD attempts to connect to primary servers first and switches to backup servers only if no primary servers are available. The _srv_ option is not supported for backup servers. Note SSSD queries SRV records from the DNS server. By default, SSSD waits for 6 seconds for a reply from the DNS resolver before attempting to query another DNS server. If all DNS servers are unreachable, the domain will continue to operate in offline mode. You can use the dns_resolver_timeout option to increase the time the client waits for a reply from the DNS resolver. If you prefer to bypass DNS lookups for performance reasons, remove the _srv_ entry from the ipa_server parameter and specify which IdM servers the client should connect to, in order of preference: Example /etc/sssd/sssd.conf Failover behavior for IdM servers and services The SSSD failover mechanism treats an IdM server and its services independently. If the hostname resolution for a server succeeds, SSSD considers the machine online and tries to connect to the required service on that machine. If the connection to the service fails, SSSD considers only that specific service as offline, not the entire machine or other services on it. If hostname resolution fails, SSSD considers the entire machine as offline, and does not attempt to connect to any services on that machine. When all primary servers are unavailable, SSSD attempts to connect to a configured backup server. While connected to a backup server, SSSD periodically attempts to reconnect to one of the primary servers and connects immediately once a primary server becomes available. The interval between these attempts is controlled by the failover_primary_timeout option, which defaults to 31 seconds. If all IdM servers become unreachable, SSSD switches to offline mode. In this state, SSSD retries connections every 30 seconds until a server becomes available. Server-side load-balancing and service availability You can achieve load-balancing and high-availability in IdM by installing multiple IdM replicas: If you have a geographically dispersed network, you can shorten the path between IdM clients and the nearest accessible server by configuring multiple IdM replicas per data center. Red Hat supports environments with up to 60 replicas. The IdM replication mechanism provides active/active service availability: services at all IdM replicas are readily available at the same time. Note Red Hat recommends against combining IdM and other load-balancing or high-availability (HA) software. 
Many third-party high availability solutions assume active/passive scenarios and cause unnecessary interruptions to IdM service availability. Other solutions use virtual IPs or a single hostname per clustered service. These methods typically do not work well with the type of service availability provided by the IdM solution. They also integrate very poorly with Kerberos, decreasing the overall security and stability of the deployment. Additional resources sssd.conf(5) man pages on your system
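As an illustration of the tuning options mentioned above, the following sssd.conf fragment combines SRV-based discovery, an explicit backup server, and adjusted timeouts; the timeout values are arbitrary examples, not recommendations:
[domain/example.com]
id_provider = ipa
ipa_server = _srv_, server1.example.com
ipa_backup_server = backup1.example.com
# Wait up to 10 seconds for a reply from the DNS resolver (default is 6)
dns_resolver_timeout = 10
# While on a backup server, retry the primary servers every 60 seconds (default is 31)
failover_primary_timeout = 60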
|
[
"[domain/example.com] id_provider = ipa ipa_server = _srv_ , server1.example.com, server2.example.com ipa_backup_server = backup1.example.com, backup2.example.com",
"[domain/example.com] id_provider = ipa ipa_server = server1.example.com, server2.example.com ipa_backup_server = backup1.example.com, backup2.example.com"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/planning_identity_management/failover-load-balancing-high-availability_planning-identity-management
|
Chapter 33. Editing or deleting columns in guided decision tables
|
Chapter 33. Editing or deleting columns in guided decision tables You can edit or delete the columns you have created at any time in the guided decision tables designer. Procedure In the guided decision tables designer, click Columns . Expand the appropriate section and click Edit or Delete next to the column name. Figure 33.1. Edit or delete columns Note A condition column cannot be deleted if an existing action column uses the same pattern-matching parameters as the condition column. After any column changes, click Finish in the wizard to save.
| null |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/guided-decision-tables-columns-edit-proc
|
4.3.3. Importing and running the converted virtual machine
|
4.3.3. Importing and running the converted virtual machine On successful completion, virt-v2v will upload the exported virtual machine to the specified export domain. To import and run the converted virtual machine: Procedure 4.7. Importing and running the converted virtual machine In the Red Hat Enterprise Virtualization Administration Portal, select the export domain from the Storage tab. The export domain must have a status of Active . Select the VM Import tab in the details pane to list the available virtual machines to import. Select one or more virtual machines and click Import . The Import Virtual Machine(s) window will open. In the drop-down menus, select the Default Storage Domain , Cluster , and Cluster Quota in the data center. Select the Collapse Snapshots check box to remove snapshot restore points and include templates in template-based virtual machines. Click OK to import the virtual machines. For more information on importing virtual machines, refer to the Red Hat Enterprise Virtualization Administration Guide . Network configuration virt-v2v cannot currently reconfigure a guest's network configuration. If the converted guest is not connected to the same subnet as the source, the guest's network configuration may have to be updated manually.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sect-RHEV-Importing_and_Running_the_Converted_Virtual_Machine
|
probe::ioscheduler_trace.elv_issue_request
|
probe::ioscheduler_trace.elv_issue_request Name probe::ioscheduler_trace.elv_issue_request - Fires when a request is scheduled. Synopsis Values disk_major Disk major number of request. rq Address of request. name Name of the probe point elevator_name The type of I/O elevator currently enabled. disk_minor Disk minor number of request. rq_flags Request flags. Description Fires when a request is scheduled.
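A minimal SystemTap invocation that exercises this probe point might look like the following; the output format is illustrative and simply prints the documented context variables for each issued request:
stap -e 'probe ioscheduler_trace.elv_issue_request { printf("%s: dev %d:%d elevator=%s flags=%d\n", name, disk_major, disk_minor, elevator_name, rq_flags) }'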
|
[
"ioscheduler_trace.elv_issue_request"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-ioscheduler-trace-elv-issue-request
|
4.5. Python Pretty-Printers
|
4.5. Python Pretty-Printers The GDB command print outputs comprehensive debugging information for a target application. GDB aims to provide as much debugging data as it can to users; however, this means that for highly complex programs the amount of data can become very cryptic. In addition, GDB does not provide any tools that help decipher GDB print output. GDB does not even empower users to easily create tools that can help decipher program data. This makes the practice of reading and understanding debugging data quite arcane, particularly for large, complex projects. For most developers, the only way to customize GDB print output (and make it more meaningful) is to revise and recompile GDB. However, very few developers can actually do this. Further, this practice will not scale well, particularly if the developer must also debug other programs that are heterogeneous and contain equally complex debugging data. To address this, the Red Hat Enterprise Linux 6 version of GDB is now compatible with Python pretty-printers . This allows the retrieval of more meaningful debugging data by leaving the introspection, printing, and formatting logic to a third-party Python script. Compatibility with Python pretty-printers gives you the chance to truly customize GDB output as you see fit. This makes GDB a more viable debugging solution to a wider range of projects, since you now have the flexibility to adapt GDB output as required, and with greater ease. Further, developers with intimate knowledge of a project and a specific programming language are best qualified in deciding what kind of output is meaningful, allowing them to improve the usefulness of that output. The Python pretty-printers implementation allows users to automatically inspect, format, and print program data according to specification. These specifications are written as rules implemented via Python scripts. This offers the following benefits: Safe To pass program data to a set of registered Python pretty-printers, the GDB development team added hooks to the GDB printing code. These hooks were implemented with safety in mind: the built-in GDB printing code is still intact, allowing it to serve as a default fallback printing logic. As such, if no specialized printers are available, GDB will still print debugging data the way it always did. This ensures that GDB is backwards-compatible; users who do not require pretty-printers can still continue using GDB. Highly Customizable This new "Python-scripted" approach allows users to distill as much knowledge as required into specific printers. As such, a project can have an entire library of printer scripts that parses program data in a unique manner specific to its user's requirements. There is no limit to the number of printers a user can build for a specific project; what's more, being able to customize debugging data script by script offers users an easier way to re-use and re-purpose printer scripts - or even a whole library of them. Easy to Learn The best part about this approach is its lower barrier to entry. Python scripting is comparatively easy to learn and has a large library of free documentation available online. In addition, most programmers already have basic to intermediate experience in Python scripting, or in scripting in general. Here is a small example of a pretty printer. 
Consider the following C++ program: fruit.cc enum Fruits {Orange, Apple, Banana}; class Fruit { int fruit; public: Fruit (int f) { fruit = f; } }; int main() { Fruit myFruit(Apple); return 0; // line 17 } This is compiled with the command g++ -g fruit.cc -o fruit . Now, examine this program with GDB. gdb ./fruit [...] (gdb) break 17 Breakpoint 1 at 0x40056d: file fruit.cc, line 17. (gdb) run Breakpoint 1, main () at fruit.cc:17 17 return 0; // line 17 (gdb) print myFruit USD1 = {fruit = 1} The output of {fruit = 1} is correct because that is the internal representation of 'fruit' in the data structure 'Fruit'. However, this is not easily read by humans as it is difficult to tell which fruit the integer 1 represents. To solve this problem, write the following pretty printer: fruit.py class FruitPrinter: def __init__(self, val): self.val = val def to_string (self): fruit = self.val['fruit'] if (fruit == 0): name = "Orange" elif (fruit == 1): name = "Apple" elif (fruit == 2): name = "Banana" else: name = "unknown" return "Our fruit is " + name def lookup_type (val): if str(val.type) == 'Fruit': return FruitPrinter(val) return None gdb.pretty_printers.append (lookup_type) Examine this printer from the bottom up. The line gdb.pretty_printers.append (lookup_type) adds the function lookup_type to GDB's list of printer lookup functions. The function lookup_type is responsible for examining the type of object to be printed, and returning an appropriate pretty printer. The object is passed by GDB in the parameter val . val.type is an attribute that represents the type of the value to be printed. FruitPrinter is where the actual work is done, more specifically in the to_string function of that class. In this function, the integer fruit is retrieved using the Python dictionary syntax self.val['fruit'] . Then the name is determined using that value. The string returned by this function is the string that will be printed to the user. After creating fruit.py , load it into GDB with the following command: The GDB and Python Pretty-Printers whitepaper provides more details on this feature. This whitepaper also includes details and examples on how to write your own Python pretty-printer as well as how to import it into GDB. See the following link for more information: http://sourceware.org/gdb/onlinedocs/gdb/Pretty-Printing.html
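Assuming the script loads cleanly, printing the same variable again should now go through the new printer; the session below is an illustrative sketch of what to expect:
(gdb) python execfile("fruit.py")
(gdb) print myFruit
USD1 = Our fruit is Apple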
|
[
"enum Fruits {Orange, Apple, Banana}; class Fruit { int fruit; public: Fruit (int f) { fruit = f; } }; int main() { Fruit myFruit(Apple); return 0; // line 17 }",
"gdb ./fruit [...] (gdb) break 17 Breakpoint 1 at 0x40056d: file fruit.cc, line 17. (gdb) run Breakpoint 1, main () at fruit.cc:17 17 return 0; // line 17 (gdb) print myFruit USD1 = {fruit = 1}",
"fruit.py class FruitPrinter: def __init__(self, val): self.val = val def to_string (self): fruit = self.val['fruit'] if (fruit == 0): name = \"Orange\" elif (fruit == 1): name = \"Apple\" elif (fruit == 2): name = \"Banana\" else: name = \"unknown\" return \"Our fruit is \" + name def lookup_type (val): if str(val.type) == 'Fruit': return FruitPrinter(val) return None gdb.pretty_printers.append (lookup_type)",
"(gdb) python execfile(\"fruit.py\")"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/debuggingprettyprinters
|
function::proc_mem_size
|
function::proc_mem_size Name function::proc_mem_size - Total program virtual memory size in pages Synopsis Arguments None Description Returns the total virtual memory size in pages of the current process, or zero when there is no current process or the number of pages couldn't be retrieved.
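As a usage sketch, the following one-liner samples the function from a periodic timer probe and prints the virtual memory size of whichever process is current at that moment; the 5-second interval is arbitrary:
stap -e 'probe timer.s(5) { printf("%s: %d pages\n", execname(), proc_mem_size()) }'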
|
[
"proc_mem_size:long()"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-proc-mem-size
|
Part III. Post-installation tasks
|
Part III. Post-installation tasks Managing subscriptions and securing a Red Hat Enterprise Linux (RHEL) system are essential steps for maintaining system compliance and functionality. Registering RHEL ensures access to software updates and services. Additionally, setting a system purpose aligns the system's usage with the appropriate subscriptions, while adjusting security settings helps safeguard critical infrastructure. When needed, subscription services can be updated or changed to meet evolving system requirements.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_from_installation_media/post-installation-tasks
|
25.19. Modifying Link Loss Behavior
|
25.19. Modifying Link Loss Behavior This section describes how to modify the link loss behavior of devices that use either Fibre Channel or iSCSI protocols. 25.19.1. Fibre Channel If a driver implements the Transport dev_loss_tmo callback, access attempts to a device through a link will be blocked when a transport problem is detected. To verify if a device is blocked, run the following command: This command will return blocked if the device is blocked. If the device is operating normally, this command will return running . Procedure 25.15. Determining the State of a Remote Port To determine the state of a remote port, run the following command: This command will return Blocked when the remote port (along with devices accessed through it) is blocked. If the remote port is operating normally, the command will return Online . If the problem is not resolved within dev_loss_tmo seconds, the rport and devices will be unblocked and all I/O running on that device (along with any new I/O sent to that device) will be failed. Procedure 25.16. Changing dev_loss_tmo To change the dev_loss_tmo value, echo the desired value into the file. For example, to set dev_loss_tmo to 30 seconds, run: For more information about dev_loss_tmo , refer to Section 25.4.1, "Fibre Channel API" . When a link loss exceeds dev_loss_tmo , the scsi_device and sd N devices are removed. Typically, the Fibre Channel class will leave the device as is; i.e. /dev/sd x will remain /dev/sd x . This is because the target binding is saved by the Fibre Channel driver so when the target port returns, the SCSI addresses are recreated faithfully. However, this cannot be guaranteed; the sd x device will be restored only if no additional changes are made to the LUN configuration inside the storage box. 25.19.2. iSCSI Settings with dm-multipath If dm-multipath is implemented, it is advisable to set iSCSI timers to immediately defer commands to the multipath layer. To configure this, nest the following line under device { in /etc/multipath.conf : This ensures that I/O errors are retried and queued if all paths are failed in the dm-multipath layer. You may need to adjust iSCSI timers further to better monitor your SAN for problems. Available iSCSI timers you can configure are NOP-Out Interval/Timeouts and replacement_timeout, which are discussed in the following sections. 25.19.2.1. NOP-Out Interval/Timeout To help monitor problems with the SAN, the iSCSI layer sends a NOP-Out request to each target. If a NOP-Out request times out, the iSCSI layer responds by failing any running commands and instructing the SCSI layer to requeue those commands when possible. When dm-multipath is being used, the SCSI layer will fail those running commands and defer them to the multipath layer. The multipath layer then retries those commands on another path. If dm-multipath is not being used, those commands are retried five times before failing altogether. Intervals between NOP-Out requests are 5 seconds by default. To adjust this, open /etc/iscsi/iscsid.conf and edit the following line: Once set, the iSCSI layer will send a NOP-Out request to each target every [interval value] seconds. By default, NOP-Out requests time out in 5 seconds [9] . To adjust this, open /etc/iscsi/iscsid.conf and edit the following line: This sets the iSCSI layer to time out a NOP-Out request after [timeout value] seconds. SCSI Error Handler If the SCSI Error Handler is running, running commands on a path will not be failed immediately when a NOP-Out request times out on that path. 
Instead, those commands will be failed after replacement_timeout seconds. For more information about replacement_timeout , refer to Section 25.19.2.2, " replacement_timeout " . To verify if the SCSI Error Handler is running, run: 25.19.2.2. replacement_timeout replacement_timeout controls how long the iSCSI layer should wait for a timed-out path/session to reestablish itself before failing any commands on it. The default replacement_timeout value is 120 seconds. To adjust replacement_timeout , open /etc/iscsi/iscsid.conf and edit the following line: The 1 queue_if_no_path option in /etc/multipath.conf sets iSCSI timers to immediately defer commands to the multipath layer (refer to Section 25.19.2, "iSCSI Settings with dm-multipath " ). This setting prevents I/O errors from propagating to the application; because of this, you can set replacement_timeout to 15-20 seconds. By configuring a lower replacement_timeout , I/O is quickly sent to a new path and executed (in the event of a NOP-Out timeout) while the iSCSI layer attempts to re-establish the failed path/session. If all paths time out, then the multipath and device mapper layer will internally queue I/O based on the settings in /etc/multipath.conf instead of /etc/iscsi/iscsid.conf . Important Whether your considerations are failover speed or security, the recommended value for replacement_timeout will depend on other factors. These factors include the network, target, and system workload. As such, it is recommended that you thoroughly test any new replacement_timeout configuration before applying it to a mission-critical system. iSCSI and DM Multipath overrides The recovery_tmo sysfs option controls the timeout for a particular iSCSI device. The following options globally override recovery_tmo values: The replacement_timeout configuration option globally overrides the recovery_tmo value for all iSCSI devices. For all iSCSI devices that are managed by DM Multipath, the fast_io_fail_tmo option in DM Multipath globally overrides the recovery_tmo value. The fast_io_fail_tmo option in DM Multipath also overrides the fast_io_fail_tmo option in Fibre Channel devices. The DM Multipath fast_io_fail_tmo option takes precedence over replacement_timeout . Red Hat does not recommend using replacement_timeout to override recovery_tmo in devices managed by DM Multipath because DM Multipath always resets recovery_tmo when the multipathd service reloads. 25.19.3. iSCSI Root When accessing the root partition directly through an iSCSI disk, the iSCSI timers should be set so that the iSCSI layer has several chances to try to reestablish a path/session. In addition, commands should not be quickly re-queued to the SCSI layer. This is the opposite of what should be done when dm-multipath is implemented. To start with, NOP-Outs should be disabled. You can do this by setting both NOP-Out interval and timeout to zero. To set this, open /etc/iscsi/iscsid.conf and edit as follows: In line with this, replacement_timeout should be set to a high number. This will instruct the system to wait a long time for a path/session to reestablish itself. To adjust replacement_timeout , open /etc/iscsi/iscsid.conf and edit the following line: After configuring /etc/iscsi/iscsid.conf , you must perform a re-discovery of the affected storage. This will allow the system to load and use any new values in /etc/iscsi/iscsid.conf . For more information on how to discover iSCSI devices, refer to Section 25.15, "Scanning iSCSI Interconnects" . 
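Putting the iSCSI root recommendations together, the relevant /etc/iscsi/iscsid.conf lines might look like the following sketch; the replacement_timeout value of 86400 is only an example of a "high number" and should be chosen for your environment:
# NOP-Outs disabled for root-on-iSCSI
node.conn[0].timeo.noop_out_interval = 0
node.conn[0].timeo.noop_out_timeout = 0
# Wait a long time for the path/session to reestablish itself (example value)
node.session.timeo.replacement_timeout = 86400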
Configuring Timeouts for a Specific Session You can also configure timeouts for a specific session and make them non-persistent (instead of using /etc/iscsi/iscsid.conf ). To do so, run the following command (replace the variables accordingly): Important The configuration described here is recommended for iSCSI sessions involving root partition access. For iSCSI sessions involving access to other types of storage (namely, in systems that use dm-multipath ), refer to Section 25.19.2, "iSCSI Settings with dm-multipath " . [9] Prior to Red Hat Enterprise Linux 5.4, the default NOP-Out requests time out was 15 seconds.
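For instance, a session-specific, non-persistent override of replacement_timeout as described under "Configuring Timeouts for a Specific Session" above could be applied as shown below; the target name, portal address, and timeout value are placeholders:
iscsiadm -m node -T iqn.2017-01.com.example:storage.disk1 -p 192.0.2.10:3260 -o update -n node.session.timeo.replacement_timeout -v 86400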
|
[
"cat /sys/block/ device /device/state",
"cat /sys/class/fc_remote_port/rport- H : B : R /port_state",
"echo 30 > /sys/class/fc_remote_port/rport- H : B : R /dev_loss_tmo",
"features \"1 queue_if_no_path\"",
"node.conn[0].timeo.noop_out_interval = [interval value]",
"node.conn[0].timeo.noop_out_timeout = [timeout value]",
"iscsiadm -m session -P 3",
"node.session.timeo.replacement_timeout = [replacement_timeout]",
"node.conn[0].timeo.noop_out_interval = 0 node.conn[0].timeo.noop_out_timeout = 0",
"node.session.timeo.replacement_timeout = replacement_timeout",
"iscsiadm -m node -T target_name -p target_IP : port -o update -n node.session.timeo.replacement_timeout -v USD timeout_value"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/modifying-link-loss-behavior
|
Chapter 8. Creating infrastructure machine sets
|
Chapter 8. Creating infrastructure machine sets Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Red Hat OpenShift Service Mesh deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 8.1. OpenShift Container Platform infrastructure components Each self-managed Red Hat OpenShift subscription includes entitlements for OpenShift Container Platform and other OpenShift-related components. These entitlements are included for running OpenShift Container Platform control plane and infrastructure workloads and do not need to be accounted for during sizing. To qualify as an infrastructure node and use the included entitlement, only components that are supporting the cluster, and not part of an end-user application, can run on those instances. Examples include the following components: Kubernetes and OpenShift Container Platform control plane services The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Red Hat Quay Red Hat OpenShift Data Foundation Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Red Hat OpenShift Service Mesh Any node that runs any other container, pod, or component is a worker node that your subscription must cover. For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. To create an infrastructure node, you can use a machine set , label the node , or use a machine config pool . 8.2. Creating infrastructure machine sets for production environments In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. 
Red Hat OpenShift Service Mesh deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 8.2.1. Creating infrastructure machine sets for different clouds Use the sample compute machine set for your cloud. 8.2.1.1. Sample YAML for a compute machine set custom resource on Alibaba Cloud This sample YAML defines a compute machine set that runs in a specified Alibaba Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<zone> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 10 spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: alibabacloud-credentials imageId: <image_id> 11 instanceType: <instance_type> 12 kind: AlibabaCloudMachineProviderConfig ramRoleName: <infrastructure_id>-role-worker 13 regionId: <region> 14 resourceGroup: 15 id: <resource_group_id> type: ID securityGroups: - tags: 16 - Key: Name Value: <infrastructure_id>-sg-<role> type: Tags systemDisk: 17 category: cloud_essd size: <disk_size> tag: 18 - Key: kubernetes.io/cluster/<infrastructure_id> Value: owned userDataSecret: name: <user_data_secret> 19 vSwitch: tags: 20 - Key: Name Value: <infrastructure_id>-vswitch-<zone> type: Tags vpcId: "" zoneId: <zone> 21 taints: 22 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 5 7 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID, <infra> node label, and zone. 11 Specify the image to use. Use an image from an existing default compute machine set for the cluster. 12 Specify the instance type you want to use for the compute machine set. 13 Specify the name of the RAM role to use for the compute machine set. Use the value that the installer populates in the default compute machine set. 14 Specify the region to place machines on. 15 Specify the resource group and type for the cluster. 
You can use the value that the installer populates in the default compute machine set, or specify a different one. 16 18 20 Specify the tags to use for the compute machine set. Minimally, you must include the tags shown in this example, with appropriate values for your cluster. You can include additional tags, including the tags that the installer populates in the default compute machine set it creates, as needed. 17 Specify the type and size of the root disk. Use the category value that the installer populates in the default compute machine set it creates. If required, specify a different value in gigabytes for size . 19 Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installer populates in the default compute machine set. 21 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 22 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine set parameters for Alibaba Cloud usage statistics The default compute machine sets that the installer creates for Alibaba Cloud clusters include nonessential tag values that Alibaba Cloud uses internally to track usage statistics. These tags are populated in the securityGroups , tag , and vSwitch parameters of the spec.template.spec.providerSpec.value list. When creating compute machine sets to deploy additional machines, you must include the required Kubernetes tags. The usage statistics tags are applied by default, even if they are not specified in the compute machine sets you create. You can also include additional tags as needed. The following YAML snippets indicate which tags in the default compute machine sets are optional and which are required. Tags in spec.template.spec.providerSpec.value.securityGroups spec: template: spec: providerSpec: value: securityGroups: - tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2 Value: ocp - Key: Name Value: <infrastructure_id>-sg-<role> 3 type: Tags 1 2 Optional: This tag is applied even when not specified in the compute machine set. 3 Required. where: <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. <role> is the node label to add. Tags in spec.template.spec.providerSpec.value.tag spec: template: spec: providerSpec: value: tag: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp 2 3 Optional: This tag is applied even when not specified in the compute machine set. 1 Required. where <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. Tags in spec.template.spec.providerSpec.value.vSwitch spec: template: spec: providerSpec: value: vSwitch: tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp - Key: Name Value: <infrastructure_id>-vswitch-<zone> 4 type: Tags 1 2 3 Optional: This tag is applied even when not specified in the compute machine set. 4 Required. 
where: <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. <zone> is the zone within your region to place machines on. 8.2.1.2. Sample YAML for a compute machine set custom resource on AWS This sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: infra 6 machine.openshift.io/cluster-api-machine-type: infra 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: "" 9 providerSpec: value: ami: id: ami-046fe691f52a953f9 10 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 11 instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 12 region: <region> 13 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 14 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 15 tags: - name: kubernetes.io/cluster/<infrastructure_id> 16 value: owned - name: <custom_tag_name> 17 value: <custom_tag_value> 18 userDataSecret: name: worker-user-data taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 3 5 11 14 16 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID, infra role node label, and zone. 6 7 9 Specify the infra role node label. 10 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \ get machineset/<infrastructure_id>-<role>-<zone> 17 18 Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:[email protected] . Note Custom tags can also be specified during installation in the install-config.yml file. 
If the install-config.yml file and the machine set include a tag with the same name data, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yml file. 12 Specify the zone, for example, us-east-1a . 13 Specify the region, for example, us-east-1 . 15 Specify the infrastructure ID and zone. 19 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on AWS support non-guaranteed Spot Instances . You can save on costs by using Spot Instances at a lower price compared to On-Demand Instances on AWS. Configure Spot Instances by adding spotMarketOptions to the MachineSet YAML file. 8.2.1.3. Sample YAML for a compute machine set custom resource on Azure This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and infra is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: infra 2 machine.openshift.io/cluster-api-machine-type: infra name: <infrastructure_id>-infra-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg sshPrivateKey: "" sshPublicKey: "" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: "1" 8 taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 Specify the infra node label. 3 Specify the infrastructure ID, infra node label, and region. 4 Specify the image details for your compute machine set. If you want to use an Azure Marketplace image, see "Selecting an Azure Marketplace image". 5 Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 6 Specify the region to place machines on. 7 Optional: Specify custom tags in your machine set. Provide the tag name in <custom_tag_name> field and the corresponding tag value in <custom_tag_value> field. 8 Specify the zone within your region to place machines on. Ensure that your region supports the zone that you specify. Important If your region supports availability zones, you must specify the zone. Specifying the zone avoids volume node affinity failure when a pod requires a persistent volume attachment. To do this, you can create a compute machine set for each zone in the same region. 9 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on Azure support non-guaranteed Spot VMs . You can save on costs by using Spot VMs at a lower price compared to standard VMs on Azure. You can configure Spot VMs by adding spotVMOptions to the MachineSet YAML file. Additional resources Selecting an Azure Marketplace image 8.2.1.4. Sample YAML for a compute machine set custom resource on Azure Stack Hub This sample YAML defines a compute machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: "" sshPublicKey: "" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: "1" 22 1 5 7 14 16 17 18 21 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 3 8 9 11 19 20 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID, <infra> node label, and region. 12 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 15 Specify the region to place machines on. 13 Specify the availability set for the cluster. 22 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. Note Machine sets running on Azure Stack Hub do not support non-guaranteed Spot VMs. 8.2.1.5. 
Sample YAML for a compute machine set custom resource on IBM Cloud This sample YAML defines a compute machine set that runs in a specified IBM Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 5 7 The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 16 The <infra> node label. 4 6 10 The infrastructure ID, <infra> node label, and region. 11 The custom Red Hat Enterprise Linux CoreOS (RHCOS) image that was used for cluster installation. 12 The infrastructure ID and zone within your region to place machines on. Be sure that your region supports the zone that you specify. 13 Specify the IBM Cloud instance profile . 14 Specify the region to place machines on. 15 The resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID. 17 The VPC name. 18 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 19 The taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 8.2.1.6. Sample YAML for a compute machine set custom resource on GCP This sample YAML defines a compute machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . 
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 7 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 For <infra> , specify the <infra> node label. 3 Specify the path to the image that is used in current compute machine sets. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \ get machineset/<infrastructure_id>-worker-a To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-48-x86-64-202206140145 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-48-x86-64-202206140145 4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 5 For <project_name> , specify the name of the GCP project that you use for your cluster. 6 Specifies a single service account. Multiple service accounts are not supported. 7 Specify a taint to prevent user workloads from being scheduled on infra nodes. 
Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . Machine sets running on GCP support non-guaranteed preemptible VM instances . You can save on costs by using preemptible VM instances at a lower price compared to normal instances on GCP. You can configure preemptible VM instances by adding preemptible to the MachineSet YAML file. 8.2.1.7. Sample YAML for a compute machine set custom resource on Nutanix This sample YAML defines a Nutanix compute machine set that creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<zone> 4 namespace: openshift-machine-api annotations: 5 machine.openshift.io/memoryMb: "16384" machine.openshift.io/vCPU: "4" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 6 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 7 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 8 machine.openshift.io/cluster-api-machine-role: <infra> 9 machine.openshift.io/cluster-api-machine-type: <infra> 10 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 11 spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: machine.openshift.io/v1 cluster: type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 12 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 13 subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 14 userDataSecret: name: <user_data_secret> 15 vcpuSockets: 4 16 vcpusPerSocket: 1 17 taints: 18 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 6 8 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 9 10 Specify the <infra> node label. 4 7 11 Specify the infrastructure ID, <infra> node label, and zone. 5 Annotations for the cluster autoscaler. 12 Specify the image to use. Use an image from an existing default compute machine set for the cluster. 13 Specify the amount of memory for the cluster in Gi. 14 Specify the size of the system disk in Gi. 15 Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installer populates in the default compute machine set. 16 Specify the number of vCPU sockets. 17 Specify the number of vCPUs per socket. 18 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 8.2.1.8. 
Sample YAML for a compute machine set custom resource on RHOSP This sample YAML defines a compute machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone> 1 5 7 14 16 17 18 19 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 20 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID and <infra> node label. 11 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 12 To set a server group policy for the MachineSet, enter the value that is returned from creating a server group . For most deployments, anti-affinity or soft-anti-affinity policies are recommended. 13 Required for deployments to multiple networks. If deploying to multiple networks, this list must include the network that is used as the primarySubnet value. 15 Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file. 8.2.1.9. 
Sample YAML for a compute machine set custom resource on RHV This sample YAML defines a compute machine set that runs on RHV and creates nodes that are labeled with node-role.kubernetes.io/<node_role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 Selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: "" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 sparse: <boolean_value> 16 format: <raw_or_cow> 17 cpu: 18 sockets: <number_of_sockets> 19 cores: <number_of_cores> 20 threads: <number_of_threads> 21 memory_mb: <memory_size> 22 guaranteed_memory_mb: <memory_size> 23 os_disk: 24 size_gb: <disk_size> 25 storage_domain_id: <storage_domain_UUID> 26 network_interfaces: 27 vnic_profile_id: <vnic_profile_id> 28 credentialsSecret: name: ovirt-credentials 29 kind: OvirtMachineProviderSpec type: <workload_type> 30 auto_pinning_policy: <auto_pinning_policy> 31 hugepages: <hugepages> 32 affinityGroupsNames: - compute 33 userDataSecret: name: worker-user-data 1 7 9 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 10 11 13 Specify the node label to add. 4 8 12 Specify the infrastructure ID and node label. These two strings together cannot be longer than 35 characters. 5 Specify the number of machines to create. 6 Selector for the machines. 14 Specify the UUID for the RHV cluster to which this VM instance belongs. 15 Specify the RHV VM template to use to create the machine. 16 Setting this option to false enables preallocation of disks. The default is true . Setting sparse to true with format set to raw is not available for block storage domains. The raw format writes the entire virtual disk to the underlying physical disk. 17 Can be set to cow or raw . The default is cow . The cow format is optimized for virtual machines. Note Preallocating disks on file storage domains writes zeroes to the file. This might not actually preallocate disks depending on the underlying storage. 18 Optional: The CPU field contains the CPU configuration, including sockets, cores, and threads. 19 Optional: Specify the number of sockets for a VM. 20 Optional: Specify the number of cores per socket. 21 Optional: Specify the number of threads per core. 22 Optional: Specify the size of a VM's memory in MiB. 
23 Optional: Specify the size of a virtual machine's guaranteed memory in MiB. This is the amount of memory that is guaranteed not to be drained by the ballooning mechanism. For more information, see Memory Ballooning and Optimization Settings Explained . Note If you are using a version earlier than RHV 4.4.8, see Guaranteed memory requirements for OpenShift on Red Hat Virtualization clusters . 24 Optional: Root disk of the node. 25 Optional: Specify the size of the bootable disk in GiB. 26 Optional: Specify the UUID of the storage domain for the compute node's disks. If none is provided, the compute node is created on the same storage domain as the control nodes. (default) 27 Optional: List of the network interfaces of the VM. If you include this parameter, OpenShift Container Platform discards all network interfaces from the template and creates new ones. 28 Optional: Specify the vNIC profile ID. 29 Specify the name of the secret object that holds the RHV credentials. 30 Optional: Specify the workload type for which the instance is optimized. This value affects the RHV VM parameter. Supported values: desktop , server (default), high_performance . high_performance improves performance on the VM. Limitations exist, for example, you cannot access the VM with a graphical console. For more information, see Configuring High Performance Virtual Machines, Templates, and Pools in the Virtual Machine Management Guide . 31 Optional: AutoPinningPolicy defines the policy that automatically sets CPU and NUMA settings, including pinning to the host for this instance. Supported values: none , resize_and_pin . For more information, see Setting NUMA Nodes in the Virtual Machine Management Guide . 32 Optional: Hugepages is the size in KiB for defining hugepages in a VM. Supported values: 2048 or 1048576 . For more information, see Configuring Huge Pages in the Virtual Machine Management Guide . 33 Optional: A list of affinity group names to be applied to the VMs. The affinity groups must exist in oVirt. Note Because RHV uses a template when creating a VM, if you do not specify a value for an optional parameter, RHV uses the value for that parameter that is specified in the template. 8.2.1.10. Sample YAML for a compute machine set custom resource on vSphere This sample YAML defines a compute machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: "<vm_network_name>" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and <infra> node label. 6 7 9 Specify the <infra> node label. 10 Specify a taint to prevent user workloads from being scheduled on infra nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 11 Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other compute machines reside in the cluster. 12 Specify the vSphere VM template to use, such as user-5ddjd-rhcos . 13 Specify the vCenter Datacenter to deploy the compute machine set on. 14 Specify the vCenter Datastore to deploy the compute machine set on. 15 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 16 Specify the vSphere resource pool for your VMs. 17 Specify the vCenter server IP or fully qualified domain name. 8.2.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. 
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. 8.2.3. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Requirements of the cluster dictate that infrastructure, also called infra nodes, be provisioned. The installer only provides provisions for control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called app , nodes through labeling. Procedure Add a label to the worker node that you want to act as application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check to see if applicable nodes now have the infra role and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. 
This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra="" 1 # ... 1 This example node selector deploys pods on infrastructure nodes by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources Moving resources to infrastructure machine sets 8.2.4. Creating a machine config pool for infrastructure machines If you need infrastructure machines to have dedicated configurations, you must create an infra pool. Procedure Add a label to the node you want to assign as the infra node with a specific label: USD oc label node <node_name> <label> USD oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra= Create a machine config pool that contains both the worker role and your custom role as machine config selector: USD cat infra.mcp.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" 2 1 Add the worker role and your custom role. 2 Add the label you added to the node as a nodeSelector . Note Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool. 
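Optionally, before you create the pool, you can preview which machine configs the machineConfigSelector in this example matches. The following command is a minimal sketch that assumes the worker and infra role labels shown above and uses a set-based label selector:

USD oc get machineconfig -l 'machineconfiguration.openshift.io/role in (worker,infra)'

Machine configs that carry either role label are listed; these are the configs that the new infra pool inherits.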
After you have the YAML file, you can create the machine config pool: USD oc create -f infra.mcp.yaml Check the machine configs to ensure that the infrastructure configuration rendered successfully: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d You should see a new machine config, with the rendered-infra-* prefix. Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra . Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes. Note After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration. Create a machine config: USD cat infra.mc.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra 1 Add the label you added to the node as a nodeSelector . 
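The contents.source value in this example is a plain data URL ( data:,infra ), which is convenient for short ASCII content. As an illustration only, and not part of the procedure above, the same file can be written with its contents base64-encoded, which is the safer form when the content contains newlines or special characters:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 51-infra
  labels:
    machineconfiguration.openshift.io/role: infra
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/infratest
        mode: 0644
        contents:
          # base64 encoding of the literal string "infra"
          source: data:text/plain;charset=utf-8;base64,aW5mcmE=

Both forms produce the same /etc/infratest file on nodes in the infra pool.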
Apply the machine config to the infra-labeled nodes: USD oc create -f infra.mc.yaml Confirm that your new machine config pool is available: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m In this example, a worker node was changed to an infra node. Additional resources See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool. 8.3. Assigning machine set resources to infrastructure nodes After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role applied are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied. However, with an infra node being assigned as a worker, there is a chance user workloads could get inadvertently assigned to an infra node. To avoid this, you can apply a taint to the infra node and tolerations for the pods you want to control. 8.3.1. Binding infrastructure node workloads using taints and tolerations If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it. Important It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exists. The infra label communicates to the cluster that it does not count toward the total number of subscriptions. Prerequisites Configure additional MachineSet objects in your OpenShift Container Platform cluster. Procedure Add a taint to the infra node to prevent scheduling user workloads on it: Determine if the node has the taint: USD oc describe nodes <node_name> Sample output oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker ... Taints: node-role.kubernetes.io/infra:NoSchedule ... This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the step. If you have not configured a taint to prevent scheduling user workloads on it: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved ... This example places a taint on node1 that has key node-role.kubernetes.io/infra and taint effect NoSchedule . Nodes with the NoSchedule effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node. Note If a descheduler is used, pods violating node taints could be evicted from the cluster. 
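Before you add tolerations, you can confirm that the taint is in place by reading the node's spec directly. This is a quick verification sketch that uses the node1 example from the previous step:

USD oc get node node1 -o jsonpath='{.spec.taints}{"\n"}'

The output lists the node-role.kubernetes.io/infra taint with the reserved value and the NoExecute effect.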
Add tolerations for the pod configurations you want to schedule on the infra node, such as the router, registry, and monitoring workloads. Add the following code to the Pod object specification: tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Equal 3 value: reserved 4 1 Specify the effect that you added to the node. 2 Specify the key that you added to the node. 3 Specify the Equal operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node. 4 Specify the value of the key-value pair taint that you added to the node. This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node. Note Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details. Additional resources See Controlling pod placement using the scheduler for general information on scheduling a pod to a node. See Moving resources to infrastructure machine sets for instructions on scheduling pods to infra nodes. 8.4. Moving resources to infrastructure machine sets Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created by adding the infrastructure node selector, as shown: spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Applying a specific node selector to all infrastructure components causes OpenShift Container Platform to schedule those workloads on nodes with that label. 8.4.1. Moving the router You can deploy the router pod to a different compute machine set. By default, the pod is deployed to a worker node. Prerequisites Configure additional compute machine sets in your OpenShift Container Platform cluster.
Procedure View the IngressController custom resource for the router Operator: USD oc get ingresscontroller default -n openshift-ingress-operator -o yaml The command output resembles the following text: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: "11341" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: "True" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default Edit the ingresscontroller resource and change the nodeSelector to use the infra label: USD oc edit ingresscontroller default -n openshift-ingress-operator spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Confirm that the router pod is running on the infra node. View the list of router pods and note the node name of the running pod: USD oc get pod -n openshift-ingress -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none> In this example, the running pod is on the ip-10-0-217-226.ec2.internal node. View the node status of the running pod: USD oc get node <node_name> 1 1 Specify the <node_name> that you obtained from the pod list. Example output NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.25.0 Because the role list includes infra , the pod is running on the correct node. 8.4.2. Moving the default registry You configure the registry Operator to deploy its pods to different nodes. Prerequisites Configure additional compute machine sets in your OpenShift Container Platform cluster. Procedure View the config/instance object: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: "56174" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status: ... 
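Optionally, before editing, you can check whether a node selector is already set on the registry configuration. This sketch reads the field from the same config/cluster object:

USD oc get configs.imageregistry.operator.openshift.io/cluster -o jsonpath='{.spec.nodeSelector}{"\n"}'

Empty output means that no node selector is set and the registry pods follow the default scheduling behavior.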
Edit the config/instance object: USD oc edit configs.imageregistry.operator.openshift.io/cluster spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrasructure node, also add a matching toleration. Verify the registry pod has been moved to the infrastructure node. Run the following command to identify the node where the registry pod is located: USD oc get pods -o wide -n openshift-image-registry Confirm the node has the label you specified: USD oc describe node <node_name> Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list. 8.4.3. Moving the monitoring solution The monitoring stack includes multiple components, including Prometheus, Thanos Querier, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: 
reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Watch the monitoring pods move to the new machines: USD watch 'oc get pod -n openshift-monitoring -o wide' If a component has not moved to the infra node, delete the pod with this component: USD oc delete pod -n openshift-monitoring <pod> The component from the deleted pod is re-created on the infra node. Additional resources Moving monitoring components to different nodes Using node selectors to move logging resources Using taints and tolerations to control logging pod placement
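The same node selector and toleration pattern applies to any other infrastructure-related workload that you manage yourself and want to keep on infra nodes. The following Deployment fragment is an illustrative sketch only; the example-infra-workload name and image are hypothetical placeholders, and the node selector and tolerations match the infra taint used throughout this section:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-infra-workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-infra-workload
  template:
    metadata:
      labels:
        app: example-infra-workload
    spec:
      # Schedule only onto nodes that carry the infra role label.
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      # Tolerate the taint added with: oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoExecute
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Equal
        value: reserved
        effect: NoSchedule
      - key: node-role.kubernetes.io/infra
        operator: Equal
        value: reserved
        effect: NoExecute
      containers:
      - name: example
        image: registry.example.com/example:latest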
|
[
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<zone> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: alibabacloud-credentials imageId: <image_id> 11 instanceType: <instance_type> 12 kind: AlibabaCloudMachineProviderConfig ramRoleName: <infrastructure_id>-role-worker 13 regionId: <region> 14 resourceGroup: 15 id: <resource_group_id> type: ID securityGroups: - tags: 16 - Key: Name Value: <infrastructure_id>-sg-<role> type: Tags systemDisk: 17 category: cloud_essd size: <disk_size> tag: 18 - Key: kubernetes.io/cluster/<infrastructure_id> Value: owned userDataSecret: name: <user_data_secret> 19 vSwitch: tags: 20 - Key: Name Value: <infrastructure_id>-vswitch-<zone> type: Tags vpcId: \"\" zoneId: <zone> 21 taints: 22 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"spec: template: spec: providerSpec: value: securityGroups: - tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2 Value: ocp - Key: Name Value: <infrastructure_id>-sg-<role> 3 type: Tags",
"spec: template: spec: providerSpec: value: tag: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp",
"spec: template: spec: providerSpec: value: vSwitch: tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp - Key: Name Value: <infrastructure_id>-vswitch-<zone> 4 type: Tags",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: infra 6 machine.openshift.io/cluster-api-machine-type: infra 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" 9 providerSpec: value: ami: id: ami-046fe691f52a953f9 10 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 11 instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 12 region: <region> 13 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 14 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 15 tags: - name: kubernetes.io/cluster/<infrastructure_id> 16 value: owned - name: <custom_tag_name> 17 value: <custom_tag_value> 18 userDataSecret: name: worker-user-data taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: infra 2 machine.openshift.io/cluster-api-machine-type: infra name: <infrastructure_id>-infra-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: \"1\" 8 taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: \"1\" 22",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 7 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<zone> 4 namespace: openshift-machine-api annotations: 5 machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 6 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 7 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 8 machine.openshift.io/cluster-api-machine-role: <infra> 9 machine.openshift.io/cluster-api-machine-type: <infra> 10 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 11 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 cluster: type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 12 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 13 subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 14 userDataSecret: name: <user_data_secret> 15 vcpuSockets: 4 16 vcpusPerSocket: 1 17 taints: 18 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone>",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 Selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 sparse: <boolean_value> 16 format: <raw_or_cow> 17 cpu: 18 sockets: <number_of_sockets> 19 cores: <number_of_cores> 20 threads: <number_of_threads> 21 memory_mb: <memory_size> 22 guaranteed_memory_mb: <memory_size> 23 os_disk: 24 size_gb: <disk_size> 25 storage_domain_id: <storage_domain_UUID> 26 network_interfaces: 27 vnic_profile_id: <vnic_profile_id> 28 credentialsSecret: name: ovirt-credentials 29 kind: OvirtMachineProviderSpec type: <workload_type> 30 auto_pinning_policy: <auto_pinning_policy> 31 hugepages: <hugepages> 32 affinityGroupsNames: - compute 33 userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc label node <node-name> node-role.kubernetes.io/app=\"\"",
"oc label node <node-name> node-role.kubernetes.io/infra=\"\"",
"oc get nodes",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1",
"oc label node <node_name> <label>",
"oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=",
"cat infra.mcp.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2",
"oc create -f infra.mcp.yaml",
"oc get machineconfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d",
"cat infra.mc.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra",
"oc create -f infra.mc.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m",
"oc describe nodes <node_name>",
"describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved",
"tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Equal 3 value: reserved 4",
"spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get ingresscontroller default -n openshift-ingress-operator -o yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default",
"oc edit ingresscontroller default -n openshift-ingress-operator",
"spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pod -n openshift-ingress -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>",
"oc get node <node_name> 1",
"NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.25.0",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pods -o wide -n openshift-image-registry",
"oc describe node <node_name>",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute",
"watch 'oc get pod -n openshift-monitoring -o wide'",
"oc delete pod -n openshift-monitoring <pod>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/machine_management/creating-infrastructure-machinesets
|
7.180. perl-Sys-Virt
|
7.180. perl-Sys-Virt 7.180.1. RHBA-2013:0377 - perl-Sys-Virt bug fix and enhancement update Updated perl-Sys-Virt packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The perl-Sys-Virt packages provide application programming interfaces (APIs) to manage virtual machines from Perl with the libvirt library. Note The perl-Sys-Virt package has been upgraded to upstream version 0.10.2, which provides a number of enhancements over the previous version. (BZ#836955) Bug Fixes BZ# 848309 Previously, the Perl binding was setting an incompatible flag for the set_blkio_parameters() function. Consequently, it was impossible to use this function to apply block tuning. The incorrect flag has been removed and set_blkio_parameters() can now be used as expected. BZ# 861581 Prior to this update, an incorrect string length was used when setting hash keys, and thus names of certain hash keys were truncated. The correct string lengths were provided for hash keys and the hash keys for the get_node_memory_stats() function now match their documentation. BZ# 865310 When setting memory parameters, the set_node_memory_parameters() function was trying to also update some read-only values. Consequently, set_node_memory_parameters() always returned an error message. To fix this bug, the method has been changed to only set parameters, and set_node_memory_parameters() now works as expected. BZ# 869130 Previously, the API documentation contained formatting errors. This update corrects the API documentation, which is now formatted correctly. BZ# 873203 Due to missing default values for parameters in the pm_suspend_for_duration() and pm_wakeup() functions, callers of the API had to supply the parameters even though they were supposed to be optional. With this update, the default values have been added to these functions, which now succeed when called. BZ# 882829 Prior to this update, mistakes were present in the Plain Old Documentation (POD) for the list_all_volumes() parameters, which could mislead users. The documentation has been updated and now describes the API usage for list_all_volumes() correctly. BZ# 883775 Previously, an incorrect class name was used with the list_all_nwfilters() function. Consequently, the objects returned from list_all_nwfilters() could not be used. Now, the object name has been fixed and the list_all_nwfilters() function works as expected. BZ# 886028 When checking the return value of the screenshot() and current_snapshot() functions, a wrong data type was assumed. Consequently, certain errors were not handled properly and applications could eventually terminate unexpectedly. With this update, API errors are correctly handled in screenshot() and current_snapshot(), and the applications no longer crash. Users of perl-Sys-Virt are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/perl-sys-virt
|
Chapter 1. Managing the application set resources in non-control plane namespaces
|
Chapter 1. Managing the application set resources in non-control plane namespaces Important Argo CD application sets in non-control plane namespaces is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . By using application sets, you can automate and manage the deployments of multiple Argo CD applications declaratively from a single mono-repository to many clusters at once with greater flexibility. With Red Hat OpenShift GitOps 1.12 and later, as a cluster administrator, you can create and manage the ApplicationSet resources in non-control plane namespaces declaratively, other than the openshift-gitops control plane namespace, by explicitly enabling and configuring the ArgoCD and ApplicationSet custom resources (CRs) as per your requirements. This functionality is particularly useful in multitenancy environments when you want to manage deployments of Argo CD applications for your isolated teams. This functionality is called the ApplicationSet in any namespace feature in the Argo CD open source project. Note The generated Argo CD applications can create resources in any non-control plane namespace. However, the application itself will be in the same namespace as the application set resources. 1.1. Prerequisites You have a user-defined cluster-scoped Argo CD instance in your defined namespace. For example, spring-petclinic namespace. You have explicitly enabled and configured the target namespaces in the ArgoCD CR to manage application resources in non-control plane namespaces. 1.2. Enabling the application set resources in non-control plane namespaces As a cluster administrator, you can define a certain set of non-control plane namespaces wherein users can create, update, and reconcile ApplicationSet resources. You must explicitly enable and configure the ArgoCD and ApplicationSet custom resources (CRs) as per your requirements. Procedure Set the sourceNamespaces parameter for the applicationSet spec to include the non-control plane namespaces: Example Argo CD custom resource apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example namespace: spring-petclinic spec: applicationSet: sourceNamespaces: 1 - dev 2 1 List of non-control plane namespaces for creating and managing ApplicationSet resources. 2 Name of the target namespace for the Argo CD server to create and manage ApplicationSet resources. Note At the moment, the use of wildcards ( * ) is not supported in the .spec.applicationSet.sourceNamespaces field. 
Verify that the following role-based access control (RBAC) resources are either created or modified by the GitOps Operator: Name Kind Purpose <argocd_name>-<argocd_namespace>-argocd-applicationset-controller ClusterRole and ClusterRoleBinding For the Argo CD ApplicationSet Controller to watch and list ApplicationSet resources at cluster-level <argocd_name>-<argocd_namespace>-applicationset Role and RoleBinding For the Argo CD ApplicationSet Controller to manage ApplicationSet resources in target namespace <argocd_name>-<target_namespace> Role and RoleBinding For the Argo CD server to manage ApplicationSet resources in target namespace through UI, API, or CLI Note The Operator adds the argocd.argoproj.io/applicationset-managed-by-cluster-argocd label to the target namespace. 1.3. Allowing Source Code Manager Providers Important Please read this section carefully. Misconfiguration could lead to potential security issues. Allowing ApplicationSet resources in non-control plane namespaces can result in the exfiltration of secrets through malicious API endpoints in Source Code Manager (SCM) Provider or Pull Request (PR) generators. To prevent unauthorized access to sensitive information, the Operator disables the SCM Provider and PR generators by default as a precautionary measure. Procedure To use the SCM Provider and PR generators, explicitly define a list of allowed SCM Providers: Example Argo CD custom resource apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd spec: applicationSet: sourceNamespaces: - dev scmProviders: 1 - https://git.mydomain.com/ - https://gitlab.mydomain.com/ 1 The list of URLs of the allowed SCM Providers. Note If you use a URL that is not in the list of allowed SCM Providers, the Argo CD ApplicationSet Controller will reject it. 1.4. Additional resources The ApplicationSet resource ApplicationSet in any namespace Argo CD custom resource and component properties SCM Provider Generator Pull Request Generator
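To tie the sections above together, the following is a minimal sketch of an ApplicationSet resource that a team could create in the dev namespace once that namespace is listed in the .spec.applicationSet.sourceNamespaces field. The resource name, Git repository URL, path, and project shown here are illustrative placeholders rather than values defined in this document; as noted above, the generated Application is created in the same dev namespace as the ApplicationSet resource.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: dev-apps                 # hypothetical name
  namespace: dev                 # must be listed in spec.applicationSet.sourceNamespaces
spec:
  generators:
    - list:
        elements:
          - env: dev             # single illustrative element
  template:
    metadata:
      name: 'petclinic-{{env}}'  # name of the generated Application
    spec:
      project: default           # assumes a project that permits applications from the dev namespace
      source:
        repoURL: https://git.mydomain.com/team/petclinic.git   # placeholder repository
        targetRevision: main
        path: k8s
      destination:
        server: https://kubernetes.default.svc
        namespace: dev
      syncPolicy:
        automated: {}
Because this example uses the list generator, it does not depend on the SCM Provider or Pull Request generators, which remain disabled unless explicitly allowed as described above.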
|
[
"apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example namespace: spring-petclinic spec: applicationSet: sourceNamespaces: 1 - dev 2",
"apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd spec: applicationSet: sourceNamespaces: - dev scmProviders: 1 - https://git.mydomain.com/ - https://gitlab.mydomain.com/"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/argo_cd_application_sets/managing-app-sets-in-non-control-plane-namespaces
|
Chapter 1. Overcloud Parameters
|
Chapter 1. Overcloud Parameters You can modify overcloud features with overcloud parameters. To set a parameter, include the chosen parameter and its value in an environment file under the parameter_defaults section and include the environment file with your openstack overcloud deploy command.
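For example, a minimal environment file that sets two common parameters might look like the following sketch; the file name and the chosen parameters ( ControllerCount , NtpServer ) are examples only, and any parameter listed in this guide can be set the same way under parameter_defaults.
# custom-environment.yaml (example file name)
parameter_defaults:
  ControllerCount: 3
  NtpServer: clock.example.com

# Include the environment file when deploying, for example:
#   openstack overcloud deploy --templates -e custom-environment.yaml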
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/overcloud_parameters/assembly-overcloud_parameters
|
Chapter 2. MTC release notes
|
Chapter 2. MTC release notes 2.1. Migration Toolkit for Containers 1.8 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.1.1. Migration Toolkit for Containers 1.8.5 release notes 2.1.1.1. Technical changes Migration Toolkit for Containers (MTC) 1.8.5 has the following technical changes: Federal Information Processing Standard (FIPS) FIPS is a set of computer security standards developed by the United States federal government in accordance with the Federal Information Security Management Act (FISMA). Starting with version 1.8.5, MTC is designed for FIPS compliance. 2.1.1.2. Resolved issues For more information, see the list of MTC 1.8.5 resolved issues in Jira. 2.1.1.3. Known issues MTC 1.8.5 has the following known issues: The associated SCC for service account cannot be migrated in OpenShift Container Platform 4.12 The associated Security Context Constraints (SCCs) for service accounts in OpenShift Container Platform 4.12 cannot be migrated. This issue is planned to be resolved in a future release of MTC. (MIG-1454) MTC does not patch statefulset.spec.volumeClaimTemplates[].spec.storageClassName on storage class conversion While running a Storage Class conversion for a StatefulSet application, MTC updates the persistent volume claims (PVC) references in .spec.volumeClaimTemplates[].metadata.name to use the migrated PVC names. MTC does not update spec.volumeClaimTemplates[].spec.storageClassName , which causes the application to scale up. Additionally, new replicas consume PVCs created under the old Storage Class instead of the migrated Storage Class. (MIG-1660) Performing a StorageClass conversion triggers the scale-down of all applications in the namespace When running a StorageClass conversion on more than one application, MTC scales down all the applications in the cutover phase, including those not involved in the migration. (MIG-1661) MigPlan cannot be edited to have the same target namespace as the source cluster after it is changed After changing the target namespace to something different from the source namespace while creating a MigPlan in the MTC UI, you cannot edit the MigPlan again to make the target namespace the same as the source namespace. (MIG-1600) Migrated builder pod fails to push to the image registry When migrating an application that includes BuildConfig from the source to the target cluster, the builder pod encounters an error, failing to push the image to the image registry. (BZ#2234781) Conflict condition clears briefly after it is displayed When creating a new state migration plan that results in a conflict error, the error is cleared shortly after it is displayed. (BZ#2144299) PvCapacityAdjustmentRequired warning not displayed after setting pv_resizing_threshold The PvCapacityAdjustmentRequired warning does not appear in the migration plan after the pv_resizing_threshold is adjusted. (BZ#2270160) For a complete list of all known issues, see the list of MTC 1.8.5 known issues in Jira. 2.1.2. 
Migration Toolkit for Containers 1.8.4 release notes 2.1.2.1. Technical changes Migration Toolkit for Containers (MTC) 1.8.4 has the following technical changes: MTC 1.8.4 extends its dependency resolution to include support for using OpenShift API for Data Protection (OADP) 1.4. Support for KubeVirt Virtual Machines with DirectVolumeMigration MTC 1.8.4 adds support for KubeVirt Virtual Machines (VMs) with Direct Volume Migration (DVM). 2.1.2.2. Resolved issues MTC 1.8.4 has the following major resolved issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception, ValueError: too many values to unpack , returned during the task. Earlier versions of MTC are impacted, while MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) UI stuck at Namespaces while creating a migration plan When trying to create a migration plan from the MTC UI, the migration plan wizard becomes stuck at the Namespaces step. This issue has been resolved in MTC 1.8.4. (MIG-1597) Migration fails with error of no matches for kind Virtual machine in version kubevirt/v1 During the migration of an application, all the necessary steps, including the backup, DVM, and restore, are successfully completed. However, the migration is marked as unsuccessful with the error message no matches for kind Virtual machine in version kubevirt/v1 . (MIG-1594) Direct Volume Migration fails when migrating to a namespace different from the source namespace On performing a migration from source cluster to target cluster, with the target namespace different from the source namespace, the DVM fails. (MIG-1592) Direct Image Migration does not respect label selector on migplan When using Direct Image Migration (DIM), if a label selector is set on the migration plan, DIM does not respect it and attempts to migrate all imagestreams in the namespace. (MIG-1533) 2.1.2.3. Known issues MTC 1.8.4 has the following known issues: The associated SCC for service account cannot be migrated in OpenShift Container Platform 4.12 The associated Security Context Constraints (SCCs) for service accounts in OpenShift Container Platform 4.12 cannot be migrated. This issue is planned to be resolved in a future release of MTC. (MIG-1454) . Rsync pod fails to start causing the DVM phase to fail The DVM phase fails due to the Rsync pod failing to start, because of a permission issue. (BZ#2231403) Migrated builder pod fails to push to image registry When migrating an application including BuildConfig from source to target cluster, the builder pod results in error, failing to push the image to the image registry. (BZ#2234781) Conflict condition gets cleared briefly after it is created When creating a new state migration plan that results in a conflict error, that error is cleared shorty after it is displayed. (BZ#2144299) PvCapacityAdjustmentRequired Warning Not Displayed After Setting pv_resizing_threshold The PvCapacityAdjustmentRequired warning fails to appear in the migration plan after the pv_resizing_threshold is adjusted. (BZ#2270160) 2.1.3. Migration Toolkit for Containers 1.8.3 release notes 2.1.3.1. Technical changes Migration Toolkit for Containers (MTC) 1.8.3 has the following technical changes: OADP 1.3 is now supported MTC 1.8.3 adds support to OpenShift API for Data Protection (OADP) as a dependency of MTC 1.8.z. 2.1.3.2. 
Resolved issues MTC 1.8.3 has the following major resolved issues: CVE-2024-24786: Flaw in Golang protobuf module causes unmarshal function to enter infinite loop In releases of MTC, a vulnerability was found in Golang's protobuf module, where the unmarshal function entered an infinite loop while processing certain invalid inputs. Consequently, an attacker provided carefully constructed invalid inputs, which caused the function to enter an infinite loop. With this update, the unmarshal function works as expected. For more information, see CVE-2024-24786 . CVE-2023-45857: Axios Cross-Site Request Forgery Vulnerability In releases of MTC, a vulnerability was discovered in Axios 1.5.1 that inadvertently revealed a confidential XSRF-TOKEN stored in cookies by including it in the HTTP header X-XSRF-TOKEN for every request made to the host, allowing attackers to view sensitive information. For more information, see CVE-2023-45857 . Restic backup does not work properly when the source workload is not quiesced In releases of MTC, some files did not migrate when deploying an application with a route. The Restic backup did not function as expected when the quiesce option was unchecked for the source workload. This issue has been resolved in MTC 1.8.3. For more information, see BZ#2242064 . The Migration Controller fails to install due to an unsupported value error in Velero The MigrationController failed to install due to an unsupported value error in Velero. Updating OADP 1.3.0 to OADP 1.3.1 resolves this problem. For more information, see BZ#2267018 . This issue has been resolved in MTC 1.8.3. For a complete list of all resolved issues, see the list of MTC 1.8.3 resolved issues in Jira. 2.1.3.3. Known issues Migration Toolkit for Containers (MTC) 1.8.3 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception, ValueError: too many values to unpack , returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) The associated SCC for service account cannot be migrated in OpenShift Container Platform 4.12 The associated Security Context Constraints (SCCs) for service accounts in OpenShift Container Platform version 4.12 cannot be migrated. This issue is planned to be resolved in a future release of MTC. (MIG-1454) . For a complete list of all known issues, see the list of MTC 1.8.3 known issues in Jira. 2.1.4. Migration Toolkit for Containers 1.8.2 release notes 2.1.4.1. Resolved issues This release has the following major resolved issues: Backup phase fails after setting custom CA replication repository In releases of Migration Toolkit for Containers (MTC), after editing the replication repository, adding a custom CA certificate, successfully connecting the repository, and triggering a migration, a failure occurred during the backup phase. CVE-2023-26136: tough-cookie package before 4.1.3 are vulnerable to Prototype Pollution In releases of (MTC), versions before 4.1.3 of the tough-cookie package used in MTC were vulnerable to prototype pollution. This vulnerability occurred because CookieJar did not handle cookies properly when the value of the rejectPublicSuffixes was set to false . 
For more details, see (CVE-2023-26136) CVE-2022-25883 openshift-migration-ui-container: nodejs-semver: Regular expression denial of service In releases of (MTC), versions of the semver package before 7.5.2, used in MTC, were vulnerable to Regular Expression Denial of Service (ReDoS) from the function newRange , when untrusted user data was provided as a range. For more details, see (CVE-2022-25883) 2.1.4.2. Known issues MTC 1.8.2 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception, ValueError: too many values to unpack , returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) 2.1.5. Migration Toolkit for Containers 1.8.1 release notes 2.1.5.1. Resolved issues Migration Toolkit for Containers (MTC) 1.8.1 has the following major resolved issues: CVE-2023-39325: golang: net/http, x/net/http2: rapid stream resets can cause excessive work A flaw was found in handling multiplexed streams in the HTTP/2 protocol, which is used by MTC. A client could repeatedly make a request for a new multiplex stream and immediately send an RST_STREAM frame to cancel it. This creates additional workload for the server in terms of setting up and dismantling streams, while avoiding any server-side limitations on the maximum number of active streams per connection, resulting in a denial of service due to server resource consumption. (BZ#2245079) It is advised to update to MTC 1.8.1 or later, which resolve this issue. For more details, see (CVE-2023-39325) and (CVE-2023-44487) 2.1.5.2. Known issues Migration Toolkit for Containers (MTC) 1.8.1 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes. An exception, ValueError: too many values to unpack , is returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) 2.1.6. Migration Toolkit for Containers 1.8.0 release notes 2.1.6.1. Resolved issues Migration Toolkit for Containers (MTC) 1.8.0 has the following resolved issues: Indirect migration is stuck on backup stage In releases, an indirect migration became stuck at the backup stage, due to InvalidImageName error. ( (BZ#2233097) ) PodVolumeRestore remain In Progress keeping the migration stuck at Stage Restore In releases, on performing an indirect migration, the migration became stuck at the Stage Restore step, waiting for the podvolumerestore to be completed. ( (BZ#2233868) ) Migrated application unable to pull image from internal registry on target cluster In releases, on migrating an application to the target cluster, the migrated application failed to pull the image from the internal image registry resulting in an application failure . ( (BZ#2233103) ) Migration failing on Azure due to authorization issue In releases, on an Azure cluster, when backing up to Azure storage, the migration failed at the Backup stage. ( (BZ#2238974) ) 2.1.6.2. 
Known issues MTC 1.8.0 has the following known issues: Ansible Operator is broken when OpenShift Virtualization is installed There is a bug in the python3-openshift package that installing OpenShift Virtualization exposes, with an exception ValueError: too many values to unpack returned during the task. MTC 1.8.4 has implemented a workaround. Updating to MTC 1.8.4 means you are no longer affected by this issue. (OCPBUGS-38116) Old Restic pods are not getting removed on upgrading MTC 1.7.x 1.8.x In this release, on upgrading the MTC Operator from 1.7.x to 1.8.x, the old Restic pods are not being removed. Therefore after the upgrade, both Restic and node-agent pods are visible in the namespace. ( (BZ#2236829) ) Migrated builder pod fails to push to image registry In this release, on migrating an application including a BuildConfig from a source to target cluster, builder pod results in error , failing to push the image to the image registry. ( (BZ#2234781) ) [UI] CA bundle file field is not properly cleared In this release, after enabling Require SSL verification and adding content to the CA bundle file for an MCG NooBaa bucket in MigStorage, the connection fails as expected. However, when reverting these changes by removing the CA bundle content and clearing Require SSL verification , the connection still fails. The issue is only resolved by deleting and re-adding the repository. ( (BZ#2240052) ) Backup phase fails after setting custom CA replication repository In (MTC), after editing the replication repository, adding a custom CA certificate, successfully connecting the repository, and triggering a migration, a failure occurs during the backup phase. This issue is resolved in MTC 1.8.2. CVE-2023-26136: tough-cookie package before 4.1.3 are vulnerable to Prototype Pollution Versions before 4.1.3 of the tough-cookie package, used in MTC, are vulnerable to prototype pollution. This vulnerability occurs because CookieJar does not handle cookies properly when the value of the rejectPublicSuffixes is set to false . This issue is resolved in MTC 1.8.2. For more details, see (CVE-2023-26136) CVE-2022-25883 openshift-migration-ui-container: nodejs-semver: Regular expression denial of service In releases of (MTC), versions of the semver package before 7.5.2, used in MTC, are vulnerable to Regular Expression Denial of Service (ReDoS) from the function newRange , when untrusted user data is provided as a range. This issue is resolved in MTC 1.8.2. For more details, see (CVE-2022-25883) 2.1.6.3. Technical changes This release has the following technical changes: Migration from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy Migration Toolkit for Containers Operator and Migration Toolkit for Containers 1.7.x. Migration from MTC 1.7.x to MTC 1.8.x is not supported. You must use MTC 1.7.x to migrate anything with a source of OpenShift Container Platform 4.9 or earlier. MTC 1.7.x must be used on both source and destination. Migration Toolkit for Containers (MTC) 1.8.x only supports migrations from OpenShift Container Platform 4.10 or later to OpenShift Container Platform 4.10 or later. For migrations only involving cluster versions 4.10 and later, either 1.7.x or 1.8.x might be used. However, but it must be the same MTC 1.Y.z on both source and destination. Migration from source MTC 1.7.x to destination MTC 1.8.x is unsupported. Migration from source MTC 1.8.x to destination MTC 1.7.x is unsupported. Migration from source MTC 1.7.x to destination MTC 1.7.x is supported. 
Migration from source MTC 1.8.x to destination MTC 1.8.x is supported. MTC 1.8.x by default installs OADP 1.2.x. Upgrading from MTC 1.7.x to MTC 1.8.0, requires manually changing the OADP channel to 1.2. If this is not done, the upgrade of the Operator fails. 2.2. Migration Toolkit for Containers 1.7 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. You can migrate from OpenShift Container Platform 3 to 4.14 and between OpenShift Container Platform 4 clusters. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.2.1. Migration Toolkit for Containers 1.7.18 release notes Migration Toolkit for Containers (MTC) 1.7.18 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of MTC 1.7.17. 2.2.1.1. Technical changes Migration Toolkit for Containers (MTC) 1.7.18 has the following technical changes: Federal Information Processing Standard (FIPS) FIPS is a set of computer security standards developed by the United States federal government in accordance with the Federal Information Security Management Act (FISMA). Starting with version 1.7.18, MTC is designed for FIPS compliance. 2.2.2. Migration Toolkit for Containers 1.7.17 release notes Migration Toolkit for Containers (MTC) 1.7.17 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of MTC 1.7.16. 2.2.3. Migration Toolkit for Containers 1.7.16 release notes 2.2.3.1. Resolved issues This release has the following resolved issues: CVE-2023-45290: Golang: net/http : Memory exhaustion in the Request.ParseMultipartForm method A flaw was found in the net/http Golang standard library package, which impacts earlier versions of MTC. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue , Request.PostFormValue , or Request.FormFile methods, limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2023-45290 CVE-2024-24783: Golang: crypto/x509 : Verify panics on certificates with an unknown public key algorithm A flaw was found in the crypto/x509 Golang standard library package, which impacts earlier versions of MTC. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert . The default behavior is for TLS servers to not verify client certificates. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-24783 . 
CVE-2024-24784: Golang: net/mail : Comments in display names are incorrectly handled A flaw was found in the net/mail Golang standard library package, which impacts earlier versions of MTC. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. As this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-24784 . CVE-2024-24785: Golang: html/template : Errors returned from MarshalJSON methods may break template escaping A flaw was found in the html/template Golang standard library package, which impacts earlier versions of MTC. If errors returned from MarshalJSON methods contain user-controlled data, they could be used to break the contextual auto-escaping behavior of the html/template package, allowing subsequent actions to inject unexpected content into templates. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-24785 . CVE-2024-29180: webpack-dev-middleware : Lack of URL validation may lead to file leak A flaw was found in the webpack-dev-middleware package , which impacts earlier versions of MTC. This flaw fails to validate the supplied URL address sufficiently before returning local files, which could allow an attacker to craft URLs to return arbitrary local files from the developer's machine. To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-29180 . CVE-2024-30255: envoy : HTTP/2 CPU exhaustion due to CONTINUATION frame flood A flaw was found in how the envoy proxy implements the HTTP/2 codec, which impacts earlier versions of MTC. There are insufficient limitations placed on the number of CONTINUATION frames that can be sent within a single stream, even after exceeding the header map limits of envoy . This flaw could allow an unauthenticated remote attacker to send packets to vulnerable servers. These packets could consume compute resources and cause a denial of service (DoS). To resolve this issue, upgrade to MTC 1.7.16. For more details, see CVE-2024-30255 . 2.2.3.2. Known issues This release has the following known issues: Direct Volume Migration is failing as the Rsync pod on the source cluster goes into an Error state On migrating any application with a Persistent Volume Claim (PVC), the Stage migration operation succeeds with warnings, but the Direct Volume Migration (DVM) fails with the rsync pod on the source namespace moving into an error state. (BZ#2256141) The conflict condition is briefly cleared after it is created When creating a new state migration plan that returns a conflict error message, the error message is cleared very shortly after it is displayed. (BZ#2144299) Migration fails when there are multiple Volume Snapshot Locations of different provider types configured in a cluster When there are multiple Volume Snapshot Locations (VSLs) in a cluster with different provider types, but you have not set any of them as the default VSL, Velero results in a validation error that causes migration operations to fail. (BZ#2180565) 2.2.4. Migration Toolkit for Containers 1.7.15 release notes 2.2.4.1. 
Resolved issues This release has the following resolved issues: CVE-2024-24786: A flaw was found in Golang's protobuf module, where the unmarshal function can enter an infinite loop A flaw was found in the protojson.Unmarshal function that could cause the function to enter an infinite loop when unmarshaling certain forms of invalid JSON messages. This condition could occur when unmarshaling into a message that contained a google.protobuf.Any value or when the UnmarshalOptions.DiscardUnknown option was set in a JSON-formatted message. To resolve this issue, upgrade to MTC 1.7.15. For more details, see (CVE-2024-24786) . CVE-2024-28180: jose-go improper handling of highly compressed data A vulnerability was found in Jose due to improper handling of highly compressed data. An attacker could send a JSON Web Encryption (JWE) encrypted message that contained compressed data that used large amounts of memory and CPU when decompressed by the Decrypt or DecryptMulti functions. To resolve this issue, upgrade to MTC 1.7.15. For more details, see (CVE-2024-28180) . 2.2.4.2. Known issues This release has the following known issues: Direct Volume Migration is failing as the Rsync pod on the source cluster goes into an Error state On migrating any application with Persistent Volume Claim (PVC), the Stage migration operation succeeds with warnings, and Direct Volume Migration (DVM) fails with the rsync pod on the source namespace going into an error state. (BZ#2256141) The conflict condition is briefly cleared after it is created When creating a new state migration plan that results in a conflict error message, the error message is cleared shortly after it is displayed. (BZ#2144299) Migration fails when there are multiple Volume Snapshot Locations (VSLs) of different provider types configured in a cluster with no specified default VSL. When there are multiple VSLs in a cluster with different provider types, and you set none of them as the default VSL, Velero results in a validation error that causes migration operations to fail. (BZ#2180565) 2.2.5. Migration Toolkit for Containers 1.7.14 release notes 2.2.5.1. Resolved issues This release has the following resolved issues: CVE-2023-39325 CVE-2023-44487: various flaws A flaw was found in the handling of multiplexed streams in the HTTP/2 protocol, which is utilized by Migration Toolkit for Containers (MTC). A client could repeatedly make a request for a new multiplex stream then immediately send an RST_STREAM frame to cancel those requests. This activity created additional workloads for the server in terms of setting up and dismantling streams, but avoided any server-side limitations on the maximum number of active streams per connection. As a result, a denial of service occurred due to server resource consumption. (BZ#2243564) (BZ#2244013) (BZ#2244014) (BZ#2244015) (BZ#2244016) (BZ#2244017) To resolve this issue, upgrade to MTC 1.7.14. For more details, see (CVE-2023-44487) and (CVE-2023-39325) . CVE-2023-39318 CVE-2023-39319 CVE-2023-39321: various flaws (CVE-2023-39318) : A flaw was discovered in Golang, utilized by MTC. The html/template package did not properly handle HTML-like "" comment tokens, or the hashbang "#!" comment tokens, in <script> contexts. This flaw could cause the template parser to improperly interpret the contents of <script> contexts, causing actions to be improperly escaped. (BZ#2238062) (BZ#2238088) (CVE-2023-39319) : A flaw was discovered in Golang, utilized by MTC. 
The html/template package did not apply the proper rules for handling occurrences of "<script" , "<!--" , and "</script" within JavaScript literals in <script> contexts. This could cause the template parser to improperly consider script contexts to be terminated early, causing actions to be improperly escaped. (BZ#2238062) (BZ#2238088) (CVE-2023-39321) : A flaw was discovered in Golang, utilized by MTC. Processing an incomplete post-handshake message for a QUIC connection could cause a panic. (BZ#2238062) (BZ#2238088) (CVE-2023-3932) : A flaw was discovered in Golang, utilized by MTC. Connections using the QUIC transport protocol did not set an upper bound on the amount of data buffered when reading post-handshake messages, allowing a malicious QUIC connection to cause unbounded memory growth. (BZ#2238088) To resolve these issues, upgrade to MTC 1.7.14. For more details, see (CVE-2023-39318) , (CVE-2023-39319) , and (CVE-2023-39321) . 2.2.5.2. Known issues There are no major known issues in this release. 2.2.6. Migration Toolkit for Containers 1.7.13 release notes 2.2.6.1. Resolved issues There are no major resolved issues in this release. 2.2.6.2. Known issues There are no major known issues in this release. 2.2.7. Migration Toolkit for Containers 1.7.12 release notes 2.2.7.1. Resolved issues There are no major resolved issues in this release. 2.2.7.2. Known issues This release has the following known issues: Error code 504 is displayed on the Migration details page On the Migration details page, at first, the migration details are displayed without any issues. However, after sometime, the details disappear, and a 504 error is returned. ( BZ#2231106 ) Old restic pods are not removed when upgrading Migration Toolkit for Containers 1.7.x to Migration Toolkit for Containers 1.8 On upgrading the Migration Toolkit for Containers (MTC) operator from 1.7.x to 1.8.x, the old restic pods are not removed. After the upgrade, both restic and node-agent pods are visible in the namespace. ( BZ#2236829 ) 2.2.8. Migration Toolkit for Containers 1.7.11 release notes 2.2.8.1. Resolved issues There are no major resolved issues in this release. 2.2.8.2. Known issues There are no known issues in this release. 2.2.9. Migration Toolkit for Containers 1.7.10 release notes 2.2.9.1. Resolved issues This release has the following major resolved issue: Adjust rsync options in DVM In this release, you can prevent absolute symlinks from being manipulated by Rsync in the course of direct volume migration (DVM). Running DVM in privileged mode preserves absolute symlinks inside the persistent volume claims (PVCs). To switch to privileged mode, in the MigrationController CR, set the migration_rsync_privileged spec to true . ( BZ#2204461 ) 2.2.9.2. Known issues There are no known issues in this release. 2.2.10. Migration Toolkit for Containers 1.7.9 release notes 2.2.10.1. Resolved issues There are no major resolved issues in this release. 2.2.10.2. Known issues This release has the following known issue: Adjust rsync options in DVM In this release, users are unable to prevent absolute symlinks from being manipulated by rsync during direct volume migration (DVM). ( BZ#2204461 ) 2.2.11. Migration Toolkit for Containers 1.7.8 release notes 2.2.11.1. 
Resolved issues This release has the following major resolved issues: Velero image cannot be overridden in the Migration Toolkit for Containers (MTC) operator In releases, it was not possible to override the velero image using the velero_image_fqin parameter in the MigrationController Custom Resource (CR). ( BZ#2143389 ) Adding a MigCluster from the UI fails when the domain name has more than six characters In releases, adding a MigCluster from the UI failed when the domain name had more than six characters. The UI code expected a domain name of between two and six characters. ( BZ#2152149 ) UI fails to render the Migrations' page: Cannot read properties of undefined (reading 'name') In releases, the UI failed to render the Migrations' page, returning Cannot read properties of undefined (reading 'name') . ( BZ#2163485 ) Creating DPA resource fails on Red Hat OpenShift Container Platform 4.6 clusters In releases, when deploying MTC on an OpenShift Container Platform 4.6 cluster, the DPA failed to be created according to the logs, which resulted in some pods missing. From the logs in the migration-controller in the OpenShift Container Platform 4.6 cluster, it indicated that an unexpected null value was passed, which caused the error. ( BZ#2173742 ) 2.2.11.2. Known issues There are no known issues in this release. 2.2.12. Migration Toolkit for Containers 1.7.7 release notes 2.2.12.1. Resolved issues There are no major resolved issues in this release. 2.2.12.2. Known issues There are no known issues in this release. 2.2.13. Migration Toolkit for Containers 1.7.6 release notes 2.2.13.1. New features Implement proposed changes for DVM support with PSA in Red Hat OpenShift Container Platform 4.12 With the incoming enforcement of Pod Security Admission (PSA) in OpenShift Container Platform 4.12 the default pod would run with a restricted profile. This restricted profile would mean workloads to migrate would be in violation of this policy and no longer work as of now. The following enhancement outlines the changes that would be required to remain compatible with OCP 4.12. ( MIG-1240 ) 2.2.13.2. Resolved issues This release has the following major resolved issues: Unable to create Storage Class Conversion plan due to missing cronjob error in Red Hat OpenShift Platform 4.12 In releases, on the persistent volumes page, an error is thrown that a CronJob is not available in version batch/v1beta1 , and when clicking on cancel, the migplan is created with status Not ready . ( BZ#2143628 ) 2.2.13.3. Known issues This release has the following known issue: Conflict conditions are cleared briefly after they are created When creating a new state migration plan that will result in a conflict error, that error is cleared shorty after it is displayed. ( BZ#2144299 ) 2.2.14. Migration Toolkit for Containers 1.7.5 release notes 2.2.14.1. Resolved issues This release has the following major resolved issue: Direct Volume Migration is failing as rsync pod on source cluster move into Error state In release, migration succeeded with warnings but Direct Volume Migration failed with rsync pod on source namespace going into error state. ( *BZ#2132978 ) 2.2.14.2. Known issues This release has the following known issues: Velero image cannot be overridden in the Migration Toolkit for Containers (MTC) operator In releases, it was not possible to override the velero image using the velero_image_fqin parameter in the MigrationController Custom Resource (CR). 
( BZ#2143389 ) When editing a MigHook in the UI, the page might fail to reload The UI might fail to reload when editing a hook if there is a network connection issue. After the network connection is restored, the page will fail to reload until the cache is cleared. ( BZ#2140208 ) 2.2.15. Migration Toolkit for Containers 1.7.4 release notes 2.2.15.1. Resolved issues There are no major resolved issues in this release. 2.2.15.2. Known issues Rollback missing out deletion of some resources from the target cluster On performing the roll back of an application from the Migration Toolkit for Containers (MTC) UI, some resources are not being deleted from the target cluster and the roll back is showing a status as successfully completed. ( BZ#2126880 ) 2.2.16. Migration Toolkit for Containers 1.7.3 release notes 2.2.16.1. Resolved issues This release has the following major resolved issues: Correct DNS validation for destination namespace In releases, the MigPlan could not be validated if the destination namespace started with a non-alphabetic character. ( BZ#2102231 ) Deselecting all PVCs from UI still results in an attempted PVC transfer In releases, while doing a full migration, unselecting the persistent volume claims (PVCs) would not skip selecting the PVCs and still try to migrate them. ( BZ#2106073 ) Incorrect DNS validation for destination namespace In releases, MigPlan could not be validated because the destination namespace started with a non-alphabetic character. ( BZ#2102231 ) 2.2.16.2. Known issues There are no known issues in this release. 2.2.17. Migration Toolkit for Containers 1.7.2 release notes 2.2.17.1. Resolved issues This release has the following major resolved issues: MTC UI does not display logs correctly In releases, the Migration Toolkit for Containers (MTC) UI did not display logs correctly. ( BZ#2062266 ) StorageClass conversion plan adding migstorage reference in migplan In releases, StorageClass conversion plans had a migstorage reference even though it was not being used. ( BZ#2078459 ) Velero pod log missing from downloaded logs In releases, when downloading a compressed (.zip) folder for all logs, the velero pod was missing. ( BZ#2076599 ) Velero pod log missing from UI drop down In releases, after a migration was performed, the velero pod log was not included in the logs provided in the dropdown list. ( BZ#2076593 ) Rsync options logs not visible in log-reader pod In releases, when trying to set any valid or invalid rsync options in the migrationcontroller , the log-reader was not showing any logs regarding the invalid options or about the rsync command being used. ( BZ#2079252 ) Default CPU requests on Velero/Restic are too demanding and fail in certain environments In releases, the default CPU requests on Velero/Restic were too demanding and fail in certain environments. Default CPU requests for Velero and Restic Pods are set to 500m. These values were high. ( BZ#2088022 ) 2.2.17.2. Known issues This release has the following known issues: Updating the replication repository to a different storage provider type is not respected by the UI After updating the replication repository to a different type and clicking Update Repository , it shows connection successful, but the UI is not updated with the correct details. When clicking on the Edit button again, it still shows the old replication repository information. Furthermore, when trying to update the replication repository again, it still shows the old replication details. 
When selecting the new repository, it also shows all the information you entered previously and the Update repository is not enabled, as if there are no changes to be submitted. ( BZ#2102020 ) Migrations fails because the backup is not found Migration fails at the restore stage because of initial backup has not been found. ( BZ#2104874 ) Update Cluster button is not enabled when updating Azure resource group When updating the remote cluster, selecting the Azure resource group checkbox, and adding a resource group does not enable the Update cluster option. ( BZ#2098594 ) Error pop-up in UI on deleting migstorage resource When creating a backupStorage credential secret in OpenShift Container Platform, if the migstorage is removed from the UI, a 404 error is returned and the underlying secret is not removed. ( BZ#2100828 ) Miganalytic resource displaying resource count as 0 in UI After creating a migplan from backend, the Miganalytic resource displays the resource count as 0 in UI. ( BZ#2102139 ) Registry validation fails when two trailing slashes are added to the Exposed route host to image registry After adding two trailing slashes, meaning // , to the exposed registry route, the MigCluster resource is showing the status as connected . When creating a migplan from backend with DIM, the plans move to the unready status. ( BZ#2104864 ) Service Account Token not visible while editing source cluster When editing the source cluster that has been added and is in Connected state, in the UI, the service account token is not visible in the field. To save the wizard, you have to fetch the token again and provide details inside the field. ( BZ#2097668 ) 2.2.18. Migration Toolkit for Containers 1.7.1 release notes 2.2.18.1. Resolved issues There are no major resolved issues in this release. 2.2.18.2. Known issues This release has the following known issues: Incorrect DNS validation for destination namespace MigPlan cannot be validated because the destination namespace starts with a non-alphabetic character. ( BZ#2102231 ) Cloud propagation phase in migration controller is not functioning due to missing labels on Velero pods The Cloud propagation phase in the migration controller is not functioning due to missing labels on Velero pods. The EnsureCloudSecretPropagated phase in the migration controller waits until replication repository secrets are propagated on both sides. As this label is missing on Velero pods, the phase is not functioning as expected. ( BZ#2088026 ) Default CPU requests on Velero/Restic are too demanding when making scheduling fail in certain environments Default CPU requests on Velero/Restic are too demanding when making scheduling fail in certain environments. Default CPU requests for Velero and Restic Pods are set to 500m. These values are high. The resources can be configured in DPA using the podConfig field for Velero and Restic. Migration operator should set CPU requests to a lower value, such as 100m, so that Velero and Restic pods can be scheduled in resource constrained environments Migration Toolkit for Containers (MTC) often operates in. ( BZ#2088022 ) Warning is displayed on persistentVolumes page after editing storage class conversion plan A warning is displayed on the persistentVolumes page after editing the storage class conversion plan. When editing the existing migration plan, a warning is displayed on the UI At least one PVC must be selected for Storage Class Conversion . 
( BZ#2079549 ) Velero pod log missing from downloaded logs When downloading a compressed (.zip) folder for all logs, the velero pod is missing. ( BZ#2076599 ) Velero pod log missing from UI drop down After a migration is performed, the velero pod log is not included in the logs provided in the dropdown list. ( BZ#2076593 ) 2.2.19. Migration Toolkit for Containers 1.7.0 release notes 2.2.19.1. New features and enhancements This release has the following new features and enhancements: The Migration Toolkit for Containers (MTC) Operator now depends upon the OpenShift API for Data Protection (OADP) Operator. When you install the MTC Operator, the Operator Lifecycle Manager (OLM) automatically installs the OADP Operator in the same namespace. You can migrate from a source cluster that is behind a firewall to a cloud-based destination cluster by establishing a network tunnel between the two clusters by using the crane tunnel-api command. Converting storage classes in the MTC web console: You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. 2.2.19.2. Known issues This release has the following known issues: MigPlan custom resource does not display a warning when an AWS gp2 PVC has no available space. ( BZ#1963927 ) Direct and indirect data transfers do not work if the destination storage is a PV that is dynamically provisioned by the AWS Elastic File System (EFS). This is due to limitations of the AWS EFS Container Storage Interface (CSI) driver. ( BZ#2085097 ) Block storage for IBM Cloud must be in the same availability zone. See the IBM FAQ for block storage for virtual private cloud . MTC 1.7.6 cannot migrate cron jobs from source clusters that support v1beta1 cron jobs to clusters of OpenShift Container Platform 4.12 and later, which do not support v1beta1 cron jobs. ( BZ#2149119 ) 2.3. Migration Toolkit for Containers 1.6 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. You can migrate from OpenShift Container Platform 3 to 4.14 and between OpenShift Container Platform 4 clusters. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.3.1. Migration Toolkit for Containers 1.6 release notes 2.3.1.1. New features and enhancements This release has the following new features and enhancements: State migration: You can perform repeatable, state-only migrations by selecting specific persistent volume claims (PVCs). "New operator version available" notification: The Clusters page of the MTC web console displays a notification when a new Migration Toolkit for Containers Operator is available. 2.3.1.2. Deprecated features The following features are deprecated: MTC version 1.4 is no longer supported. 2.3.1.3. Known issues This release has the following known issues: On OpenShift Container Platform 3.10, the MigrationController pod takes too long to restart. The Bugzilla report contains a workaround. ( BZ#1986796 ) Stage pods fail during direct volume migration from a classic OpenShift Container Platform source cluster on IBM Cloud. 
The IBM block storage plugin does not allow the same volume to be mounted on multiple pods of the same node. As a result, the PVCs cannot be mounted on the Rsync pods and on the application pods simultaneously. To resolve this issue, stop the application pods before migration. ( BZ#1887526 ) MigPlan custom resource does not display a warning when an AWS gp2 PVC has no available space. ( BZ#1963927 ) Block storage for IBM Cloud must be in the same availability zone. See the IBM FAQ for block storage for virtual private cloud . 2.4. Migration Toolkit for Containers 1.5 release notes The release notes for Migration Toolkit for Containers (MTC) describe new features and enhancements, deprecated features, and known issues. The MTC enables you to migrate application workloads between OpenShift Container Platform clusters at the granularity of a namespace. You can migrate from OpenShift Container Platform 3 to 4.14 and between OpenShift Container Platform 4 clusters. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. For information on the support policy for MTC, see OpenShift Application and Cluster Migration Solutions , part of the Red Hat OpenShift Container Platform Life Cycle Policy . 2.4.1. Migration Toolkit for Containers 1.5 release notes 2.4.1.1. New features and enhancements This release has the following new features and enhancements: The Migration resource tree on the Migration details page of the web console has been enhanced with additional resources, Kubernetes events, and live status information for monitoring and debugging migrations. The web console can support hundreds of migration plans. A source namespace can be mapped to a different target namespace in a migration plan. Previously, the source namespace was mapped to a target namespace with the same name. Hook phases with status information are displayed in the web console during a migration. The number of Rsync retry attempts is displayed in the web console during direct volume migration. Persistent volume (PV) resizing can be enabled for direct volume migration to ensure that the target cluster does not run out of disk space. The threshold that triggers PV resizing is configurable. Previously, PV resizing occurred when the disk usage exceeded 97%. Velero has been updated to version 1.6, which provides numerous fixes and enhancements. Cached Kubernetes clients can be enabled to provide improved performance. 2.4.1.2. Deprecated features The following features are deprecated: MTC versions 1.2 and 1.3 are no longer supported. The procedure for updating deprecated APIs has been removed from the troubleshooting section of the documentation because the oc convert command is deprecated. 2.4.1.3. Known issues This release has the following known issues: Microsoft Azure storage is unavailable if you create more than 400 migration plans. The MigStorage custom resource displays the following message: The request is being throttled as the limit has been reached for operation type . ( BZ#1977226 ) If a migration fails, the migration plan does not retain custom persistent volume (PV) settings for quiesced pods. You must manually roll back the migration, delete the migration plan, and create a new migration plan with your PV settings. ( BZ#1784899 ) PV resizing does not work as expected for AWS gp2 storage unless the pv_resizing_threshold is 42% or greater. 
( BZ#1973148 ) PV resizing does not work with OpenShift Container Platform 3.7 and 3.9 source clusters in the following scenarios: The application was installed after MTC was installed. An application pod was rescheduled on a different node after MTC was installed. OpenShift Container Platform 3.7 and 3.9 do not support the Mount Propagation feature that enables Velero to mount PVs automatically in the Restic pod. The MigAnalytic custom resource (CR) fails to collect PV data from the Restic pod and reports the resources as 0 . The MigPlan CR displays a status similar to the following: Example output status: conditions: - category: Warn lastTransitionTime: 2021-07-15T04:11:44Z message: Failed gathering extended PV usage information for PVs [nginx-logs nginx-html], please see MigAnalytic openshift-migration/ocp-24706-basicvolmig-migplan-1626319591-szwd6 for details reason: FailedRunningDf status: "True" type: ExtendedPVAnalysisFailed To enable PV resizing, you can manually restart the Restic daemonset on the source cluster or restart the Restic pods on the same nodes as the application. If you do not restart Restic, you can run the direct volume migration without PV resizing. ( BZ#1982729 ) 2.4.1.4. Technical changes This release has the following technical changes: The legacy Migration Toolkit for Containers Operator version 1.5.1 is installed manually on OpenShift Container Platform versions 3.7 to 4.5. The Migration Toolkit for Containers Operator version 1.5.1 is installed on OpenShift Container Platform versions 4.6 and later by using the Operator Lifecycle Manager.
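As a rough sketch of the Restic restart workaround mentioned in the known issues above, the following commands recreate the Restic pods on the source cluster so that PV resizing can work. The openshift-migration namespace and the name=restic pod label are assumptions based on a default MTC/Velero deployment and may differ in your environment.
# Assumed namespace and pod label for a default MTC/Velero deployment
oc -n openshift-migration delete pod -l name=restic
# The Restic daemon set recreates the deleted pods; alternatively, delete only the
# individual Restic pods that run on the same nodes as the application pods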
|
[
"status: conditions: - category: Warn lastTransitionTime: 2021-07-15T04:11:44Z message: Failed gathering extended PV usage information for PVs [nginx-logs nginx-html], please see MigAnalytic openshift-migration/ocp-24706-basicvolmig-migplan-1626319591-szwd6 for details reason: FailedRunningDf status: \"True\" type: ExtendedPVAnalysisFailed"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/migration_toolkit_for_containers/mtc-release-notes-1
|
Chapter 11. Managing bare-metal hosts
|
Chapter 11. Managing bare-metal hosts When you install OpenShift Container Platform on a bare-metal cluster, you can provision and manage bare-metal nodes by using machine and machineset custom resources (CRs) for bare-metal hosts that exist in the cluster. 11.1. About bare metal hosts and nodes To provision a Red Hat Enterprise Linux CoreOS (RHCOS) bare metal host as a node in your cluster, first create a MachineSet custom resource (CR) object that corresponds to the bare metal host hardware. Bare metal host compute machine sets describe infrastructure components specific to your configuration. You apply specific Kubernetes labels to these compute machine sets and then update the infrastructure components to run on only those machines. Machine CR's are created automatically when you scale up the relevant MachineSet containing a metal3.io/autoscale-to-hosts annotation. OpenShift Container Platform uses Machine CR's to provision the bare metal node that corresponds to the host as specified in the MachineSet CR. 11.2. Maintaining bare metal hosts You can maintain the details of the bare metal hosts in your cluster from the OpenShift Container Platform web console. Navigate to Compute Bare Metal Hosts , and select a task from the Actions drop down menu. Here you can manage items such as BMC details, boot MAC address for the host, enable power management, and so on. You can also review the details of the network interfaces and drives for the host. You can move a bare metal host into maintenance mode. When you move a host into maintenance mode, the scheduler moves all managed workloads off the corresponding bare metal node. No new workloads are scheduled while in maintenance mode. You can deprovision a bare metal host in the web console. Deprovisioning a host does the following actions: Annotates the bare metal host CR with cluster.k8s.io/delete-machine: true Scales down the related compute machine set Note Powering off the host without first moving the daemon set and unmanaged static pods to another node can cause service disruption and loss of data. Additional resources Adding compute machines to bare metal 11.2.1. Adding a bare metal host to the cluster using the web console You can add bare metal hosts to the cluster in the web console. Prerequisites Install an RHCOS cluster on bare metal. Log in as a user with cluster-admin privileges. Procedure In the web console, navigate to Compute Bare Metal Hosts . Select Add Host New with Dialog . Specify a unique name for the new bare metal host. Set the Boot MAC address . Set the Baseboard Management Console (BMC) Address . Enter the user credentials for the host's baseboard management controller (BMC). Select to power on the host after creation, and select Create . Scale up the number of replicas to match the number of available bare metal hosts. Navigate to Compute MachineSets , and increase the number of machine replicas in the cluster by selecting Edit Machine count from the Actions drop-down menu. Note You can also manage the number of bare metal nodes using the oc scale command and the appropriate bare metal compute machine set. 11.2.2. Adding a bare metal host to the cluster using YAML in the web console You can add bare metal hosts to the cluster in the web console using a YAML file that describes the bare metal host. Prerequisites Install a RHCOS compute machine on bare metal infrastructure for use in the cluster. Log in as a user with cluster-admin privileges. Create a Secret CR for the bare metal host. 
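The Secret CR named in the last prerequisite holds the BMC credentials that the bare metal host references. The following is a minimal sketch; the secret name and credential values are placeholders, and the openshift-machine-api namespace is assumed because that is where the machine API resources normally live.
# Hypothetical secret name and credentials; the name must match the credentialsName
# that you set in the BareMetalHost CR
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: <secret_credentials_name>
  namespace: openshift-machine-api
type: Opaque
stringData:
  username: <bmc_username>
  password: <bmc_password>
EOF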
Procedure In the web console, navigate to Compute Bare Metal Hosts . Select Add Host New from YAML . Copy and paste the below YAML, modifying the relevant fields with the details of your host: apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bare_metal_host_name> spec: online: true bmc: address: <bmc_address> credentialsName: <secret_credentials_name> 1 disableCertificateVerification: True 2 bootMACAddress: <host_boot_mac_address> 1 credentialsName must reference a valid Secret CR. The baremetal-operator cannot manage the bare metal host without a valid Secret referenced in the credentialsName . For more information about secrets and how to create them, see Understanding secrets . 2 Setting disableCertificateVerification to true disables TLS host validation between the cluster and the baseboard management controller (BMC). Select Create to save the YAML and create the new bare metal host. Scale up the number of replicas to match the number of available bare metal hosts. Navigate to Compute MachineSets , and increase the number of machines in the cluster by selecting Edit Machine count from the Actions drop-down menu. Note You can also manage the number of bare metal nodes using the oc scale command and the appropriate bare metal compute machine set. 11.2.3. Automatically scaling machines to the number of available bare metal hosts To automatically create the number of Machine objects that matches the number of available BareMetalHost objects, add a metal3.io/autoscale-to-hosts annotation to the MachineSet object. Prerequisites Install RHCOS bare metal compute machines for use in the cluster, and create corresponding BareMetalHost objects. Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Annotate the compute machine set that you want to configure for automatic scaling by adding the metal3.io/autoscale-to-hosts annotation. Replace <machineset> with the name of the compute machine set. USD oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>' Wait for the new scaled machines to start. Note When you use a BareMetalHost object to create a machine in the cluster and labels or selectors are subsequently changed on the BareMetalHost , the BareMetalHost object continues be counted against the MachineSet that the Machine object was created from. 11.2.4. Removing bare metal hosts from the provisioner node In certain circumstances, you might want to temporarily remove bare metal hosts from the provisioner node. For example, during provisioning when a bare metal host reboot is triggered by using the OpenShift Container Platform administration console or as a result of a Machine Config Pool update, OpenShift Container Platform logs into the integrated Dell Remote Access Controller (iDrac) and issues a delete of the job queue. To prevent the management of the number of Machine objects that matches the number of available BareMetalHost objects, add a baremetalhost.metal3.io/detached annotation to the MachineSet object. Note This annotation has an effect for only BareMetalHost objects that are in either Provisioned , ExternallyProvisioned or Ready/Available state. Prerequisites Install RHCOS bare metal compute machines for use in the cluster and create corresponding BareMetalHost objects. Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Annotate the compute machine set that you want to remove from the provisioner node by adding the baremetalhost.metal3.io/detached annotation. USD oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached' Wait for the new machines to start. Note When you use a BareMetalHost object to create a machine in the cluster and labels or selectors are subsequently changed on the BareMetalHost , the BareMetalHost object continues be counted against the MachineSet that the Machine object was created from. In the provisioning use case, remove the annotation after the reboot is complete by using the following command: USD oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached-' Additional resources Expanding the cluster MachineHealthChecks on bare metal 11.2.5. Powering off bare-metal hosts You can power off bare-metal cluster hosts in the web console or by applying a patch in the cluster by using the OpenShift CLI ( oc ). Before you power off a host, you should mark the node as unschedulable and drain all pods and workloads from the node. Prerequisites You have installed a RHCOS compute machine on bare-metal infrastructure for use in the cluster. You have logged in as a user with cluster-admin privileges. You have configured the host to be managed and have added BMC credentials for the cluster host. You can add BMC credentials by applying a Secret custom resource (CR) in the cluster or by logging in to the web console and configuring the bare-metal host to be managed. Procedure In the web console, mark the node that you want to power off as unschedulable. Perform the following steps: Navigate to Nodes and select the node that you want to power off. Expand the Actions menu and select Mark as unschedulable . Manually delete or relocate running pods on the node by adjusting the pod deployments or scaling down workloads on the node to zero. Wait for the drain process to complete. Navigate to Compute Bare Metal Hosts . Expand the Options menu for the bare-metal host that you want to power off, and select Power Off . Select Immediate power off . Alternatively, you can patch the BareMetalHost resource for the host that you want to power off by using oc . Get the name of the managed bare-metal host. Run the following command: USD oc get baremetalhosts -n openshift-machine-api -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.provisioning.state}{"\n"}{end}' Example output master-0.example.com managed master-1.example.com managed master-2.example.com managed worker-0.example.com managed worker-1.example.com managed worker-2.example.com managed Mark the node as unschedulable: USD oc adm cordon <bare_metal_host> 1 1 <bare_metal_host> is the host that you want to shut down, for example, worker-2.example.com . Drain all pods on the node: USD oc adm drain <bare_metal_host> --force=true Pods that are backed by replication controllers are rescheduled to other available nodes in the cluster. Safely power off the bare-metal host. Run the following command: USD oc patch <bare_metal_host> --type json -p '[{"op": "replace", "path": "/spec/online", "value": false}]' After you power on the host, make the node schedulable for workloads. Run the following command: USD oc adm uncordon <bare_metal_host>
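Powering the host back on follows the same pattern as the power-off patch above, with spec.online set back to true. A minimal sketch, assuming the same host name and the default openshift-machine-api namespace used in the procedure:
# Power the bare-metal host back on by setting spec.online back to true
oc -n openshift-machine-api patch baremetalhost <bare_metal_host> --type json -p '[{"op": "replace", "path": "/spec/online", "value": true}]'
# When the node reports Ready, allow workloads to be scheduled on it again
oc adm uncordon <bare_metal_host>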
|
[
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bare_metal_host_name> spec: online: true bmc: address: <bmc_address> credentialsName: <secret_credentials_name> 1 disableCertificateVerification: True 2 bootMACAddress: <host_boot_mac_address>",
"oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>'",
"oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached'",
"oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached-'",
"oc get baremetalhosts -n openshift-machine-api -o jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.status.provisioning.state}{\"\\n\"}{end}'",
"master-0.example.com managed master-1.example.com managed master-2.example.com managed worker-0.example.com managed worker-1.example.com managed worker-2.example.com managed",
"oc adm cordon <bare_metal_host> 1",
"oc adm drain <bare_metal_host> --force=true",
"oc patch <bare_metal_host> --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/online\", \"value\": false}]'",
"oc adm uncordon <bare_metal_host>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/scalability_and_performance/managing-bare-metal-hosts
|
Chapter 18. The camel-jbossdatagrid-fuse Quickstart
|
Chapter 18. The camel-jbossdatagrid-fuse Quickstart This quickstart shows how to use the component described in Section 5.1, "The camel-jbossdatagrid Component" on JBoss Fuse to interact with JBoss Data Grid. This quickstart deploys two bundles, local_cache_producer and local_cache_consumer , on Fuse, one on each of the containers child1 and child2 respectively. Below is a description of each of the bundles: local_cache_producer : Scans a folder (/tmp/incoming) for incoming CSV files of the format "id, firstName, lastName, age". If a file is dropped with entries in the given format, each entry is read and transformed into a Person POJO and stored in the data grid. local_cache_consumer : Lets you query for a POJO using a RESTful interface and receive a JSON representation of the Person POJO stored in the data grid for the given key. The bundles reside in two different containers; the consumer is able to extract what the producer has put in because the same configuration is used in the infinispan.xml and jgroups.xml files. The infinispan.xml file defines a REPL (replicated) cache named camel-cache , and both the consumer and producer interact with this cache. 18.1. Quickstart Prerequisites The prerequisites for this quickstart are as follows: Java 7.0 (Java SDK 1.7) or better Maven 3.0 or better JBoss Fuse 6.2.0 or better Configure the Maven Repository. For details, see Chapter 3, Install and Use the Maven Repositories
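Once the quickstart is deployed, you can exercise the two bundles by dropping a CSV file into the watched folder and then querying the consumer's RESTful interface. The file layout follows the "id, firstName, lastName, age" format described above; the consumer URL below is purely a placeholder, because the actual host, port, and path depend on how the local_cache_consumer bundle is configured.
# Drop a CSV entry into the folder watched by local_cache_producer
echo "1,John,Doe,34" >> /tmp/incoming/people.csv
# Query the RESTful interface exposed by local_cache_consumer for key 1
# (placeholder URL; substitute the address configured in the consumer bundle)
curl http://<fuse_host>:<port>/<consumer_path>/1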
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/Camel-JBoss_Data_Grid_Quickstart
|
Chapter 2. Selecting a cluster installation method and preparing it for users
|
Chapter 2. Selecting a cluster installation method and preparing it for users Before you install OpenShift Container Platform, decide what kind of installation process to follow and verify that you have all of the required resources to prepare the cluster for users. 2.1. Selecting a cluster installation type Before you install an OpenShift Container Platform cluster, you need to select the best installation instructions to follow. Think about your answers to the following questions to select the best option. 2.1.1. Do you want to install and manage an OpenShift Container Platform cluster yourself? If you want to install and manage OpenShift Container Platform yourself, you can install it on the following platforms: Amazon Web Services (AWS) on 64-bit x86 instances Amazon Web Services (AWS) on 64-bit ARM instances Microsoft Azure on 64-bit x86 instances Microsoft Azure on 64-bit ARM instances Microsoft Azure Stack Hub Google Cloud Platform (GCP) on 64-bit x86 instances Google Cloud Platform (GCP) on 64-bit ARM instances Red Hat OpenStack Platform (RHOSP) IBM Cloud(R) IBM Z(R) or IBM(R) LinuxONE with z/VM IBM Z(R) or IBM(R) LinuxONE with Red Hat Enterprise Linux (RHEL) KVM IBM Z(R) or IBM(R) LinuxONE in an LPAR IBM Power(R) IBM Power(R) Virtual Server Nutanix VMware vSphere Bare metal or other platform agnostic infrastructure You can deploy an OpenShift Container Platform 4 cluster to both on-premise hardware and to cloud hosting services, but all of the machines in a cluster must be in the same data center or cloud hosting service. If you want to use OpenShift Container Platform but you do not want to manage the cluster yourself, you can choose from several managed service options. If you want a cluster that is fully managed by Red Hat, you can use OpenShift Dedicated . You can also use OpenShift as a managed service on Azure, AWS, IBM Cloud(R), or Google Cloud Platform. For more information about managed services, see the OpenShift Products page. If you install an OpenShift Container Platform cluster with a cloud virtual machine as a virtual bare metal, the corresponding cloud-based storage is not supported. 2.1.2. Have you used OpenShift Container Platform 3 and want to use OpenShift Container Platform 4? If you used OpenShift Container Platform 3 and want to try OpenShift Container Platform 4, you need to understand how different OpenShift Container Platform 4 is. OpenShift Container Platform 4 weaves the Operators that package, deploy, and manage Kubernetes applications and the operating system that the platform runs on, Red Hat Enterprise Linux CoreOS (RHCOS), together seamlessly. Instead of deploying machines and configuring their operating systems so that you can install OpenShift Container Platform on them, the RHCOS operating system is an integral part of the OpenShift Container Platform cluster. Deploying the operating system for the cluster machines is part of the installation process for OpenShift Container Platform. See Differences between OpenShift Container Platform 3 and 4 . Because you need to provision machines as part of the OpenShift Container Platform cluster installation process, you cannot upgrade an OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. Instead, you must create a new OpenShift Container Platform 4 cluster and migrate your OpenShift Container Platform 3 workloads to them. For more information about migrating, see Migrating from OpenShift Container Platform 3 to 4 overview . 
Because you must migrate to OpenShift Container Platform 4, you can use any type of production cluster installation process to create your new cluster. 2.1.3. Do you want to use existing components in your cluster? Because the operating system is integral to OpenShift Container Platform, it is easier to let the installation program for OpenShift Container Platform stand up all of the infrastructure. These are called installer provisioned infrastructure installations. In this type of installation, you can provide some existing infrastructure to the cluster, but the installation program deploys all of the machines that your cluster initially needs. You can deploy an installer-provisioned infrastructure cluster without specifying any customizations to the cluster or its underlying machines to AWS , Azure , Azure Stack Hub , GCP , Nutanix . If you need to perform basic configuration for your installer-provisioned infrastructure cluster, such as the instance type for the cluster machines, you can customize an installation for AWS , Azure , GCP , Nutanix . For installer-provisioned infrastructure installations, you can use an existing VPC in AWS , vNet in Azure , or VPC in GCP . You can also reuse part of your networking infrastructure so that your cluster in AWS , Azure , GCP can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. If you have existing accounts and credentials on these clouds, you can re-use them, but you might need to modify the accounts to have the required permissions to install OpenShift Container Platform clusters on them. You can use the installer-provisioned infrastructure method to create appropriate machine instances on your hardware for vSphere , and bare metal . Additionally, for vSphere , you can also customize additional network parameters during installation. For some installer-provisioned infrastructure installations, for example on the VMware vSphere and bare metal platforms, the external traffic that reaches the ingress virtual IP (VIP) is not balanced between the default IngressController replicas. For vSphere and bare metal installer-provisioned infrastructure installations where exceeding the baseline IngressController router performance is expected, you must configure an external load balancer. Configuring an external load balancer achieves the performance of multiple IngressController replicas. For more information about the baseline IngressController performance, see Baseline Ingress Controller (router) performance . For more information about configuring an external load balancer, see Configuring a user-managed load balancer . If you want to reuse extensive cloud infrastructure, you can complete a user-provisioned infrastructure installation. With these installations, you manually deploy the machines that your cluster requires during the installation process. If you perform a user-provisioned infrastructure installation on AWS , Azure , Azure Stack Hub , you can use the provided templates to help you stand up all of the required components. You can also reuse a shared VPC on GCP . Otherwise, you can use the provider-agnostic installation method to deploy a cluster into other clouds. You can also complete a user-provisioned infrastructure installation on your existing hardware. 
If you use RHOSP , IBM Z(R) or IBM(R) LinuxONE , IBM Z(R) and IBM(R) LinuxONE with RHEL KVM , IBM Z(R) and IBM(R) LinuxONE in an LPAR , IBM Power , or vSphere , use the specific installation instructions to deploy your cluster. If you use other supported hardware, follow the bare metal installation procedure. For some of these platforms, such as vSphere , and bare metal , you can also customize additional network parameters during installation. 2.1.4. Do you need extra security for your cluster? If you use a user-provisioned installation method, you can configure a proxy for your cluster. The instructions are included in each installation procedure. If you want to prevent your cluster on a public cloud from exposing endpoints externally, you can deploy a private cluster with installer-provisioned infrastructure on AWS , Azure , or GCP . If you need to install your cluster that has limited access to the internet, such as a disconnected or restricted network cluster, you can mirror the installation packages and install the cluster from them. Follow detailed instructions for user-provisioned infrastructure installations into restricted networks for AWS , GCP , IBM Z(R) or IBM(R) LinuxONE , IBM Z(R) or IBM(R) LinuxONE with RHEL KVM , IBM Z(R) or IBM(R) LinuxONE in an LPAR , IBM Power(R) , vSphere , or bare metal . You can also install a cluster into a restricted network using installer-provisioned infrastructure by following detailed instructions for AWS , GCP , IBM Cloud(R) , Nutanix , RHOSP , and vSphere . If you need to deploy your cluster to an AWS GovCloud region , AWS China region , or Azure government region , you can configure those custom regions during an installer-provisioned infrastructure installation. You can also configure the cluster machines to use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation during installation. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 2.2. Preparing your cluster for users after installation Some configuration is not required to install the cluster but recommended before your users access the cluster. You can customize the cluster itself by customizing the Operators that make up your cluster and integrate you cluster with other required systems, such as an identity provider. For a production cluster, you must configure the following integrations: Persistent storage An identity provider Monitoring core OpenShift Container Platform components 2.3. Preparing your cluster for workloads Depending on your workload needs, you might need to take extra steps before you begin deploying applications. For example, after you prepare infrastructure to support your application build strategy , you might need to make provisions for low-latency workloads or to protect sensitive workloads . You can also configure monitoring for application workloads. If you plan to run Windows workloads , you must enable hybrid networking with OVN-Kubernetes during the installation process; hybrid networking cannot be enabled after your cluster is installed. 2.4. Supported installation methods for different platforms You can perform different types of installations on different platforms. 
Note Not all installation options are supported for all platforms, as shown in the following tables. A checkmark indicates that the option is supported and links to the relevant section. Table 2.1. Installer-provisioned infrastructure options AWS (64-bit x86) AWS (64-bit ARM) Azure (64-bit x86) Azure (64-bit ARM) Azure Stack Hub GCP (64-bit x86) GCP (64-bit ARM) Nutanix RHOSP Bare metal (64-bit x86) Bare metal (64-bit ARM) vSphere IBM Cloud(R) IBM Z(R) IBM Power(R) IBM Power(R) Virtual Server Default [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Custom [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Network customization [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Restricted network [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Private clusters [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Existing virtual private networks [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Government regions [✓] [✓] Secret regions [✓] China regions [✓] Table 2.2. User-provisioned infrastructure options AWS (64-bit x86) AWS (64-bit ARM) Azure (64-bit x86) Azure (64-bit ARM) Azure Stack Hub GCP (64-bit x86) GCP (64-bit ARM) Nutanix RHOSP Bare metal (64-bit x86) Bare metal (64-bit ARM) vSphere IBM Cloud(R) IBM Z(R) IBM Z(R) with RHEL KVM IBM Power(R) Platform agnostic Custom [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Network customization [✓] [✓] [✓] Restricted network [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Shared VPC hosted outside of cluster project [✓] [✓]
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installation_overview/installing-preparing
|
Chapter 25. Viewing and Managing Log Files
|
Chapter 25. Viewing and Managing Log Files Log files are files that contain messages about the system, including the kernel, services, and applications running on it. There are different log files for different information. For example, there is a default system log file, a log file just for security messages, and a log file for cron tasks. Log files can be very useful when trying to troubleshoot a problem with the system such as trying to load a kernel driver or when looking for unauthorized login attempts to the system. This chapter discusses where to find log files, how to view log files, and what to look for in log files. Some log files are controlled by a daemon called rsyslogd . The rsyslogd daemon is an enhanced replacement for sysklogd , and provides extended filtering, encryption protected relaying of messages, various configuration options, input and output modules, support for transportation via the TCP or UDP protocols. Note that rsyslog is compatible with sysklogd . 25.1. Installing rsyslog Version 5 of rsyslog , provided in the rsyslog package, is installed by default in Red Hat Enterprise Linux 6. If required, to ensure that it is, issue the following command as root : 25.1.1. Upgrading to rsyslog version 7 Version 7 of rsyslog , provided in the rsyslog7 package, is available in Red Hat Enterprise Linux 6. It provides a number of enhancements over version 5, in particular higher processing performance and support for more plug-ins. If required, to change to version 7, make use of the yum shell utility as described below. Procedure 25.1. Upgrading to rsyslog 7 To upgrade from rsyslog version 5 to rsyslog version 7, it is necessary to install and remove the relevant packages simultaneously. This can be accomplished using the yum shell utility. Enter the following command as root to start the yum shell: The yum shell prompt appears. Enter the following commands to install the rsyslog7 package and remove the rsyslog package. Enter run to start the process: Enter y when prompted to start the upgrade. When the upgrade is completed, the yum shell prompt is displayed. Enter quit or exit to exit the shell: For information on using the new syntax provided by rsyslog version 7, see Section 25.4, "Using the New Configuration Format" .
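After the yum shell transaction completes, it can be worth confirming that the package swap succeeded before relying on the new daemon. A minimal sketch of one way to check, assuming the package names used in the procedure and that the service name rsyslog is unchanged by the upgrade:
# Confirm that rsyslog7 is installed and the old rsyslog package is gone
rpm -q rsyslog rsyslog7
# Report the daemon version; it should now be a 7.x release
rsyslogd -v
# Restart the service so the new daemon is running
service rsyslog restart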
|
[
"~]# yum install rsyslog Loaded plugins: product-id, refresh-packagekit, subscription-manager Package rsyslog-5.8.10-10.el6_6.i686 already installed and latest version Nothing to do",
"~]# yum shell Loaded plugins: product-id, refresh-packagekit, subscription-manager >",
"> install rsyslog7 > remove rsyslog",
"> run --> Running transaction check ---> Package rsyslog.i686 0:5.8.10-10.el6_6 will be erased ---> Package rsyslog7.i686 0:7.4.10-3.el6_6 will be installed --> Finished Dependency Resolution ============================================================================ Package Arch Version Repository Size ============================================================================ Installing: rsyslog7 i686 7.4.10-3.el6_6 rhel-6-workstation-rpms 1.3 M Removing: rsyslog i686 5.8.10-10.el6_6 @rhel-6-workstation-rpms 2.1 M Transaction Summary ============================================================================ Install 1 Package Remove 1 Package Total download size: 1.3 M Is this ok [y/d/N]: y",
"Finished Transaction > quit Leaving Shell ~]#"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-viewing_and_managing_log_files
|
19.5. chkconfig
|
19.5. chkconfig The chkconfig command can also be used to activate and deactivate services. The chkconfig --list command displays a list of system services and whether they are started ( on ) or stopped ( off ) in runlevels 0-6. At the end of the list is a section for the services managed by xinetd . If the chkconfig --list command is used to query a service managed by xinetd , it displays whether the xinetd service is enabled ( on ) or disabled ( off ). For example, the command chkconfig --list finger returns the following output: As shown, finger is enabled as an xinetd service. If xinetd is running, finger is enabled. If you use chkconfig --list to query a service in /etc/rc.d , the service's settings for each runlevel are displayed. For example, the command chkconfig --list httpd returns the following output: chkconfig can also be used to configure a service to be started (or not) in a specific runlevel. For example, to turn nscd off in runlevels 3, 4, and 5, use the following command: Warning Services managed by xinetd are immediately affected by chkconfig . For example, if xinetd is running, finger is disabled, and the command chkconfig finger on is executed, finger is immediately enabled without having to restart xinetd manually. Changes for other services do not take effect immediately after using chkconfig . You must stop or start the individual service with the command service daemon stop . In the example, replace daemon with the name of the service you want to stop; for example, httpd . Replace stop with start or restart to start or restart the service.
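As a brief illustration of the pattern described in the warning above, the following commands enable the httpd service in runlevels 2 through 5 and then start it immediately, because chkconfig alone does not start a non-xinetd service:
# Configure httpd to start automatically in runlevels 2, 3, 4, and 5
chkconfig --level 2345 httpd on
# chkconfig only adjusts the runlevel links; start the service now
service httpd start
# Verify the new runlevel settings
chkconfig --list httpd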
|
[
"finger on",
"httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off",
"chkconfig --level 345 nscd off"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/controlling_access_to_services-chkconfig
|
9.9. Managing Unique UID and GID Number Assignments
|
9.9. Managing Unique UID and GID Number Assignments An IdM server must generate random UID and GID values and simultaneously ensure that replicas never generate the same UID or GID value. The need for unique UID and GID numbers might even cross IdM domains, if a single organization has multiple disparate domains. 9.9.1. About ID Number Ranges The UID and GID numbers are divided into ranges . By keeping separate numeric ranges for individual servers and replicas, the chances are minimal that any numbers issued by one server or replica will duplicate those from another. Ranges are updated and shared intelligently between servers and replicas through the Dynamic Numeric Assignment (DNA) Plug-in, as part of the backend 389 Directory Server instance for the domain. The same range is used for user IDs ( uidNumber ) and group IDs ( gidNumber ). A user and a group may have the same ID, but since the ID is set in different attributes, there is no conflict. Using the same ID number for both a user and a group also allows an administrator to configure user private groups, where a unique system group is created for each user and the ID number is the same for both the user and the group. When a user is created interactively or without specifying a UID or GID number, then the user account is created with an ID number that is available in the server or replica range. This means that a user always has a unique number for its UID number and, if configured, for its private group. Important If a number is manually assigned to a user entry, the server does not validate that the uidNumber is unique. It will allow duplicate IDs; this is expected (though discouraged) behavior for POSIX entries. The same is true for group entries: a duplicate gidNumber can be manually assigned to the entry. If two entries are assigned the same ID number, only the first entry is returned in a search for that ID number. However, both entries will be returned in searches for other attributes or with ipa user-find --all . 9.9.2. About ID Range Assignments During Installation The IdM administrator can initially define a range during server installation, using the --idstart and --idmax options with ipa-server-install . These options are not required, so the setup script can assign random ranges during installation. If no range is set manually when the first IdM server is installed, a range of 200,000 IDs is randomly selected. There are 10,000 possible ranges. Selecting a random range from that number provides a high probability of non-conflicting IDs if two separate IdM domains are ever merged in the future. With a single IdM server, IDs are assigned to entries in order through the range. With replicas, the initial server ID range is split and distributed. When a replica is installed, it is configured with an invalid range. It also has a directory entry (that is shared among replicas) that instructs the replica where it can request a valid range. When the replica starts, or as its current range is depleted so that less than 100 IDs are available, it can contact one of the available servers for a new range allotment. A special extended operation splits the range in two, so that the original server and the replica each have half of the available range. 9.9.3. A Note on Conflicting ID Ranges It is possible for an administrator to define an ID number range using the min_id and max_id options in the sssd.conf file. The default min_id value is 1 . 
However, Red Hat recommends to set this value to 1000 in order to avoid conflicts with UID and GID numbers that are reserved for system use. 9.9.4. Adding New Ranges If the range for the entire domain is close to depletion, a new range can be manually selected and assigned to one of the master servers. All replicas then request ID ranges from the master as necessary. The changes to the range are done by editing the 389 Directory Server configuration to change the DNA Plug-in instance. The range is defined in the dnaNextRange parameter. For example: Note This command only adds the specified range of values; it does not check that the values in that range are actually available. This check is performed when an attempt is made to allocate those values. If a range is added that contains mostly values that were already allocated, the system will cycle through the entire range searching for unallocated values, and then the operation ultimately fails if none are available. 9.9.5. Repairing Changed UID and GID Numbers When a user is created, the user is automatically assigned a user ID number and a group ID number. When the user logs into an IdM system or service, SSSD on that system caches that username with the associated UID/GID numbers. The UID number is then used as the identifying key for the user. If a user with the same name but a different UID attempts to log into the system, then SSSD treats it as two different users with a name collision. What this means is that SSSD does not recognize UID number changes. It interprets it as a different and new user, not an existing user with a different UID number. If an existing user changes the UID number, that user is prevented from logging into SSSD and associated services and domains. This also has an impact on any client applications which use SSSD for identity information; the user with the conflict will not be found or accessible to those applications. Important UID/GID changes are not supported in Identity Management or in SSSD. If a user for some reason has a changed UID/GID number, then the SSSD cache must be cleared for that user before that user can log in again. For example: [root@server ~]# sss_cache -u jsmith
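The min_id and max_id options described in Section 9.9.3 are set per domain in the sssd.conf file. The following is a minimal sketch that applies the recommended lower bound of 1000; the domain name and the max_id value are illustrative only.
# Excerpt from /etc/sssd/sssd.conf (domain name and upper bound are placeholders)
[domain/example.com]
min_id = 1000
max_id = 199999
After changing sssd.conf, restart SSSD, for example with service sssd restart, so that the new range limits take effect.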
|
[
"ldapmodify -x -D \"cn=Directory Manager\" -W -h server.example.com -p 389 Enter LDAP Password: ******* dn: cn=POSIX IDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config changetype: modify add: dnaNextRange dnaNextRange: 123400000-123500000",
"sss_cache -u jsmith"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/managing-unique_uid_and_gid_attributes
|
Chapter 16. Usability Analytics and Data Collection
|
Chapter 16. Usability Analytics and Data Collection Usability data collection is included with automation controller to collect data to better understand how automation controller users interact with it. Only users installing a trial of or a fresh installation of are opted-in for this data collection. Automation controller collects user data automatically to help improve the product. For information on setting up Automation Analytics, see Configuring Automation Analytics . 16.1. Automation Analytics When you imported your license for the first time, you were automatically opted in for the collection of data that powers Automation Analytics, a cloud service that is part of the Ansible Automation Platform subscription. Important For opt-in of Automation Analytics to have any effect, your instance of automation controller must be running on Red Hat Enterprise Linux. As with Red Hat Insights, Automation Analytics is built to collect the minimum amount of data needed. No credential secrets, personal data, automation variables, or task output is gathered. When you imported your license for the first time, you were automatically opted in to Automation Analytics. To configure or disable this feature, see Configuring Automation Analytics . By default, the data is collected every four hours. When you enable this feature, data is collected up to a month in arrears (or until the collection). You can turn off this data collection at any time in the Miscellaneous System settings of the System configuration window. This setting can also be enabled through the API by specifying INSIGHTS_TRACKING_STATE = true in either of these endpoints: api/v2/settings/all api/v2/settings/system The Automation Analytics generated from this data collection can be found on the Red Hat Cloud Services portal. Clusters data is the default view. This graph represents the number of job runs across all automation controller clusters over a period of time. The example shows a span of a week in a stacked bar-style chart that is organized by the number of jobs that ran successfully (in green) and jobs that failed (in red). Alternatively, you can select a single cluster to view its job status information. This multi-line chart represents the number of job runs for a single automation controller cluster for a specified period of time. The preceding example shows a span of a week, organized by the number of successfully running jobs (in green) and jobs that failed (in red). You can specify the number of successful and failed job runs for a selected cluster over a span of one week, two weeks, and monthly increments. On the clouds navigation panel, select Organization Statistics to view information for the following: Use by organization Job runs by organization Organization status Note The organization statistics page will be deprecated in a future release. 16.1.1. Use by organization The following chart represents the number of tasks run inside all jobs by a particular organization. 16.1.2. Job runs by organization This chart represents automation controller use across all automation controller clusters by organization, calculated by the number of jobs run by that organization. 16.1.3. Organization status This bar chart represents automation controller use by organization and date, which is calculated by the number of jobs run by that organization on a particular date. Alternatively, you can specify to show the number of job runs per organization in one week, two weeks, and monthly increments. 16.2. 
Details of data collection Automation Analytics collects the following classes of data from automation controller: Basic configuration, such as which features are enabled, and what operating system is being used Topology and status of the automation controller environment and hosts, including capacity and health Counts of automation resources: organizations, teams, and users inventories and hosts credentials (indexed by type) projects (indexed by type) templates schedules active sessions running and pending jobs Job execution details (start time, finish time, launch type, and success) Automation task details (success, host id, playbook/role, task name, and module used) You can use awx-manage gather_analytics (without --ship ) to inspect the data that automation controller sends, so that you can satisfy your data collection concerns. This creates a tarball that contains the analytics data that is sent to Red Hat. This file contains a number of JSON and CSV files. Each file contains a different set of analytics data. manifest.json config.json instance_info.json counts.json org_counts.json cred_type_counts.json inventory_counts.json projects_by_scm_type.json query_info.json job_counts.json job_instance_counts.json unified_job_template_table.csv unified_jobs_table.csv workflow_job_template_node_table.csv workflow_job_node_table.csv events_table.csv 16.2.1. manifest.json manifest.json is the manifest of the analytics data. It describes each file included in the collection, and what version of the schema for that file is included. The following is an example manifest.json file: "config.json": "1.1", "counts.json": "1.0", "cred_type_counts.json": "1.0", "events_table.csv": "1.1", "instance_info.json": "1.0", "inventory_counts.json": "1.2", "job_counts.json": "1.0", "job_instance_counts.json": "1.0", "org_counts.json": "1.0", "projects_by_scm_type.json": "1.0", "query_info.json": "1.0", "unified_job_template_table.csv": "1.0", "unified_jobs_table.csv": "1.0", "workflow_job_node_table.csv": "1.0", "workflow_job_template_node_table.csv": "1.0" } 16.2.2. config.json The config.json file contains a subset of the configuration endpoint /api/v2/config from the cluster. An example config.json is: { "ansible_version": "2.9.1", "authentication_backends": [ "social_core.backends.azuread.AzureADOAuth2", "django.contrib.auth.backends.ModelBackend" ], "external_logger_enabled": true, "external_logger_type": "splunk", "free_instances": 1234, "install_uuid": "d3d497f7-9d07-43ab-b8de-9d5cc9752b7c", "instance_uuid": "bed08c6b-19cc-4a49-bc9e-82c33936e91b", "license_expiry": 34937373, "license_type": "enterprise", "logging_aggregators": [ "awx", "activity_stream", "job_events", "system_tracking" ], "pendo_tracking": "detailed", "platform": { "dist": [ "redhat", "7.4", "Maipo" ], "release": "3.10.0-693.el7.x86_64", "system": "Linux", "type": "traditional" }, "total_licensed_instances": 2500, "controller_url_base": "https://ansible.rhdemo.io", "controller_version": "3.6.3" } Which includes the following fields: ansible_version : The system Ansible version on the host authentication_backends : The user authentication backends that are available. For more information, see Configuring an authentication type . external_logger_enabled : Whether external logging is enabled external_logger_type : What logging backend is in use if enabled. For more information, see Logging and aggregation . logging_aggregators : What logging categories are sent to external logging. For more information, see Logging and aggregation . 
free_instances : How many hosts are available in the license. A value of zero means the cluster is fully consuming its license. install_uuid : A UUID for the installation (identical for all cluster nodes) instance_uuid : A UUID for the instance (different for each cluster node) license_expiry : Time to expiry of the license, in seconds license_type : The type of the license (should be 'enterprise' for most cases) pendo_tracking : State of usability_data_collection platform : The operating system the cluster is running on total_licensed_instances : The total number of hosts in the license controller_url_base : The base URL for the cluster used by clients (shown in Automation Analytics) controller_version : Version of the software on the cluster 16.2.3. instance_info.json The instance_info.json file contains detailed information on the instances that make up the cluster, organized by instance UUID. The following is an example instance_info.json file: { "bed08c6b-19cc-4a49-bc9e-82c33936e91b": { "capacity": 57, "cpu": 2, "enabled": true, "last_isolated_check": "2019-08-15T14:48:58.553005+00:00", "managed_by_policy": true, "memory": 8201400320, "uuid": "bed08c6b-19cc-4a49-bc9e-82c33936e91b", "version": "3.6.3" } "c0a2a215-0e33-419a-92f5-e3a0f59bfaee": { "capacity": 57, "cpu": 2, "enabled": true, "last_isolated_check": "2019-08-15T14:48:58.553005+00:00", "managed_by_policy": true, "memory": 8201400320, "uuid": "c0a2a215-0e33-419a-92f5-e3a0f59bfaee", "version": "3.6.3" } } Which includes the following fields: capacity : The capacity of the instance for executing tasks. cpu : Processor cores for the instance memory : Memory for the instance enabled : Whether the instance is enabled and accepting tasks managed_by_policy : Whether the instance's membership in instance groups is managed by policy, or manually managed version : Version of the software on the instance 16.2.4. counts.json The counts.json file contains the total number of objects for each relevant category in a cluster. The following is an example counts.json file: { "active_anonymous_sessions": 1, "active_host_count": 682, "active_sessions": 2, "active_user_sessions": 1, "credential": 38, "custom_inventory_script": 2, "custom_virtualenvs": 4, "host": 697, "inventories": { "normal": 20, "smart": 1 }, "inventory": 21, "job_template": 78, "notification_template": 5, "organization": 10, "pending_jobs": 0, "project": 20, "running_jobs": 0, "schedule": 16, "team": 5, "unified_job": 7073, "user": 28, "workflow_job_template": 15 } Each entry in this file is for the corresponding API objects in /api/v2 , with the exception of the active session counts. 16.2.5. org_counts.json The org_counts.json file contains information on each organization in the cluster, and the number of users and teams associated with that organization. The following is an example org_counts.json file: { "1": { "name": "Operations", "teams": 5, "users": 17 }, "2": { "name": "Development", "teams": 27, "users": 154 }, "3": { "name": "Networking", "teams": 3, "users": 28 } } 16.2.6. cred_type_counts.json The cred_type_counts.json file contains information on the different credential types in the cluster, and how many credentials exist for each type. 
The following is an example cred_type_counts.json file: { "1": { "credential_count": 15, "managed_by_controller": true, "name": "Machine" }, "2": { "credential_count": 2, "managed_by_controller": true, "name": "Source Control" }, "3": { "credential_count": 3, "managed_by_controller": true, "name": "Vault" }, "4": { "credential_count": 0, "managed_by_controller": true, "name": "Network" }, "5": { "credential_count": 6, "managed_by_controller": true, "name": "Amazon Web Services" }, "6": { "credential_count": 0, "managed_by_controller": true, "name": "OpenStack" }, 16.2.7. inventory_counts.json The inventory_counts.json file contains information on the different inventories in the cluster. The following is an example inventory_counts.json file: { "1": { "hosts": 211, "kind": "", "name": "AWS Inventory", "source_list": [ { "name": "AWS", "num_hosts": 211, "source": "ec2" } ], "sources": 1 }, "2": { "hosts": 15, "kind": "", "name": "Manual inventory", "source_list": [], "sources": 0 }, "3": { "hosts": 25, "kind": "", "name": "SCM inventory - test repo", "source_list": [ { "name": "Git source", "num_hosts": 25, "source": "scm" } ], "sources": 1 } "4": { "num_hosts": 5, "kind": "smart", "name": "Filtered AWS inventory", "source_list": [], "sources": 0 } } 16.2.8. projects_by_scm_type.json The projects_by_scm_type.json file provides a breakdown of all projects in the cluster, by source control type. The following is an example projects_by_scm_type.json file: { "git": 27, "hg": 0, "insights": 1, "manual": 0, "svn": 0 } 16.2.9. query_info.json The query_info.json file provides details on when and how the data collection happened. The following is an example query_info.json file: { "collection_type": "manual", "current_time": "2019-11-22 20:10:27.751267+00:00", "last_run": "2019-11-22 20:03:40.361225+00:00" } collection_type is one of manual or automatic . 16.2.10. job_counts.json The job_counts.json file provides details on the job history of the cluster, describing both how jobs were launched, and what their finishing status is. The following is an example job_counts.json file: "launch_type": { "dependency": 3628, "manual": 799, "relaunch": 6, "scheduled": 1286, "scm": 6, "workflow": 1348 }, "status": { "canceled": 7, "failed": 108, "successful": 6958 }, "total_jobs": 7073 } 16.2.11. job_instance_counts.json The job_instance_counts.json file provides the same detail as job_counts.json , broken down by instance. The following is an example job_instance_counts.json file: { "localhost": { "launch_type": { "dependency": 3628, "manual": 770, "relaunch": 3, "scheduled": 1009, "scm": 6, "workflow": 1336 }, "status": { "canceled": 2, "failed": 60, "successful": 6690 } } } Note that instances in this file are by hostname, not by UUID as they are in instance_info . 16.2.12. unified_job_template_table.csv The unified_job_template_table.csv file provides information on job templates in the system. Each line contains the following fields for the job template: id : Job template id. name : Job template name. polymorphic_ctype_id : The id of the type of template it is. model : The name of the polymorphic_ctype_id for the template. Examples include project , systemjobtemplate , jobtemplate , inventorysource , and workflowjobtemplate . created : When the template was created. modified : When the template was last updated. created_by_id : The userid that created the template. Blank if done by the system. modified_by_id : The userid that last modified the template. Blank if done by the system. 
current_job_id : Currently executing job id for the template, if any. last_job_id : Last execution of the job. last_job_run : Time of last execution of the job. last_job_failed : Whether the last_job_id failed. status : Status of last_job_id . next_job_run : Scheduled execution of the template, if any. next_schedule_id : Schedule id for next_job_run , if any. 16.2.13. unified_jobs_table.csv The unified_jobs_table.csv file provides information on jobs run by the system. Each line contains the following fields for a job: id : Job id. name : Job name (from the template). polymorphic_ctype_id : The id of the type of job it is. model : The name of the polymorphic_ctype_id for the job. Examples include job and workflow . organization_id : The organization ID for the job. organization_name : Name for the organization_id . created : When the job record was created. started : When the job started executing. finished : When the job finished. elapsed : Elapsed time for the job in seconds. unified_job_template_id : The template for this job. launch_type : One of manual , scheduled , relaunched , scm , workflow , or dependency . schedule_id : The id of the schedule that launched the job, if any. instance_group_id : The instance group that executed the job. execution_node : The node that executed the job (hostname, not UUID). controller_node : The automation controller node for the job, if run as an isolated job, or in a container group. cancel_flag : Whether the job was canceled. status : Status of the job. failed : Whether the job failed. job_explanation : Any additional detail for jobs that failed to execute properly. forks : Number of forks executed for this job. 16.2.14. workflow_job_template_node_table.csv The workflow_job_template_node_table.csv file provides information on the nodes defined in workflow job templates on the system. Each line contains the following fields for a workflow job template node: id : Node id. created : When the node was created. modified : When the node was last updated. unified_job_template_id : The id of the job template, project, inventory, or other parent resource for this node. workflow_job_template_id : The workflow job template that contains this node. inventory_id : The inventory used by this node. success_nodes : Nodes that are triggered after this node succeeds. failure_nodes : Nodes that are triggered after this node fails. always_nodes : Nodes that are always triggered after this node finishes. all_parents_must_converge : Whether this node requires all its parent conditions satisfied to start. 16.2.15. workflow_job_node_table.csv The workflow_job_node_table.csv file provides information on the jobs that have been executed as part of a workflow on the system. Each line contains the following fields for a job run as part of a workflow: id : Node id. created : When the node was created. modified : When the node was last updated. job_id : The job id for the job run for this node. unified_job_template_id : The id of the job template, project, inventory, or other parent resource for this node. workflow_job_template_id : The workflow job template that contains this node. inventory_id : The inventory used by this node. success_nodes : Nodes that are triggered after this node succeeds. failure_nodes : Nodes that are triggered after this node fails. always_nodes : Nodes that are always triggered after this node finishes. do_not_run : Nodes that were not run in the workflow due to their start conditions not being triggered.
all_parents_must_converge : Whether this node requires all its parent conditions satisfied to start. 16.2.16. events_table.csv The events_table.csv file provides information on all job events from all job runs in the system. Each line contains the following fields for a job event: id : Event id. uuid : Event UUID. created : When the event was created. parent_uuid : The parent UUID for this event, if any. event : The Ansible event type. task_action : The module associated with this event, if any (such as command or yum ). failed : Whether the event returned failed . changed : Whether the event returned changed . playbook : Playbook associated with the event. play : Play name from playbook. task : Task name from playbook. role : Role name from playbook. job_id : Id of the job this event is from. host_id : Id of the host this event is associated with, if any. host_name : Name of the host this event is associated with, if any. start : Start time of the task. end : End time of the task. duration : Duration of the task. warnings : Any warnings from the task or module. deprecations : Any deprecation warnings from the task or module. 16.3. Analytics Reports Reports for data collected are available through console.redhat.com . Other Automation Analytics data currently available and accessible through the platform UI include the following: Automation Calculator is a view-only version of the Automation Calculator utility that shows a report that represents (possible) savings to the subscriber. Host Metrics is an analytics report collected for host data such as, when they were first automated, when they were most recently automated, how many times they were automated, and how many times each host has been deleted. Subscription Usage reports the historical usage of your subscription. Subscription capacity and licenses consumed per month are displayed, with the ability to filter by the last year, two years, or three years.
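To review locally the files described in the preceding sections before anything is sent, you can generate the collection without shipping it and unpack the bundle. This is a minimal sketch; the tarball path and name shown, the extraction directory, and the use of python3 for pretty-printing are illustrative assumptions rather than fixed behavior:
# Generate the analytics bundle without sending it to Red Hat
awx-manage gather_analytics
# The command prints the path of the tarball it created; the name below is only an example
tar -tzf /tmp/awx_analytics_bundle.tar.gz
# Unpack the bundle and pretty-print the manifest to see the schema version of each file
mkdir -p /tmp/analytics && tar -xzf /tmp/awx_analytics_bundle.tar.gz -C /tmp/analytics
python3 -m json.tool /tmp/analytics/manifest.json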
|
[
"\"config.json\": \"1.1\", \"counts.json\": \"1.0\", \"cred_type_counts.json\": \"1.0\", \"events_table.csv\": \"1.1\", \"instance_info.json\": \"1.0\", \"inventory_counts.json\": \"1.2\", \"job_counts.json\": \"1.0\", \"job_instance_counts.json\": \"1.0\", \"org_counts.json\": \"1.0\", \"projects_by_scm_type.json\": \"1.0\", \"query_info.json\": \"1.0\", \"unified_job_template_table.csv\": \"1.0\", \"unified_jobs_table.csv\": \"1.0\", \"workflow_job_node_table.csv\": \"1.0\", \"workflow_job_template_node_table.csv\": \"1.0\" }",
"{ \"ansible_version\": \"2.9.1\", \"authentication_backends\": [ \"social_core.backends.azuread.AzureADOAuth2\", \"django.contrib.auth.backends.ModelBackend\" ], \"external_logger_enabled\": true, \"external_logger_type\": \"splunk\", \"free_instances\": 1234, \"install_uuid\": \"d3d497f7-9d07-43ab-b8de-9d5cc9752b7c\", \"instance_uuid\": \"bed08c6b-19cc-4a49-bc9e-82c33936e91b\", \"license_expiry\": 34937373, \"license_type\": \"enterprise\", \"logging_aggregators\": [ \"awx\", \"activity_stream\", \"job_events\", \"system_tracking\" ], \"pendo_tracking\": \"detailed\", \"platform\": { \"dist\": [ \"redhat\", \"7.4\", \"Maipo\" ], \"release\": \"3.10.0-693.el7.x86_64\", \"system\": \"Linux\", \"type\": \"traditional\" }, \"total_licensed_instances\": 2500, \"controller_url_base\": \"https://ansible.rhdemo.io\", \"controller_version\": \"3.6.3\" }",
"{ \"bed08c6b-19cc-4a49-bc9e-82c33936e91b\": { \"capacity\": 57, \"cpu\": 2, \"enabled\": true, \"last_isolated_check\": \"2019-08-15T14:48:58.553005+00:00\", \"managed_by_policy\": true, \"memory\": 8201400320, \"uuid\": \"bed08c6b-19cc-4a49-bc9e-82c33936e91b\", \"version\": \"3.6.3\" } \"c0a2a215-0e33-419a-92f5-e3a0f59bfaee\": { \"capacity\": 57, \"cpu\": 2, \"enabled\": true, \"last_isolated_check\": \"2019-08-15T14:48:58.553005+00:00\", \"managed_by_policy\": true, \"memory\": 8201400320, \"uuid\": \"c0a2a215-0e33-419a-92f5-e3a0f59bfaee\", \"version\": \"3.6.3\" } }",
"{ \"active_anonymous_sessions\": 1, \"active_host_count\": 682, \"active_sessions\": 2, \"active_user_sessions\": 1, \"credential\": 38, \"custom_inventory_script\": 2, \"custom_virtualenvs\": 4, \"host\": 697, \"inventories\": { \"normal\": 20, \"smart\": 1 }, \"inventory\": 21, \"job_template\": 78, \"notification_template\": 5, \"organization\": 10, \"pending_jobs\": 0, \"project\": 20, \"running_jobs\": 0, \"schedule\": 16, \"team\": 5, \"unified_job\": 7073, \"user\": 28, \"workflow_job_template\": 15 }",
"{ \"1\": { \"name\": \"Operations\", \"teams\": 5, \"users\": 17 }, \"2\": { \"name\": \"Development\", \"teams\": 27, \"users\": 154 }, \"3\": { \"name\": \"Networking\", \"teams\": 3, \"users\": 28 } }",
"{ \"1\": { \"credential_count\": 15, \"managed_by_controller\": true, \"name\": \"Machine\" }, \"2\": { \"credential_count\": 2, \"managed_by_controller\": true, \"name\": \"Source Control\" }, \"3\": { \"credential_count\": 3, \"managed_by_controller\": true, \"name\": \"Vault\" }, \"4\": { \"credential_count\": 0, \"managed_by_controller\": true, \"name\": \"Network\" }, \"5\": { \"credential_count\": 6, \"managed_by_controller\": true, \"name\": \"Amazon Web Services\" }, \"6\": { \"credential_count\": 0, \"managed_by_controller\": true, \"name\": \"OpenStack\" },",
"{ \"1\": { \"hosts\": 211, \"kind\": \"\", \"name\": \"AWS Inventory\", \"source_list\": [ { \"name\": \"AWS\", \"num_hosts\": 211, \"source\": \"ec2\" } ], \"sources\": 1 }, \"2\": { \"hosts\": 15, \"kind\": \"\", \"name\": \"Manual inventory\", \"source_list\": [], \"sources\": 0 }, \"3\": { \"hosts\": 25, \"kind\": \"\", \"name\": \"SCM inventory - test repo\", \"source_list\": [ { \"name\": \"Git source\", \"num_hosts\": 25, \"source\": \"scm\" } ], \"sources\": 1 } \"4\": { \"num_hosts\": 5, \"kind\": \"smart\", \"name\": \"Filtered AWS inventory\", \"source_list\": [], \"sources\": 0 } }",
"{ \"git\": 27, \"hg\": 0, \"insights\": 1, \"manual\": 0, \"svn\": 0 }",
"{ \"collection_type\": \"manual\", \"current_time\": \"2019-11-22 20:10:27.751267+00:00\", \"last_run\": \"2019-11-22 20:03:40.361225+00:00\" }",
"\"launch_type\": { \"dependency\": 3628, \"manual\": 799, \"relaunch\": 6, \"scheduled\": 1286, \"scm\": 6, \"workflow\": 1348 }, \"status\": { \"canceled\": 7, \"failed\": 108, \"successful\": 6958 }, \"total_jobs\": 7073 }",
"{ \"localhost\": { \"launch_type\": { \"dependency\": 3628, \"manual\": 770, \"relaunch\": 3, \"scheduled\": 1009, \"scm\": 6, \"workflow\": 1336 }, \"status\": { \"canceled\": 2, \"failed\": 60, \"successful\": 6690 } } }"
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/configuring_automation_execution/controller-usability-analytics-data-collection
|
Chapter 1. Customizing your taxonomy tree
|
Chapter 1. Customizing your taxonomy tree You can modify the taxonomy tree with knowledge data in your RHEL AI environment to create your own custom Granite Large Language Model (LLM). On RHEL AI, knowledge data sets are formatted in YAML. This YAML configuration is called a qna.yaml file, where "qna" stands for question and answer. The following documentation sections describe how to create skill and knowledge sets for your taxonomy. Adding knowledge to your taxonomy tree Adding skills to your taxonomy tree There are a few supported knowledge document types that you can use for training the base Granite LLM. The current supported document types include: Markdown PDF Note RHEL AI currently only supports training with skills and knowledge. Skills-only model customization is not supported. 1.1. Overview of skill and knowledge You can use skill and knowledge sets and specify domain-specific information to teach your custom model. Knowledge A dataset that consists of background information and facts. When creating knowledge sets for a model, you are providing it with additional data and information so the model can answer questions more accurately. Skills A dataset where you can teach the model how to do a task. Skills on RHEL AI are split into categories: Compositional skill: Compositional skills allow AI models to perform specific tasks or functions. There are two types of compositional skills: Freeform compositional skills: These are performative skills that do not require additional context or information to function. Grounded compositional skills: These are performative skills that require additional context. For example, you can teach the model to read a table, where the additional context is an example of the table layout. Foundation skills: Foundational skills are skills that involve math, reasoning, and coding. Important Ensure your server is not running before you start customizing your Granite starter model. Additional Resources Sample knowledge specifications
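As an illustration only, a knowledge qna.yaml pairs seed question-and-answer examples with a pointer to the source documents. The field names and values below are assumptions based on common taxonomy schemas rather than an authoritative specification; verify the required fields against the Sample knowledge specifications for your RHEL AI version:
# Illustrative skeleton of a knowledge qna.yaml (field names are assumptions)
version: 3
domain: example-domain
created_by: example-user
seed_examples:
  - context: |
      A short excerpt from the source document that the questions below refer to.
    questions_and_answers:
      - question: What does this excerpt describe?
        answer: It describes the subject covered by the source document.
document:
  repo: https://github.com/example/knowledge-docs
  commit: <commit-sha>
  patterns:
    - example-document.md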
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/creating_skills_and_knowledge_yaml_files/customize_taxonomy_tree
|
Chapter 4. Metrics data retention
|
Chapter 4. Metrics data retention The storage capacity required by PCP data logging is determined by the following factors: The logged metrics The logging interval The retention policy The default logging (sampling) interval is 60 seconds. The default retention policy is to compress archives older than one day and to keep archives for the last 14 days. You can increase the logging interval or shorten the retention policy to save storage space. If you require high-resolution sampling, you can decrease the logging interval. In such case, ensure that you have enough storage space. PCP archive logs are stored in the /var/log/pcp/pmlogger/ satellite.example.com directory. 4.1. Changing default logging interval You can change the default logging interval to either increase or decrease the sampling rate, at which the PCP metrics are logged. A larger interval results in a lower sampling rate. Procedure Open the /etc/pcp/pmlogger/control.d/local configuration file. Locate the LOCALHOSTNAME line. Append -t XX s , where XX is the required time interval in seconds. Save the file. Restart the pmlogger service: 4.2. Changing data retention policy You can change the data retention policy to control after how long the PCP data are archived and deleted. Procedure Open the /etc/sysconfig/pmlogger_timers file. Locate the PMLOGGER_DAILY_PARAMS line. If the line is commented, uncomment the line. Configure the following parameters: Ensure the default -E parameter is present. Append the -x parameter and add a value for the required number of days after which data is archived. Append the -k parameter and add a value for the number of days after which data is deleted. For example, the parameters -x 4 -k 7 specify that data will be compressed after 4 days and deleted after 7 days. Save the file. 4.3. Viewing data storage statistics You can list all available metrics, grouped by the frequency at which they are logged. For each group, you can also view the storage required to store the listed metrics, per day. Example storage statistics: Procedure To view data storage statistics, enter the following command on your Satellite Server:
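For reference, the two edits described above can look like the following. The existing options on the LOCALHOSTNAME line vary between installations, so the line shown here is illustrative; the part you append is the trailing -t value, and the retention values match the example in this chapter:
# /etc/pcp/pmlogger/control.d/local -- append -t to the existing options (10-second interval shown)
LOCALHOSTNAME y n PCP_LOG_DIR/pmlogger/LOCALHOSTNAME -r -T24h10m -c config.default -t 10s
# /etc/sysconfig/pmlogger_timers -- compress archives after 4 days and delete them after 7 days
PMLOGGER_DAILY_PARAMS="-E -x 4 -k 7"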
|
[
"systemctl restart pmlogger",
"logged every 60 sec: 61752 bytes or 84.80 Mbytes/day",
"less /var/log/pcp/pmlogger/ satellite.example.com /pmlogger.log"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/monitoring_satellite_performance/metrics-data-retention_monitoring
|
Chapter 9. Using service accounts in applications
|
Chapter 9. Using service accounts in applications 9.1. Service accounts overview A service account is an OpenShift Dedicated account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Dedicated CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. 9.2. Default service accounts Your OpenShift Dedicated cluster contains default service accounts for cluster management and generates more service accounts for each project. 9.2.1. Default cluster service accounts Several infrastructure controllers run using service account credentials. The following service accounts are created in the OpenShift Dedicated infrastructure project ( openshift-infra ) at server start, and given the following roles cluster-wide: Service account Description replication-controller Assigned the system:replication-controller role deployment-controller Assigned the system:deployment-controller role build-controller Assigned the system:build-controller role. Additionally, the build-controller service account is included in the privileged security context constraint to create privileged build pods. 9.2.2. Default project service accounts and roles Three service accounts are automatically created in each project: Service account Usage builder Used by build pods. It is given the system:image-builder role, which allows pushing images to any imagestream in the project using the internal Docker registry. Note The builder service account is not created if the Build cluster capability is not enabled. deployer Used by deployment pods and given the system:deployer role, which allows viewing and modifying replication controllers and pods in the project. Note The deployer service account is not created if the DeploymentConfig cluster capability is not enabled. default Used to run all other pods unless they specify a different service account. All service accounts in a project are given the system:image-puller role, which allows pulling images from any image stream in the project using the internal container image registry. 9.2.3. Automatically generated image pull secrets By default, OpenShift Dedicated creates an image pull secret for each service account. Note Prior to OpenShift Dedicated 4.16, a long-lived service account API token secret was also generated for each service account that was created. Starting with OpenShift Dedicated 4.16, this service account API token secret is no longer created. After upgrading to 4, any existing long-lived service account API token secrets are not deleted and will continue to function. For information about detecting long-lived API tokens that are in use in your cluster or deleting them if they are not needed, see the Red Hat Knowledgebase article Long-lived service account API tokens in OpenShift Container Platform . 
This image pull secret is necessary to integrate the OpenShift image registry into the cluster's user authentication and authorization system. However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, an image pull secret is not generated for each service account. When the integrated OpenShift image registry is disabled on a cluster that previously had it enabled, the previously generated image pull secrets are deleted automatically. 9.3. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: USD oc get sa Example output NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d To create a new service account in the current project: USD oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: USD oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none>
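After creating a service account, you typically bind it to a role so that it can act against the API, as this section's introduction notes. The role name and project used below are examples, not requirements:
# Grant the view role to the robot service account in the current project
oc policy add-role-to-user view -z robot
# Or reference the service account by its fully qualified user name from any project
oc policy add-role-to-user view system:serviceaccount:project1:robot -n project1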
|
[
"system:serviceaccount:<project>:<name>",
"oc get sa",
"NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d",
"oc create sa <service_account_name> 1",
"serviceaccount \"robot\" created",
"apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>",
"oc describe sa robot",
"Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none>"
] |
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/authentication_and_authorization/using-service-accounts
|
9.2. Booting a Guest Using PXE
|
9.2. Booting a Guest Using PXE This section demonstrates how to boot a guest virtual machine with PXE. 9.2.1. Using bridged networking Procedure 9.2. Booting a guest using PXE and bridged networking Ensure bridging is enabled such that the PXE boot server is available on the network. Boot a guest virtual machine with PXE booting enabled. You can use the virt-install command to create a new virtual machine with PXE booting enabled, as shown in the following example command: Alternatively, ensure that the guest network is configured to use your bridged network, and that the XML guest configuration file has a <boot order='1'/> element inside the network's <interface> element, as shown in the following example: 9.2.2. Using a Private libvirt Network Procedure 9.3. Using a private libvirt network Configure PXE booting on libvirt as shown in Section 9.1.1, "Setting up a PXE Boot Server on a Private libvirt Network" . Boot a guest virtual machine using libvirt with PXE booting enabled. You can use the virt-install command to create/install a new virtual machine using PXE: Alternatively, ensure that the guest network is configured to use your private libvirt network, and that the XML guest configuration file has a <boot order='1'/> element inside the network's <interface> element. In addition, ensure that the guest virtual machine is connected to the private network:
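To verify or add the <boot order='1'/> element on an existing guest rather than at install time, you can inspect and edit the guest definition with virsh. The guest name used here is an example:
# Confirm that the network interface carries <boot order='1'/>
virsh dumpxml rhel7-pxe-guest | grep -B 2 -A 2 "boot order"
# Edit the guest definition to add or adjust the element if it is missing
virsh edit rhel7-pxe-guest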
|
[
"virt-install --pxe --network bridge=breth0 --prompt",
"<interface type='bridge'> <mac address='52:54:00:5a:ad:cb'/> <source bridge='breth0'/> <target dev='vnet0'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> <boot order='1'/> </interface>",
"virt-install --pxe --network network=default --prompt",
"<interface type='network'> <mac address='52:54:00:66:79:14'/> <source network='default'/> <target dev='vnet0'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> <boot order='1'/> </interface>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Network_booting_with_libvirt-Booting_a_guest_using_PXE
|
Chapter 7. Management of monitoring stack using the Ceph Orchestrator
|
Chapter 7. Management of monitoring stack using the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrator with Cephadm in the backend to deploy monitoring and alerting stack. The monitoring stack consists of Prometheus, Prometheus exporters, Prometheus Alertmanager, and Grafana. Users need to either define these services with Cephadm in a YAML configuration file, or they can use the command line interface to deploy them. When multiple services of the same type are deployed, a highly-available setup is deployed. The node exporter is an exception to this rule. Note Red Hat Ceph Storage does not support custom images for deploying monitoring services such as Prometheus, Grafana, Alertmanager, and node-exporter. The following monitoring services can be deployed with Cephadm: Prometheus is the monitoring and alerting toolkit. It collects the data provided by Prometheus exporters and fires preconfigured alerts if predefined thresholds have been reached. The Prometheus manager module provides a Prometheus exporter to pass on Ceph performance counters from the collection point in ceph-mgr . The Prometheus configuration, including scrape targets, such as metrics providing daemons, is set up automatically by Cephadm. Cephadm also deploys a list of default alerts, for example, health error, 10% OSDs down, or pgs inactive. Alertmanager handles alerts sent by the Prometheus server. It deduplicates, groups, and routes the alerts to the correct receiver. By default, the Ceph dashboard is automatically configured as the receiver. The Alertmanager handles alerts sent by the Prometheus server. Alerts can be silenced using the Alertmanager, but silences can also be managed using the Ceph Dashboard. Grafana is a visualization and alerting software. The alerting functionality of Grafana is not used by this monitoring stack. For alerting, the Alertmanager is used. By default, traffic to Grafana is encrypted with TLS. You can either supply your own TLS certificate or use a self-signed one. If no custom certificate has been configured before Grafana has been deployed, then a self-signed certificate is automatically created and configured for Grafana. Custom certificates for Grafana can be configured using the following commands: Syntax Node exporter is an exporter for Prometheus which provides data about the node on which it is installed. It is recommended to install the node exporter on all nodes. This can be done using the monitoring.yml file with the node-exporter service type. 7.1. Deploying the monitoring stack using the Ceph Orchestrator The monitoring stack consists of Prometheus, Prometheus exporters, Prometheus Alertmanager, Grafana, and Ceph Exporter. Ceph Dashboard makes use of these components to store and visualize detailed metrics on cluster usage and performance. You can deploy the monitoring stack using the service specification in YAML file format. All the monitoring services can have the network and port they bind to configured in the yml file. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the nodes. Procedure Enable the prometheus module in the Ceph Manager daemon. This exposes the internal Ceph metrics so that Prometheus can read them: Example Important Ensure this command is run before Prometheus is deployed. If the command was not run before the deployment, you must redeploy Prometheus to update the configuration: Navigate to the following directory: Syntax Example Note If the directory monitoring does not exist, create it. 
Create the monitoring.yml file: Example Edit the specification file with a content similar to the following example: Example Note Ensure the monitoring stack components alertmanager , prometheus , and grafana are deployed on the same host. The node-exporter and ceph-exporter components should be deployed on all the hosts. Apply monitoring services: Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example Important Prometheus, Grafana, and the Ceph dashboard are all automatically configured to talk to each other, resulting in a fully functional Grafana integration in the Ceph dashboard. 7.2. Removing the monitoring stack using the Ceph Orchestrator You can remove the monitoring stack using the ceph orch rm command. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Log into the Cephadm shell: Example Use the ceph orch rm command to remove the monitoring stack: Syntax Example Check the status of the process: Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See Deploying the monitoring stack using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
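As a concrete illustration of the Grafana certificate syntax referenced at the start of this chapter, the following configures a custom TLS key and certificate for Grafana on a host named host01 and then redeploys the service so the change takes effect. The host name and file paths are examples:
ceph config-key set mgr/cephadm/host01/grafana_key -i /root/grafana/key.pem
ceph config-key set mgr/cephadm/host01/grafana_crt -i /root/grafana/certificate.pem
ceph orch redeploy grafana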
|
[
"ceph config-key set mgr/cephadm/ HOSTNAME /grafana_key -i PRESENT_WORKING_DIRECTORY /key.pem ceph config-key set mgr/cephadm/ HOSTNAME /grafana_crt -i PRESENT_WORKING_DIRECTORY /certificate.pem",
"ceph mgr module enable prometheus",
"ceph orch redeploy prometheus",
"cd /var/lib/ceph/ DAEMON_PATH /",
"cd /var/lib/ceph/monitoring/",
"touch monitoring.yml",
"service_type: prometheus service_name: prometheus placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: node-exporter --- service_type: alertmanager service_name: alertmanager placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: grafana service_name: grafana placement: hosts: - host01 networks: - 192.169.142.0/24 --- service_type: ceph-exporter",
"ceph orch apply -i monitoring.yml",
"ceph orch ls",
"ceph orch ps --service_name= SERVICE_NAME",
"ceph orch ps --service_name=prometheus",
"cephadm shell",
"ceph orch rm SERVICE_NAME --force",
"ceph orch rm grafana ceph orch rm prometheus ceph orch rm node-exporter ceph orch rm ceph-exporter ceph orch rm alertmanager ceph mgr module disable prometheus",
"ceph orch status",
"ceph orch ls",
"ceph orch ps",
"ceph orch ps"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/operations_guide/management-of-monitoring-stack-using-the-ceph-orchestrator
|
28.3.2. Reporting Problems
|
28.3.2. Reporting Problems To report a certain problem, use the command: abrt-cli report directory ...where directory stands for the problem data directory of the problem that is being reported. For example: ABRT prompts you to select an analyzer event for the problem that is being reported. After selecting an event, the problem is analyzed. This can take a considerable amount of time. When the problem report is ready, abrt-cli opens a text editor with the content of the report. You can see what is being reported, and you can fill in instructions on how to reproduce the crash and other comments. You should also check the backtrace, because the backtrace might be sent to a public server and viewed by anyone, depending on the problem reporter event settings. Note You can choose which text editor is used to check the reports. abrt-cli uses the editor defined in the ABRT_EDITOR environment variable. If the variable is not defined, it checks the VISUAL and EDITOR variables. If none of these variables is set, vi is used. You can set the preferred editor in your .bashrc configuration file. For example, if you prefer GNU Emacs, add the following line to the file: When you are done with the report, save your changes and close the editor. You will be asked which of the configured ABRT reporter events you want to use to send the report. After selecting a reporting method, you can proceed with reviewing data to be sent with the report. The following table shows options available with the abrt-cli report command. Table 28.4. The abrt-cli report command options Option Description With no additional option, the abrt-cli report command provides the usual output. -v , --verbose abrt-cli report provides additional information on its actions.
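If you do not know the problem data directory path, you can list the detected problems first and copy the directory from the output before reporting it; for example:
# List detected problems and their data directories
abrt-cli list
# Then report the chosen directory, as shown in the example for this section
abrt-cli report /var/spool/abrt/ccpp-2011-09-13-10:18:14-2895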
|
[
"~]USD abrt-cli report /var/spool/abrt/ccpp-2011-09-13-10:18:14-2895 How you would like to analyze the problem? 1) Collect .xsession-errors 2) Local GNU Debugger Select analyzer: _",
"export VISUAL=emacs",
"How would you like to report the problem? 1) Logger 2) Red Hat Customer Support Select reporter(s): _"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-abrt-cli_reporting_problems
|
Chapter 27. Red Hat Enterprise Linux Atomic Host 7.4.2
|
Chapter 27. Red Hat Enterprise Linux Atomic Host 7.4.2 27.1. Atomic Host OStree update : New Tree Version: 7.4.2-1 (hash: 36d9eb2d9b734e5e8552dcdbbe029bb250c00262dffc49f614b1c7a61eb53555) Changes since Tree Version 7.4.1 (hash: ee6c16cac30b7d6fcfcad0ed6f7a8d99e2539755b8fd46f08e1bb2f9bc3eba4c) Updated packages : cockpit-ostree-151-1.el7 rpm-ostree-client-2017.9-1.atomic.el7 New packages : anaconda-21.48.22.121-3.rhelah.0.el7 27.2. Extras Updated packages : ansible-2.4.0.0-5.el7 * atomic-1.19.1-4.gitb39a783.el7 cockpit-151-1.el7 container-selinux-2.28-1.git85ce147.el7 container-storage-setup-0.7.0-1.git4ca59c5.el7 docker-1.12.6-61.git85d7426.el7 docker-latest-1.13.1-26.git1faa135.el7 etcd-3.2.7-1.el7 oci-register-machine-0-3.13.gitcd1e331.el7 oci-systemd-hook-0.1.14-1.git1ba44c6.el7 ostree-2017.11-1.el7 python-docker-py-1.10.6-3.el7 python-flask-0.10.1-4.el7 python-websocket-client-0.32.0-116.el7 python-werkzeug-0.9.1-2.el7 rhel-system-roles-0.5-1.el7 * runc-1.0.0-14.rc4dev.git84a082b.el7 skopeo-0.1.24-1.dev.git28d4e08.el7 The asterisk (*) marks packages which are available for Red Hat Enterprise Linux only. New packages : python-jmespath-0.9.0-3.el7 oci-umount-2.0.0-1.git299e781.el7 27.2.1. Container Images Updated : Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic open-vm-tools Container Image (rhel7/open-vm-tools) Red Hat Enterprise Linux 7.4 Container Image (rhel7.4, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux Atomic Image (rhel-atomic, rhel7-atomic, rhel7/rhel-atomic) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) Red Hat Enterprise Linux 7 Init Container Image (rhel7/rhel7-init) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic flannel Container Image (rhel7/flannel) Red Hat Enterprise Linux Atomic openscap Container Image (rhel7/openscap) Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) Red Hat Enterprise Linux Atomic Kubernetes apiserver Container Image (rhel7/kubernetes-apiserver) Red Hat Enterprise Linux Atomic Kubernetes controller-manager Container (rhel7/kubernetes-controller-mgr) Red Hat Enterprise Linux Atomic Kubernetes scheduler Container Image (rhel7/kubernetes-scheduler) 27.3. New Features Beginning RHEL Atomic Host 7.4.2, you can configure /var to be a mount point. This allows placing /var into a separate partition, which prevents other mount points from getting full if /var gets full. For more information and instructions, see Manual Partitioning . The skopeo tool now by default requires a TLS connection. It fails when trying to use an unencrypted connection. To override the default and use an http registry, prepend http: to the <registry>/<image> string. For information on using skopeo , see Using skopeo to work with container registries . The oci-umount package, which was previously shipped as a subpackage of docker , is now shipped separately. The oci-umount package provides an OCI hook program. If you add it to the runc JSON data file as a hook, runc will execute the application after the container process is created, but before it is executed, with a prestart flag. 
Docker adds the oci-umount as a container hook to the runc configuration when it is installed in the USDHOOKSDIR directory. To modify the list of file systems to umount, edit the /etc/oci-umount.conf file.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_4_2
|
Chapter 1. Preparing to deploy OpenShift Data Foundation
|
Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of OpenShift Data Foundation, follow these steps: Setup a chrony server. See Configuring chrony time service and use knowledgebase solution to create rules allowing all traffic. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS . Ensure that you are using signed certificates on your Vault servers. Note If you are using Thales CipherTrust Manager as your KMS, you will enable it during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in Planning guide. Disaster recovery requirements [Technology Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_microsoft_azure/preparing_to_deploy_openshift_data_foundation
|
Chapter 1. Removing OpenShift Serverless overview
|
Chapter 1. Removing OpenShift Serverless overview If you need to remove OpenShift Serverless from your cluster, you can do so by manually removing the OpenShift Serverless Operator and other OpenShift Serverless components. Before you can remove the OpenShift Serverless Operator, you must remove Knative Serving and Knative Eventing. After uninstalling the OpenShift Serverless, you can remove the Operator and API custom resource definitions (CRDs) that remain on the cluster. The steps for fully removing OpenShift Serverless are detailed in the following procedures: Uninstalling Knative Eventing . Uninstalling Knative Serving . Removing the OpenShift Serverless Operator . Deleting OpenShift Serverless custom resource definitions .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/removing_openshift_serverless/removing-openshift-serverless
|
Chapter 45. Real-Time Kernel
|
Chapter 45. Real-Time Kernel New scheduler class: SCHED_DEADLINE This update introduces the SCHED_DEADLINE scheduler class for the real-time kernel as a Technology Preview. The new scheduler enables predictable task scheduling based on application deadlines. SCHED_DEADLINE benefits periodic workloads by reducing application timer manipulation. (BZ#1297061)
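For experimentation, you can start a process under SCHED_DEADLINE with the chrt utility from util-linux, provided your kernel and util-linux versions support deadline scheduling. The runtime, deadline, and period values (in nanoseconds) and the task name below are illustrative only:
# Request a 5 ms runtime budget with a 10 ms deadline in every 16.7 ms period; deadline tasks use priority 0
chrt --deadline --sched-runtime 5000000 --sched-deadline 10000000 --sched-period 16666666 0 ./periodic_task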
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/technology_previews_real-time_kernel
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.352_release_notes/making-open-source-more-inclusive
|
12.3. Querying Cluster Property Settings
|
12.3. Querying Cluster Property Settings In most cases, when you use the pcs command to display values of the various cluster components, you can use pcs list or pcs show interchangeably. In the following examples, pcs list is the format used to display an entire list of all settings for more than one property, while pcs show is the format used to display the values of a specific property. To display the values of the property settings that have been set for the cluster, use the following pcs command. To display all of the values of the property settings for the cluster, including the default values of the property settings that have not been explicitly set, use the following command. To display the current value of a specific cluster property, use the following command. For example, to display the current value of the cluster-infrastructure property, execute the following command: For informational purposes, you can display a list of all of the default values for the properties, whether they have been set to a value other than the default or not, by using the following command.
|
[
"pcs property list",
"pcs property list --all",
"pcs property show property",
"pcs property show cluster-infrastructure Cluster Properties: cluster-infrastructure: cman",
"pcs property [list|show] --defaults"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-queryingclusterprops-haar
|
Chapter 7. Expanding persistent volumes
|
Chapter 7. Expanding persistent volumes 7.1. Enabling volume expansion support Before you can expand persistent volumes, the StorageClass object must have the allowVolumeExpansion field set to true . Procedure Edit the StorageClass object and add the allowVolumeExpansion attribute by running the following command: USD oc edit storageclass <storage_class_name> 1 1 Specifies the name of the storage class. The following example demonstrates adding this line at the bottom of the storage class configuration. apiVersion: storage.k8s.io/v1 kind: StorageClass ... parameters: type: gp2 reclaimPolicy: Delete allowVolumeExpansion: true 1 1 Setting this attribute to true allows PVCs to be expanded after creation. 7.2. Expanding CSI volumes You can use the Container Storage Interface (CSI) to expand storage volumes after they have already been created. CSI volume expansion does not support the following: Recovering from failure when expanding volumes Shrinking Prerequisites The underlying CSI driver supports resize. Dynamic provisioning is used. The controlling StorageClass object has allowVolumeExpansion set to true . For more information, see "Enabling volume expansion support." Procedure For the persistent volume claim (PVC), set .spec.resources.requests.storage to the desired new size. Watch the status.conditions field of the PVC to see if the resize has completed. OpenShift Container Platform adds the Resizing condition to the PVC during expansion, which is removed after expansion completes. 7.3. Expanding FlexVolume with a supported driver When using FlexVolume to connect to your back-end storage system, you can expand persistent storage volumes after they have already been created. This is done by manually updating the persistent volume claim (PVC) in OpenShift Container Platform. FlexVolume allows expansion if the driver is set with RequiresFSResize to true . The FlexVolume can be expanded on pod restart. Similar to other volume types, FlexVolume volumes can also be expanded when in use by a pod. Prerequisites The underlying volume driver supports resize. The driver is set with the RequiresFSResize capability to true . Dynamic provisioning is used. The controlling StorageClass object has allowVolumeExpansion set to true . Procedure To use resizing in the FlexVolume plugin, you must implement the ExpandableVolumePlugin interface using these methods: RequiresFSResize If true , updates the capacity directly. If false , calls the ExpandFS method to finish the filesystem resize. ExpandFS If true , calls ExpandFS to resize filesystem after physical volume expansion is done. The volume driver can also perform physical volume resize together with filesystem resize. Important Because OpenShift Container Platform does not support installation of FlexVolume plugins on control plane nodes, it does not support control-plane expansion of FlexVolume. 7.4. Expanding local volumes You can manually expand persistent volumes (PVs) and persistent volume claims (PVCs) created by using the local storage operator (LSO). Procedure Expand the underlying devices. Ensure that appropriate capacity is available on these devices. Update the corresponding PV objects to match the new device sizes by editing the .spec.capacity field of the PV. For the storage class that is used for binding the PVC to the PV, set allowVolumeExpansion to true . For the PVC, set .spec.resources.requests.storage to match the new size.
Kubelet should automatically expand the underlying file system on the volume, if necessary, and update the status field of the PVC to reflect the new size. 7.5. Expanding persistent volume claims (PVCs) with a file system Expanding PVCs based on volume types that need file system resizing, such as GCE, EBS, and Cinder, is a two-step process. First, expand the volume objects in the cloud provider. Second, expand the file system on the node. Expanding the file system on the node only happens when a new pod is started with the volume. Prerequisites The controlling StorageClass object must have allowVolumeExpansion set to true . Procedure Edit the PVC and request a new size by editing spec.resources.requests . For example, the following expands the ebs PVC to 8 Gi: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs spec: storageClass: "storageClassWithFlagSet" accessModes: - ReadWriteOnce resources: requests: storage: 8Gi 1 1 Updating spec.resources.requests to a larger amount expands the PVC. After the cloud provider object has finished resizing, the PVC is set to FileSystemResizePending . Check the condition by entering the following command: USD oc describe pvc <pvc_name> When the cloud provider object has finished resizing, the PersistentVolume object reflects the newly requested size in PersistentVolume.Spec.Capacity . At this point, you can create or recreate a new pod from the PVC to finish the file system resizing. Once the pod is running, the newly requested size is available and the FileSystemResizePending condition is removed from the PVC. 7.6. Recovering from failure when expanding volumes If expanding underlying storage fails, the OpenShift Container Platform administrator can manually recover the persistent volume claim (PVC) state and cancel the resize requests. Otherwise, the resize requests are continuously retried by the controller. Procedure Mark the persistent volume (PV) that is bound to the PVC with the Retain reclaim policy. This can be done by editing the PV and changing persistentVolumeReclaimPolicy to Retain . Delete the PVC. Manually edit the PV and delete the claimRef entry from the PV specs to ensure that the newly created PVC can bind to the PV marked Retain . This marks the PV as Available . Re-create the PVC in a smaller size, or a size that can be allocated by the underlying storage provider. Set the volumeName field of the PVC to the name of the PV. This binds the PVC to the provisioned PV only. Restore the reclaim policy on the PV. Additional resources The controlling StorageClass object has allowVolumeExpansion set to true (see Enabling volume expansion support ).
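As a compact, non-authoritative illustration of the expansion flow described in this chapter, you can patch the requested size on a PVC and then watch its conditions until the resize finishes. The PVC name and size are taken from the example above; the grep filter is only a convenience:
# Request the larger size on the existing PVC
oc patch pvc ebs -p '{"spec":{"resources":{"requests":{"storage":"8Gi"}}}}'
# Watch the PVC until the FileSystemResizePending condition clears and the new size is reported
oc describe pvc ebs | grep -A 4 Conditions
oc get pvc ebs -w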
|
[
"oc edit storageclass <storage_class_name> 1",
"apiVersion: storage.k8s.io/v1 kind: StorageClass parameters: type: gp2 reclaimPolicy: Delete allowVolumeExpansion: true 1",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs spec: storageClass: \"storageClassWithFlagSet\" accessModes: - ReadWriteOnce resources: requests: storage: 8Gi 1",
"oc describe pvc <pvc_name>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/storage/expanding-persistent-volumes
|
Chapter 4. Installing a cluster
|
Chapter 4. Installing a cluster 4.1. Cleaning up installations In case of an earlier failed deployment, remove the artifacts from the failed attempt before trying to deploy OpenShift Container Platform again. Procedure Power off all bare-metal nodes before installing the OpenShift Container Platform cluster by using the following command: USD ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off Remove all old bootstrap resources if any remain from an earlier deployment attempt by using the following script: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done Delete the artifacts that the earlier installation generated by using the following command: USD cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json \ .openshift_install.log .openshift_install_state.json Re-create the OpenShift Container Platform manifests by using the following command: USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests 4.2. Deploying the cluster via the OpenShift Container Platform installer Run the OpenShift Container Platform installer: USD ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster 4.3. Following the progress of the installation During the deployment process, you can check the installation's overall status by issuing the tail command to the .openshift_install.log log file in the install directory folder: USD tail -f /path/to/install-dir/.openshift_install.log 4.4. Verifying static IP address configuration If the DHCP reservation for a cluster node specifies an infinite lease, after the installer successfully provisions the node, the dispatcher script checks the node's network configuration. If the script determines that the network configuration contains an infinite DHCP lease, it creates a new connection using the IP address of the DHCP lease as a static IP address. Note The dispatcher script might run on successfully provisioned nodes while the provisioning of other nodes in the cluster is ongoing. Verify the network configuration is working properly. Procedure Check the network interface configuration on the node. Turn off the DHCP server and reboot the OpenShift Container Platform node and ensure that the network configuration works properly. 4.5. Additional resources Understanding update channels and releases
|
[
"ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"cd ; /bin/rm -rf auth/ bootstrap.ign master.ign worker.ign metadata.json .openshift_install.log .openshift_install_state.json",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster",
"tail -f /path/to/install-dir/.openshift_install.log"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-installing-a-cluster
|
Chapter 9. Reporting on user access on hosts using SSSD
|
Chapter 9. Reporting on user access on hosts using SSSD The Security System Services Daemon (SSSD) tracks which users can or cannot access clients. This chapter describes creating access control reports and displaying user data using the sssctl tool. Prerequisites SSSD packages are installed in your network environment 9.1. The sssctl command sssctl is a command-line tool that provides a unified way to obtain information about the Security System Services Daemon (SSSD) status. You can use the sssctl utility to gather information about: Domain state Client user authentication User access on clients of a particular domain Information about cached content With the sssctl tool, you can: Manage the SSSD cache Manage logs Check configuration files Note The sssctl tool replaces sss_cache and sss_debuglevel tools. Additional resources sssctl --help 9.2. Generating access control reports using sssctl You can list the access control rules applied to the machine on which you are running the report because SSSD controls which users can log in to the client. Note The access report is not accurate because the tool does not track users locked out by the Key Distribution Center (KDC). Prerequisites You must be logged in with administrator privileges The sssctl tool is available on RHEL 7 and RHEL 8 systems. Procedure To generate a report for the idm.example.com domain, enter: 9.3. Displaying user authorization details using sssctl The sssctl user-checks command helps debug problems in applications that use the System Security Services Daemon (SSSD) for user lookup, authentication, and authorization. The sssctl user-checks [USER_NAME] command displays user data available through Name Service Switch (NSS) and the InfoPipe responder for the D-Bus interface. The displayed data shows whether the user is authorized to log in using the system-auth Pluggable Authentication Module (PAM) service. The command has two options: -a for a PAM action -s for a PAM service If you do not define -a and -s options, the sssctl tool uses default options: -a acct -s system-auth . Prerequisites You must be logged in with administrator privileges The sssctl tool is available on RHEL 7 and RHEL 8 systems. Procedure To display user data for a particular user, enter: Additional resources sssctl user-checks --help
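Beyond access reports and user checks, a few other sssctl invocations cover the cache, log, and configuration tasks listed above. These calls are illustrative; run sssctl --help to confirm which subcommands your SSSD version provides:
# Validate /etc/sssd/sssd.conf for syntax and permission problems
sssctl config-check
# Show whether SSSD considers a domain online
sssctl domain-status idm.example.com
# Invalidate the cached entry for a single user so it is refreshed on the next lookup
sssctl cache-expire -u example.user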
|
[
"sssctl access-report idm.example.com 1 rule cached Rule name: example.user Member users: example.user Member services: sshd",
"sssctl user-checks -a acct -s sshd example.user user: example.user action: acct service: sshd ."
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_authentication_and_authorization_in_rhel/reporting-on-user-access-on-hosts-using-sssd_configuring-authentication-and-authorization-in-rhel
|
4.5. Examples of Using Tuna
|
4.5. Examples of Using Tuna Example 4.1. Assigning Tasks to Specific CPUs The following example uses a system with four or more processors, and shows how to make all ssh threads run on CPUs 0 and 1, and all http threads on CPUs 2 and 3. The example command performs the following operations sequentially: Select CPUs 0 and 1. Select all threads that begin with ssh . Move the selected threads to the selected CPUs. Tuna sets the affinity mask of threads starting with ssh to the appropriate CPUs. The CPUs can be expressed numerically as 0 and 1, as the hex mask 0x3 , or in binary as 11 . Reset the CPU list to 2 and 3. Select all threads that begin with http . Move the selected threads to the selected CPUs. Tuna sets the affinity mask of threads starting with http to the appropriate CPUs. The CPUs can be expressed numerically as 2 and 3, as the hex mask 0xC , or in binary as 1100 . Example 4.2. Viewing Current Configurations The following example uses the --show_threads ( -P ) parameter to display the current configuration, and then tests whether the requested changes were made as expected. The example command performs the following operations sequentially: Select all threads that begin with gnome-sc . Show the selected threads to enable the user to verify their affinity mask and RT priority. Select CPU 0. Move the gnome-sc threads to the selected CPU (CPU 0). Show the result of the move. Reset the CPU list to CPU 1. Move the gnome-sc threads to the selected CPU (CPU 1). Show the result of the move. Add CPU 0 to the CPU list. Move the gnome-sc threads to the selected CPUs (CPUs 0 and 1). Show the result of the move.
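Example 4.1 moves the threads but does not display the result. A minimal, assumed verification step that is not part of the original examples would be to list the matching threads again with tuna, or to query the affinity of a single process with taskset; the PID here is a placeholder:
tuna --threads=ssh\* --show_threads
tuna --threads=http\* --show_threads
taskset -cp <pid>
The taskset output lists the current CPU affinity of the process, which should match the CPU list selected with tuna.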
|
[
"tuna --cpus=0,1 --threads=ssh\\* --move --cpus=2,3 --threads=http\\* --move",
"tuna --threads=gnome-sc\\* --show_threads --cpus=0 --move --show_threads --cpus=1 --move --show_threads --cpus=+0 --move --show_threads thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 3861 OTHER 0 0,1 33997 58 gnome-screensav thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 3861 OTHER 0 0 33997 58 gnome-screensav thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 3861 OTHER 0 1 33997 58 gnome-screensav thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 3861 OTHER 0 0,1 33997 58 gnome-screensav"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sec-examples-of-using-tuna
|
Chapter 9. Concepts for configuring thread pools
|
Chapter 9. Concepts for configuring thread pools This section describes the considerations and best practices for configuring thread pools and connection pools for Red Hat build of Keycloak. For a configuration where this is applied, visit Deploy Red Hat build of Keycloak for HA with the Red Hat build of Keycloak Operator . 9.1. Concepts 9.1.1. Quarkus executor pool Red Hat build of Keycloak requests, as well as blocking probes, are handled by an executor pool. Depending on the available CPU cores, it has a maximum size of 50 or more threads. Threads are created as needed, and end when no longer needed, so the system scales up and down automatically. Red Hat build of Keycloak allows configuring the maximum thread pool size with the http-pool-max-threads configuration option. See Deploy Red Hat build of Keycloak for HA with the Red Hat build of Keycloak Operator for an example. When running on Kubernetes, adjust the number of worker threads to avoid creating more load than the CPU limit allows for the Pod, so that the Pod is not throttled, which would lead to congestion. When running on physical machines, adjust the number of worker threads to avoid creating more load than the node can handle, to avoid congestion. Congestion results in longer response times and increased memory usage, and eventually an unstable system. Ideally, you should start with a low limit of threads and adjust it according to the target throughput and response time. When the load and the number of threads increase, the database connections can also become a bottleneck. If a request cannot acquire a database connection within 5 seconds, it fails with a message in the log like Unable to acquire JDBC Connection . The caller receives a response with a 5xx HTTP status code indicating a server-side error. If you increase the number of database connections and the number of threads too much, the system will be congested under a high load, with requests queueing up, which leads to bad performance. The number of database connections is configured via the Database settings db-pool-initial-size , db-pool-min-size and db-pool-max-size , respectively. Low numbers ensure fast response times for all clients, even if there is an occasionally failing request when there is a load spike. 9.1.2. JGroups connection pool Note This currently applies to single-site setups only. In a multi-site setup with an external Data Grid this is no longer a restriction. The combined number of executor threads in all Red Hat build of Keycloak nodes in the cluster should not exceed the number of threads available in the JGroups thread pool, to avoid the error org.jgroups.util.ThreadPool: thread pool is full . To see the error the first time it happens, the system property jgroups.thread_dumps_threshold needs to be set to 1 , as otherwise the message appears only after 10000 requests have been rejected. The number of JGroups threads is 200 by default. While it can be configured using the Java system property jgroups.thread_pool.max_threads , we advise keeping it at this value. Experiments show that the total number of Quarkus worker threads in the cluster must not exceed the number of threads in the JGroups thread pool of 200 in each node, to avoid deadlocks in the JGroups communication. Given a Red Hat build of Keycloak cluster with four Pods, each Pod should then have 50 Quarkus worker threads.
Use the Red Hat build of Keycloak configuration option http-pool-max-threads to configure the maximum number of Quarkus worker threads. Use metrics to monitor the total number of JGroups threads in the pool and the number of threads active in the pool. When using TCP as the JGroups transport protocol, the metrics vendor_jgroups_tcp_get_thread_pool_size and vendor_jgroups_tcp_get_thread_pool_size_active are available for monitoring. When using UDP, the metrics vendor_jgroups_udp_get_thread_pool_size and vendor_jgroups_udp_get_thread_pool_size_active are available. This is useful for verifying that limiting the Quarkus thread pool size keeps the number of active JGroups threads below the maximum JGroups thread pool size. 9.1.3. Load Shedding By default, Red Hat build of Keycloak queues all incoming requests infinitely, even if request processing stalls. This uses additional memory in the Pod, can exhaust resources in the load balancers, and the requests eventually time out on the client side without the client knowing whether the request has been processed. To limit the number of queued requests in Red Hat build of Keycloak, set an additional Quarkus configuration option. Configure http-max-queued-requests to specify a maximum queue length to allow for effective load shedding once this queue size is exceeded. Assuming a Red Hat build of Keycloak Pod processes around 200 requests per second, a queue of 1000 would lead to maximum waiting times of around 5 seconds. When this setting is active, requests that exceed the queue limit return an HTTP 503 error. Red Hat build of Keycloak records the error message in its log. 9.1.4. Probes Red Hat build of Keycloak's liveness probe is non-blocking to avoid a restart of a Pod under a high load. The overall health probe and the readiness probe can in some cases block to check the connection to the database, so they might fail under a high load. Due to this, a Pod can become non-ready under a high load. 9.1.5. OS Resources For Java to create threads on Linux, file handles must be available. Therefore, the limit on open files (as retrieved with ulimit -n on Linux) needs to provide headroom for Red Hat build of Keycloak to increase the number of threads as needed. Each thread also consumes memory, and the container memory limits need to be set to a value that allows for this; otherwise the Pod is killed by Kubernetes.
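A rough sketch of how the options discussed in this chapter could be combined when starting Red Hat build of Keycloak from the command line; the numeric values are illustrative assumptions for a four-Pod cluster, not recommendations, and in an Operator-based deployment the equivalent options are usually set in the Keycloak custom resource rather than on the command line:
bin/kc.sh start --http-pool-max-threads=50 --http-max-queued-requests=1000 --db-pool-initial-size=10 --db-pool-min-size=10 --db-pool-max-size=30
With four Pods at 50 worker threads each, the cluster-wide total stays at the default JGroups thread pool size of 200 described above.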
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/high_availability_guide/concepts-threads-
|