Chapter 16. Red Hat Enterprise Linux Atomic Host 7.6.2
Chapter 16. Red Hat Enterprise Linux Atomic Host 7.6.2 16.1. Atomic Host OStree update : New Tree Version: 7.6.2 (hash: 50c320468370132958eeeffb90a23431a5bd1cc717aa68d969eb471d78879e66) Changes since Tree Version 7.6.1 (hash: cbdf1df91ffb370cad574ad2bfdcf5e9999629437e23e620055af0dbef2c0cae) 16.2. Extras Updated packages : WALinuxAgent-2.2.32-1.el7 buildah-1.5-2.gite94b4f9.el7 container-selinux-2.77-1.el7_6 containernetworking-plugins-0.7.4-1.el7 docker-1.13.1-90.git07f3374.el7 dpdk-18.11-2.el7_6 etcd-3.3.11-2.el7 libdnf-0.22.5-1.el7_6 oci-systemd-hook-0.1.18-3.git8787307.el7_6 podman-0.12.1.2-2.git9551f6b.el7 python-docker-py-1.10.6-8.el7_6 skopeo-0.1.31-8.gitb0b750d.el7 16.2.1. Container Images Updated : Red Hat Enterprise Linux 7 Init Container Image (rhel7/rhel7-init) Red Hat Enterprise Linux 7.6 Container Image (rhel7.6, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux 7.6 Container Image for aarch64 (rhel7.6, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) Red Hat Enterprise Linux Atomic Image (rhel-atomic, rhel7-atomic, rhel7/rhel-atomic) Red Hat Enterprise Linux Atomic Net-SNMP Container Image (rhel7/net-snmp) Red Hat Enterprise Linux Atomic OpenSCAP Container Image (rhel7/openscap) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) Red Hat Enterprise Linux Atomic Support Tools Container Image (rhel7/support-tools) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic flannel Container Image (rhel7/flannel) Red Hat Enterprise Linux Atomic open-vm-tools Container Image (rhel7/open-vm-tools) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc)
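As a quick illustration of how this tree version is consumed, the following is a minimal sketch, assuming a registered Atomic Host that is already attached to the matching OSTree remote:
# Show the booted and rollback deployments, including their OSTree commit hashes
atomic host status
# Download and deploy the newest tree (for example, 7.6.2); reboot to activate it
atomic host upgrade
systemctl reboot
After the reboot, atomic host status should report the booted deployment at the 7.6.2 commit hash listed above.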
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_6_2
Chapter 6. Creating a self-contained Red Hat Process Automation Manager Spring Boot JAR file
Chapter 6. Creating a self-contained Red Hat Process Automation Manager Spring Boot JAR file You can create a single self-contained Red Hat Process Automation Manager Spring Boot JAR file that contains a complete service, including KIE Server and one or more KJAR files. The Red Hat Process Automation Manager Spring Boot JAR file does not depend on any KJAR files being loaded at runtime. If necessary, the Red Hat Process Automation Manager Spring Boot JAR file can contain multiple versions of the same KJAR file, including modules. These KJAR files can have the same artifactID and groupID attribute values, but have different version values. The included KJAR files are separated from any JAR files in the BOOT-INF/lib directory to avoid class loader collisions. Each KJAR classpath container file is isolated from other KJAR classpath container files and does not rely on the Spring Boot class loader. Prerequisites You have an existing Red Hat Process Automation Manager Spring Boot project. You have completed development of one or more KJAR files for the project. Procedure Build all KJAR files for the project. In the default business application, the KJAR source is contained in the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-kjar directory, where BUSINESS-APPLICATION is the name of the business application. Your project might include other KJAR source directories. To build the KJAR files, for every KJAR source directory, complete the following steps: Change to the KJAR source directory. Enter the following command: This command builds the KJAR file and places it into the local Maven repository. By default, this repository is located in the ~/.m2/repository directory. In the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service/src/main/resources directory, add the following property to your Spring Boot application's application.properties file: When this property is set to true , KIE Server uses the class loader used by the container to load KJAR files and their dependencies. Complete one of the following actions to ensure that KIE Server loads the necessary KJAR modules: To configure KIE Server to scan and deploy all KJAR modules available in the Spring Boot application, add the following property to the application.properties file: When this property is set to true , KIE Server deploys all KJAR modules available in the application, whether they are declared programmatically or through the Maven plug-in. This option is the simplest method to include all KJAR modules. However, it has two drawbacks: The application sets all container IDs and aliases automatically, based on the group, artifact, and version (GAV) of every KJAR module. You cannot set a custom container ID or alias for a KJAR module. At startup time, the application scans the JAR file and the class path for KJAR modules. Therefore, startup might take longer. To avoid these drawbacks, you can configure every KJAR module individually using the application.properties file or using Java source code, as described in one of the following options. 
To configure every KJAR module individually using the application.properties file, for each of the KJAR modules that you want to include in the service, add the following properties to the application.properties file: Replace the following values: <n> : A sequential number: 0 for the first KJAR module, 1 for the second module, and so on <container> : The container ID for the KJAR module <alias> : The alias for the KJAR module <artifact> : The artifact ID for the KJAR module <group> : The group ID for the KJAR module <version> : The version ID for the KJAR module The following example configures two versions of the Evaluation KJAR module: To configure every KJAR module individually using Java source code, create a class in your business application service, similar to the following example: @Configuration public class KieContainerDeployer { @Bean public KieContainerResource evaluation_v1() { KieContainerResource container = new KieContainerResource("evaluation_v1", new ReleaseId("com.myspace", "Evaluation", "1.0.0-SNAPSHOT"), STARTED); container.setConfigItems(Arrays.asList(new KieServerConfigItem(KieServerConstants.PCFG_RUNTIME_STRATEGY, "PER_PROCESS_INSTANCE", "String"))); return container; } @Bean public KieContainerResource evaluation_v2() { KieContainerResource container = new KieContainerResource("evaluation_v2", new ReleaseId("com.myspace", "Evaluation", "2.0.0-SNAPSHOT"), STARTED); container.setConfigItems(Arrays.asList(new KieServerConfigItem(KieServerConstants.PCFG_RUNTIME_STRATEGY, "PER_PROCESS_INSTANCE", "String"))); return container; } } For every KJAR module that you want to include, create a KieContainerResource bean in this class. The name of the bean is the container name, the first parameter of KieContainerResource() is the alias name, and the parameters of ReleaseId() are the group ID, artifact ID, and version ID of the KJAR module. Optional: If your business application will run in a Red Hat OpenShift Container Platform pod or in any other environment where the current directory is not writable, add the spring.jta.log-dir property to the application.properties file and set it to a writable location. For example: This parameter sets the location for the transaction log. In the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service directory, add the following Maven plug-in to the Spring Boot pom.xml file, where <GROUP_ID> , <ARTIFACT_ID> , and <VERSION> are the group, artifact, and version (GAV) of a KJAR artifact that your project uses. You can find these values in the pom.xml file that is located in the KJAR source directory. Note You can add more than one version of an artifact. <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>${version.org.kie}</version> <executions> <execution> <id>copy</id> <phase>prepare-package</phase> <goals> <goal>package-dependencies-kjar</goal> </goals> </execution> </executions> <configuration> <artifactItems> <artifactItem> <groupId><GROUP_ID></groupId> <artifactId><ARTIFACT_ID></artifactId> <version><VERSION></version> </artifactItem> </artifactItems> </configuration> </plugin> </plugins> </build> The artifacts required to run the KJAR will be resolved at build time. 
The following example adds two versions of the Evaluation artifact: <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>${version.org.kie}</version> <executions> <execution> <id>copy</id> <phase>prepare-package</phase> <goals> <goal>package-dependencies-kjar</goal> </goals> </execution> </executions> <configuration> <artifactItems> <artifactItem> <groupId>com.myspace</groupId> <artifactId>Evaluation</artifactId> <version>1.0.0-SNAPSHOT</version> </artifactItem> <artifactItem> <groupId>com.myspace</groupId> <artifactId>Evaluation</artifactId> <version>2.0.0-SNAPSHOT</version> </artifactItem> </artifactItems> </configuration> </plugin> </plugins> </build> Optional: If you want to be able to configure the KIE Server instance in the JAR file to communicate with a Business Central monitoring instance using WebSockets, make the following changes: Add the following lines to the pom.xml file under the <dependencies> tag: <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-controller-websocket-client</artifactId> <version>${version.org.kie}</version> </dependency> WebSockets communication with a Business Central monitoring instance is supported in all cases, including running the instance on Red Hat OpenShift Container Platform. In the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service/src/main/resources/application.properties file, add or change the following properties: To build the self-contained Spring Boot JAR file, enter the following command in the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service directory: Optional: To run the self-contained Spring Boot JAR file, locate the JAR file in the target subdirectory and enter the following command: In this command, replace <FILENAME> with the name of the JAR file. To configure KIE Server to connect to a Business Central monitoring instance using WebSockets and run the JAR file, enter the following command: In this command, replace the following values: <LOCATION> with the fully qualified host name for accessing your service. Business Central monitoring accesses the service to retrieve process information and displays a URL for the service with this host name. <PORT> with the port for accessing your service, for example, 8090 <BC-HOSTNAME> with the fully qualified name of the Business Central monitoring instance <BC-PORT> with the port of the Business Central Monitoring instance, for example, 8080 <USER> with the username of a user configured on the Business Central monitoring instance <PASSWORD> with the password of the user configured on the Business Central monitoring instance <FILENAME> with the name of the JAR file Note This configuration uses unsecured HTTP communication for your service. If you configure your Spring Boot business application with a valid SSL certificate, you can replace http: with https: to use secure HTTPS communication. For more information about configuring SSL on Spring Boot, see the Spring Boot documentation. Note If you want to view process information from Business Central monitoring, you must ensure that the user that is logged into Business Central can also be authenticated with your service using the same password.
[ "mvn install", "kieserver.classPathContainer=true", "kieserver.autoScanDeployments=true", "kieserver.deployments[<n>].containerId=<container> kieserver.deployments[<n>].alias=<alias> kieserver.deployments[<n>].artifactId=<artifact> kieserver.deployments[<n>].groupId=<group> kieserver.deployments[<n>].version=<version>", "kieserver.deployments[0].alias=evaluation_v1 kieserver.deployments[0].containerId=evaluation_v1 kieserver.deployments[0].artifactId=Evaluation kieserver.deployments[0].groupId=com.myspace kieserver.deployments[0].version=1.0.0-SNAPSHOT kieserver.deployments[1].alias=evaluation_v2 kieserver.deployments[1].containerId=evaluation_v2 kieserver.deployments[1].artifactId=Evaluation kieserver.deployments[1].groupId=com.myspace kieserver.deployments[1].version=2.0.0-SNAPSHOT", "@Configuration public class KieContainerDeployer { @Bean public KieContainerResource evaluation_v1() { KieContainerResource container = new KieContainerResource(\"evaluation_v1\", new ReleaseId(\"com.myspace\", \"Evaluation\", \"1.0.0-SNAPSHOT\"), STARTED); container.setConfigItems(Arrays.asList(new KieServerConfigItem(KieServerConstants.PCFG_RUNTIME_STRATEGY, \"PER_PROCESS_INSTANCE\", \"String\"))); return container; } @Bean public KieContainerResource evaluation_v2() { KieContainerResource container = new KieContainerResource(\"evaluation_v2\", new ReleaseId(\"com.myspace\", \"Evaluation\", \"2.0.0-SNAPSHOT\"), STARTED); container.setConfigItems(Arrays.asList(new KieServerConfigItem(KieServerConstants.PCFG_RUNTIME_STRATEGY, \"PER_PROCESS_INSTANCE\", \"String\"))); return container; } }", "spring.jta.log-dir=/tmp", "<build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>USD{version.org.kie}</version> <executions> <execution> <id>copy</id> <phase>prepare-package</phase> <goals> <goal>package-dependencies-kjar</goal> </goals> </execution> </executions> <configuration> <artifactItems> <artifactItem> <groupId><GROUP_ID></groupId> <artifactId><ARTIFACT_ID></artifactId> <version><VERSION></version> </artifactItem> </artifactItems> </configuration> </plugin> <plugins> <build>", "<build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>USD{version.org.kie}</version> <executions> <execution> <id>copy</id> <phase>prepare-package</phase> <goals> <goal>package-dependencies-kjar</goal> </goals> </execution> </executions> <configuration> <artifactItems> <artifactItem> <groupId>com.myspace</groupId> <artifactId>Evaluation</artifactId> <version>1.0.0-SNAPSHOT</version> </artifactItem> <artifactItem> <groupId>com.myspace</groupId> <artifactId>Evaluation</artifactId> <version>2.0.0-SNAPSHOT</version> </artifactItem> </artifactItems> </configuration> </plugin> </plugins> </build>", "<dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-controller-websocket-client</artifactId> <version>USD{version.org.kie}</version> </dependency>", "kieserver.location=USD{org.kie.server.location} kieserver.controllers=USD{org.kie.server.controller}", "mvn install", "java -jar <FILENAME>.jar", "java -Dorg.kie.server.location=http://<LOCATION>:<PORT>/rest/server -Dorg.kie.server.controller=ws://<BC-HOSTNAME>:<BC-PORT>/websocket/controller -Dorg.kie.server.controller.user=<USER> -Dorg.kie.server.controller.pwd=<PASSWORD> -jar <FILENAME>.jar" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/creating-self-contained-image-proc_business-applications
Chapter 4. Network and Port Configuration
Chapter 4. Network and Port Configuration 4.1. Interfaces JBoss EAP references named interfaces throughout the configuration. This allows the configuration to reference individual interface declarations with logical names, rather than requiring the full details of the interface at each use. This also allows for easier configuration in a managed domain, where network interface details can vary across multiple machines. Each server instance can correspond to a logical name group. The standalone.xml , domain.xml , and host.xml files all include interface declarations. There are several preconfigured interface names, depending on which default configuration is used. The management interface can be used for all components and services that require the management layer, including the HTTP management endpoint. The public interface can be used for all application-related network communications. The unsecure interface is used for IIOP sockets in the standard configuration. The private interface is used for JGroups sockets in the standard configuration. 4.1.1. Default interface configurations The following interface configurations are set by default: <interfaces> <interface name="management"> <inet-address value="${jboss.bind.address.management:127.0.0.1}"/> </interface> <interface name="public"> <inet-address value="${jboss.bind.address:127.0.0.1}"/> </interface> <interface name="private"> <inet-address value="${jboss.bind.address.private:127.0.0.1}"/> </interface> <interface name="unsecure"> <inet-address value="${jboss.bind.address.unsecure:127.0.0.1}"/> </interface> </interfaces> JBoss EAP binds these interfaces to 127.0.0.1 , but these values can be overridden at runtime by setting the appropriate property. For example, the inet-address of the public interface can be set when starting JBoss EAP as a standalone server with the following command. Alternatively, you can use the -b switch on the server start command line. In the above command, -b IP_ADDRESS is equivalent to -Djboss.bind.address= IP_ADDRESS . You can also use the -bmanagement switch to set the inet-address of the management interface. If you only want to set a single variable, you can change jboss.bind.address.management to jboss.bind.address . When you set the -b switch or -Djboss.bind.address , the public and management interfaces will share the same IP_ADDRESS . For more information about server start options, see Server Runtime Arguments . Important If you modify the default network interfaces or ports that JBoss EAP uses, you must also remember to change any scripts that use the modified interfaces or ports. These include JBoss EAP service scripts. You must also remember to specify the correct interface and port when accessing the management console or management CLI. 4.1.2. Configuring interfaces Network interfaces are declared by specifying a logical name and selection criteria for the physical interface. The selection criteria can reference a wildcard address or specify a set of one or more characteristics that an interface or address must have in order to be a valid match. For a listing of all available interface selection criteria, see the Interface Attributes section. Interfaces can be configured using the management console or the management CLI. Below are several examples of adding and updating interfaces. The management CLI command is shown first, followed by the corresponding configuration XML. Add an interface with a NIC value Add a new interface with a NIC value of eth0 . 
<interface name="external"> <nic name="eth0"/> </interface> Add an interface with several conditional values Add a new interface that matches any interface/address on the correct subnet if it is up, supports multicast, and is not point-to-point. <interface name="default"> <subnet-match value="192.168.0.0/16"/> <up/> <multicast/> <not> <point-to-point/> </not> </interface> Update an interface attribute Update the public interface's default inet-address value, keeping the jboss.bind.address property to allow for this value to be set at runtime. <interface name="public"> <inet-address value="USD{jboss.bind.address:192.168.0.0}"/> </interface> Add an interface to a server in a managed domain <servers> <server name=" SERVER_NAME " group="main-server-group"> <interfaces> <interface name=" INTERFACE_NAME "> <inet-address value="127.0.0.1"/> </interface> </interfaces> </server> </servers> 4.2. Socket bindings Socket bindings and socket binding groups allow you to define network ports and their relationship to the networking interfaces required for your JBoss EAP configuration. A socket binding is a named configuration for a socket. A socket binding group is a collection of socket binding declarations that are grouped under a logical name. This allows other sections of the configuration to reference socket bindings by their logical name, rather than requiring the full details of the socket configuration at each use. The declarations for these named configurations can be found in the standalone.xml and domain.xml configuration files. A standalone server contains only one socket binding group, while a managed domain can contain multiple groups. You can create a socket binding group for each server group in the managed domain, or share a socket binding group between multiple server groups. The ports JBoss EAP uses by default depend on which socket binding groups are used and the requirements of your individual deployments. There are three types of socket bindings that can be defined in a socket binding group in the JBoss EAP configuration: Inbound socket bindings The socket-binding element is used to configure inbound socket bindings for the JBoss EAP server. The default JBoss EAP configurations provide several preconfigured socket-binding elements, for example, for HTTP and HTTPS traffic. Another example can be found in the Broadcast groups section of Configuring Messaging for JBoss EAP. Attributes for this element can be found in the Inbound socket binding attributes table. Remote outbound socket bindings The remote-destination-outbound-socket-binding element is used to configure outbound socket bindings for destinations that are remote to the JBoss EAP server. The default JBoss EAP configurations provide an example remote destination socket binding that can be used for a mail server. Another example can be found in the Using the Integrated Artemis resource adapter for remote connections section of Configuring Messaging for JBoss EAP. Attributes for this element can be found in the Remote outbound socket binding attributes table. Local outbound socket bindings The local-destination-outbound-socket-binding element is used to configure outbound socket bindings for destinations that are local to the JBoss EAP server. This type of socket binding is not expected to be commonly used. Attributes for this element can be found in the Local outbound socket binding attributes table. 4.2.1. Management ports Management ports were consolidated in JBoss EAP 7. 
By default, JBoss EAP 8.0 uses port 9990 for both native management, used by the management CLI, and HTTP management, used by the web-based management console. Port 9999 , which was used as the native management port in JBoss EAP 6, is no longer used but can still be enabled if desired. If HTTPS is enabled for the management console, then port 9993 is used by default. 4.2.2. Default socket bindings JBoss EAP ships with a socket binding group for each of the five predefined profiles ( default , ha , full , full-ha , load-balancer ). For detailed information about the default socket bindings, such as default ports and descriptions, see the Default socket bindings groups section. Important If you modify the default network interfaces or ports that JBoss EAP uses, you must also remember to change any scripts that use the modified interfaces or ports. These include JBoss EAP service scripts. You must also remember to specify the correct interface and port when accessing the management console or management CLI. Standalone server When running as a standalone server, only one socket binding group is defined per configuration file. Each standalone configuration file ( standalone.xml , standalone-ha.xml , standalone-full.xml , standalone-full-ha.xml , standalone-load-balancer.xml ) defines socket bindings for the technologies used by its corresponding profile. For example, the default standalone configuration file ( standalone.xml ) specifies the following socket bindings. <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}"> <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/> <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/> <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/> <socket-binding name="http" port="${jboss.http.port:8080}"/> <socket-binding name="https" port="${jboss.https.port:8443}"/> <socket-binding name="txn-recovery-environment" port="4712"/> <socket-binding name="txn-status-manager" port="4713"/> <outbound-socket-binding name="mail-smtp"> <remote-destination host="localhost" port="25"/> </outbound-socket-binding> </socket-binding-group> Managed domain When running in a managed domain, all socket binding groups are defined in the domain.xml file. There are five predefined socket binding groups: standard-sockets ha-sockets full-sockets full-ha-sockets load-balancer-sockets Each socket binding group specifies socket bindings for the technologies used by its corresponding profile. For example, the full-ha-sockets socket binding group defines several jgroups socket bindings, which are used by the full-ha profile for high availability. <socket-binding-groups> <socket-binding-group name="standard-sockets" default-interface="public"> <!-- Needed for server groups using the 'default' profile --> <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/> <socket-binding name="http" port="${jboss.http.port:8080}"/> <socket-binding name="https" port="${jboss.https.port:8443}"/> <socket-binding name="txn-recovery-environment" port="4712"/> <socket-binding name="txn-status-manager" port="4713"/> <outbound-socket-binding name="mail-smtp"> <remote-destination host="localhost" port="25"/> </outbound-socket-binding> </socket-binding-group> <socket-binding-group name="ha-sockets" default-interface="public"> <!-- Needed for server groups using the 'ha' profile --> ... 
</socket-binding-group> <socket-binding-group name="full-sockets" default-interface="public"> <!-- Needed for server groups using the 'full' profile --> ... </socket-binding-group> <socket-binding-group name="full-ha-sockets" default-interface="public"> <!-- Needed for server groups using the 'full-ha' profile --> <socket-binding name="ajp" port="USD{jboss.ajp.port:8009}"/> <socket-binding name="http" port="USD{jboss.http.port:8080}"/> <socket-binding name="https" port="USD{jboss.https.port:8443}"/> <socket-binding name="iiop" interface="unsecure" port="3528"/> <socket-binding name="iiop-ssl" interface="unsecure" port="3529"/> <socket-binding name="jgroups-mping" interface="private" port="0" multicast-address="USD{jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/> <socket-binding name="jgroups-tcp" interface="private" port="7600"/> <socket-binding name="jgroups-udp" interface="private" port="55200" multicast-address="USD{jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/> <socket-binding name="modcluster" port="0" multicast-address="224.0.1.105" multicast-port="23364"/> <socket-binding name="txn-recovery-environment" port="4712"/> <socket-binding name="txn-status-manager" port="4713"/> <outbound-socket-binding name="mail-smtp"> <remote-destination host="localhost" port="25"/> </outbound-socket-binding> </socket-binding-group> <socket-binding-group name="load-balancer-sockets" default-interface="public"> <!-- Needed for server groups using the 'load-balancer' profile --> ... </socket-binding-group> </socket-binding-groups> Note The socket configuration for the management interfaces is defined in the domain controller's host.xml file. 4.2.3. Configuring socket bindings When defining a socket binding, you can configure the port and interface attributes, as well as multicast settings such as multicast-address and multicast-port . For details on all available socket bindings attributes, see the Socket binding attributes section. Socket bindings can be configured using the management console or the management CLI. The following steps go through adding a socket binding group, adding a socket binding, and configuring socket binding settings using the management CLI. Procedure Add a new socket binding group. Note This step cannot be performed when running as a standalone server. Add a socket binding. Change the socket binding to use an interface other than the default, which is set by the socket binding group. The following example shows how the XML configuration may look after the above steps have been completed. <socket-binding-groups> ... <socket-binding-group name="new-sockets" default-interface="public"> <socket-binding name="new-socket-binding" interface="unsecure" port="1234"/> </socket-binding-group> </socket-binding-groups> 4.2.4. Viewing socket bindings and open ports for a server You can view the socket binding name and the open ports for a server from the management console. Prerequisites Socket binding names and open ports are only visible when the server is in one of the following states: running reload-required restart-required Procedure Access the management console and navigate to Runtime . Click the server to view the socket binding name and the open ports in the right pane. 4.2.5. Port offsets A port offset is a numeric offset value added to all port values specified in the socket binding group for that server. 
This allows the server to inherit the port values defined in its socket binding group, with an offset to ensure that it does not conflict with any other servers on the same host. For instance, if the HTTP port of the socket binding group is 8080 , and a server uses a port offset of 100 , then its HTTP port is 8180 . Below is an example of setting a port offset of 250 for a server in a managed domain using the management CLI. Port offsets can be used for servers in a managed domain and for running multiple standalone servers on the same host. You can pass in a port offset when starting a standalone server using the jboss.socket.binding.port-offset property. 4.3. IPv6 Addresses By default, JBoss EAP is configured to run using IPv4 addresses. The steps below show how to configure JBoss EAP to run using IPv6 addresses. 4.3.1. Configure the JVM stack for IPv6 addresses Update the startup configuration to prefer IPv6 addresses. Procedure Open the startup configuration file. When running as a standalone server, edit the EAP_HOME /bin/standalone.conf file (or standalone.conf.bat for Windows Server). When running in a managed domain, edit the EAP_HOME /bin/domain.conf file (or domain.conf.bat for Windows Server). Set the java.net.preferIPv4Stack property to false . Append the java.net.preferIPv6Addresses property and set it to true . The following example shows how the JVM options in the startup configuration file may look after making the above changes. # Specify options to pass to the Java VM. # if [ "x$JAVA_OPTS" = "x" ]; then JAVA_OPTS="$JBOSS_JAVA_SIZING -Djava.net.preferIPv4Stack=false" JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv6Addresses=true" else 4.3.2. Update interface declarations for IPv6 addresses The default interface values in the configuration can be changed to IPv6 addresses. For example, the below management CLI command sets the management interface to the IPv6 loopback address ( ::1 ). The following example shows how the XML configuration may look after running the above command. <interfaces> <interface name="management"> <inet-address value="${jboss.bind.address.management:[::1]}"/> </interface> .... </interfaces>
[ "<interfaces> <interface name=\"management\"> <inet-address value=\"USD{jboss.bind.address.management:127.0.0.1}\"/> </interface> <interface name=\"public\"> <inet-address value=\"USD{jboss.bind.address:127.0.0.1}\"/> </interface> <interface name=\"private\"> <inet-address value=\"USD{jboss.bind.address.private:127.0.0.1}\"/> </interface> <interface name=\"unsecure\"> <inet-address value=\"USD{jboss.bind.address.unsecure:127.0.0.1}\"/> </interface> </interfaces>", "EAP_HOME /bin/standalone.sh -Djboss.bind.address= IP_ADDRESS", "EAP_HOME /bin/standalone.sh -b IP_ADDRESS", "EAP_HOME /bin/standalone.sh -bmanagement=IP_ADDRESS", "/interface=external:add(nic=eth0)", "<interface name=\"external\"> <nic name=\"eth0\"/> </interface>", "/interface=default:add(subnet-match=192.168.0.0/16,up=true,multicast=true,not={point-to-point=true})", "<interface name=\"default\"> <subnet-match value=\"192.168.0.0/16\"/> <up/> <multicast/> <not> <point-to-point/> </not> </interface>", "/interface=public:write-attribute(name=inet-address,value=\"USD{jboss.bind.address:192.168.0.0}\")", "<interface name=\"public\"> <inet-address value=\"USD{jboss.bind.address:192.168.0.0}\"/> </interface>", "/host= HOST_NAME /server-config= SERVER_NAME /interface= INTERFACE_NAME :add(inet-address=127.0.0.1)", "<servers> <server name=\" SERVER_NAME \" group=\"main-server-group\"> <interfaces> <interface name=\" INTERFACE_NAME \"> <inet-address value=\"127.0.0.1\"/> </interface> </interfaces> </server> </servers>", "<socket-binding-group name=\"standard-sockets\" default-interface=\"public\" port-offset=\"USD{jboss.socket.binding.port-offset:0}\"> <socket-binding name=\"management-http\" interface=\"management\" port=\"USD{jboss.management.http.port:9990}\"/> <socket-binding name=\"management-https\" interface=\"management\" port=\"USD{jboss.management.https.port:9993}\"/> <socket-binding name=\"ajp\" port=\"USD{jboss.ajp.port:8009}\"/> <socket-binding name=\"http\" port=\"USD{jboss.http.port:8080}\"/> <socket-binding name=\"https\" port=\"USD{jboss.https.port:8443}\"/> <socket-binding name=\"txn-recovery-environment\" port=\"4712\"/> <socket-binding name=\"txn-status-manager\" port=\"4713\"/> <outbound-socket-binding name=\"mail-smtp\"> <remote-destination host=\"localhost\" port=\"25\"/> </outbound-socket-binding> </socket-binding-group>", "<socket-binding-groups> <socket-binding-group name=\"standard-sockets\" default-interface=\"public\"> <!-- Needed for server groups using the 'default' profile --> <socket-binding name=\"ajp\" port=\"USD{jboss.ajp.port:8009}\"/> <socket-binding name=\"http\" port=\"USD{jboss.http.port:8080}\"/> <socket-binding name=\"https\" port=\"USD{jboss.https.port:8443}\"/> <socket-binding name=\"txn-recovery-environment\" port=\"4712\"/> <socket-binding name=\"txn-status-manager\" port=\"4713\"/> <outbound-socket-binding name=\"mail-smtp\"> <remote-destination host=\"localhost\" port=\"25\"/> </outbound-socket-binding> </socket-binding-group> <socket-binding-group name=\"ha-sockets\" default-interface=\"public\"> <!-- Needed for server groups using the 'ha' profile --> </socket-binding-group> <socket-binding-group name=\"full-sockets\" default-interface=\"public\"> <!-- Needed for server groups using the 'full' profile --> </socket-binding-group> <socket-binding-group name=\"full-ha-sockets\" default-interface=\"public\"> <!-- Needed for server groups using the 'full-ha' profile --> <socket-binding name=\"ajp\" port=\"USD{jboss.ajp.port:8009}\"/> <socket-binding name=\"http\" 
port=\"USD{jboss.http.port:8080}\"/> <socket-binding name=\"https\" port=\"USD{jboss.https.port:8443}\"/> <socket-binding name=\"iiop\" interface=\"unsecure\" port=\"3528\"/> <socket-binding name=\"iiop-ssl\" interface=\"unsecure\" port=\"3529\"/> <socket-binding name=\"jgroups-mping\" interface=\"private\" port=\"0\" multicast-address=\"USD{jboss.default.multicast.address:230.0.0.4}\" multicast-port=\"45700\"/> <socket-binding name=\"jgroups-tcp\" interface=\"private\" port=\"7600\"/> <socket-binding name=\"jgroups-udp\" interface=\"private\" port=\"55200\" multicast-address=\"USD{jboss.default.multicast.address:230.0.0.4}\" multicast-port=\"45688\"/> <socket-binding name=\"modcluster\" port=\"0\" multicast-address=\"224.0.1.105\" multicast-port=\"23364\"/> <socket-binding name=\"txn-recovery-environment\" port=\"4712\"/> <socket-binding name=\"txn-status-manager\" port=\"4713\"/> <outbound-socket-binding name=\"mail-smtp\"> <remote-destination host=\"localhost\" port=\"25\"/> </outbound-socket-binding> </socket-binding-group> <socket-binding-group name=\"load-balancer-sockets\" default-interface=\"public\"> <!-- Needed for server groups using the 'load-balancer' profile --> </socket-binding-group> </socket-binding-groups>", "/socket-binding-group=new-sockets:add(default-interface=public)", "/socket-binding-group=new-sockets/socket-binding=new-socket-binding:add(port=1234)", "/socket-binding-group=new-sockets/socket-binding=new-socket-binding:write-attribute(name=interface,value=unsecure)", "<socket-binding-groups> <socket-binding-group name=\"new-sockets\" default-interface=\"public\"> <socket-binding name=\"new-socket-binding\" interface=\"unsecure\" port=\"1234\"/> </socket-binding-group> </socket-binding-groups>", "/host=primary/server-config=server-two/:write-attribute(name=socket-binding-port-offset,value=250)", "EAP_HOME /bin/standalone.sh -Djboss.socket.binding.port-offset=100", "-Djava.net.preferIPv4Stack=false", "-Djava.net.preferIPv6Addresses=true", "Specify options to pass to the Java VM. # if [ \"xUSDJAVA_OPTS\" = \"x\" ]; then JAVA_OPTS=\"USDJBOSS_JAVA_SIZING -Djava.net.preferIPv4Stack=false\" JAVA_OPTS=\"USDJAVA_OPTS -Djava.net.preferIPv6Addresses=true\" else", "/interface=management:write-attribute(name=inet-address,value=\"USD{jboss.bind.address.management:[::1]}\")", "<interfaces> <interface name=\"management\"> <inet-address value=\"USD{jboss.bind.address.management:[::1]}\"/> </interface> . </interfaces>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/configuration_guide/network_and_port_configuration
Making Open Source More Inclusive
Making Open Source More Inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see " our CTO Chris Wright's message " .
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/making-open-source-more-inclusive
Chapter 2. AdminNetworkPolicy [policy.networking.k8s.io/v1alpha1]
Chapter 2. AdminNetworkPolicy [policy.networking.k8s.io/v1alpha1] Description AdminNetworkPolicy is a cluster level resource that is part of the AdminNetworkPolicy API. Type object Required metadata spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of AdminNetworkPolicy. status object Status is the status to be reported by the implementation. 2.1.1. .spec Description Specification of the desired behavior of AdminNetworkPolicy. Type object Required priority subject Property Type Description egress array Egress is the list of Egress rules to be applied to the selected pods. A total of 100 rules will be allowed in each ANP instance. The relative precedence of egress rules within a single ANP object (all of which share the priority) will be determined by the order in which the rule is written. Thus, a rule that appears at the top of the egress rules would take the highest precedence. ANPs with no egress rules do not affect egress traffic. Support: Core egress[] object AdminNetworkPolicyEgressRule describes an action to take on a particular set of traffic originating from pods selected by an AdminNetworkPolicy's Subject field. <network-policy-api:experimental:validation> ingress array Ingress is the list of Ingress rules to be applied to the selected pods. A total of 100 rules will be allowed in each ANP instance. The relative precedence of ingress rules within a single ANP object (all of which share the priority) will be determined by the order in which the rule is written. Thus, a rule that appears at the top of the ingress rules would take the highest precedence. ANPs with no ingress rules do not affect ingress traffic. Support: Core ingress[] object AdminNetworkPolicyIngressRule describes an action to take on a particular set of traffic destined for pods selected by an AdminNetworkPolicy's Subject field. priority integer Priority is a value from 0 to 1000. Rules with lower priority values have higher precedence, and are checked before rules with higher priority values. All AdminNetworkPolicy rules have higher precedence than NetworkPolicy or BaselineAdminNetworkPolicy rules. The behavior is undefined if two ANP objects have the same priority. Support: Core subject object Subject defines the pods to which this AdminNetworkPolicy applies. Note that host-networked pods are not included in subject selection. Support: Core 2.1.2. .spec.egress Description Egress is the list of Egress rules to be applied to the selected pods. A total of 100 rules will be allowed in each ANP instance. The relative precedence of egress rules within a single ANP object (all of which share the priority) will be determined by the order in which the rule is written. 
Thus, a rule that appears at the top of the egress rules would take the highest precedence. ANPs with no egress rules do not affect egress traffic. Support: Core Type array 2.1.3. .spec.egress[] Description AdminNetworkPolicyEgressRule describes an action to take on a particular set of traffic originating from pods selected by a AdminNetworkPolicy's Subject field. <network-policy-api:experimental:validation> Type object Required action to Property Type Description action string Action specifies the effect this rule will have on matching traffic. Currently the following actions are supported: Allow: allows the selected traffic (even if it would otherwise have been denied by NetworkPolicy) Deny: denies the selected traffic Pass: instructs the selected traffic to skip any remaining ANP rules, and then pass execution to any NetworkPolicies that select the pod. If the pod is not selected by any NetworkPolicies then execution is passed to any BaselineAdminNetworkPolicies that select the pod. Support: Core name string Name is an identifier for this rule, that may be no more than 100 characters in length. This field should be used by the implementation to help improve observability, readability and error-reporting for any applied AdminNetworkPolicies. Support: Core ports array Ports allows for matching traffic based on port and protocols. This field is a list of destination ports for the outgoing egress traffic. If Ports is not set then the rule does not filter traffic via port. Support: Core ports[] object AdminNetworkPolicyPort describes how to select network ports on pod(s). Exactly one field must be set. to array To is the List of destinations whose traffic this rule applies to. If any AdminNetworkPolicyEgressPeer matches the destination of outgoing traffic then the specified action is applied. This field must be defined and contain at least one item. Support: Core to[] object AdminNetworkPolicyEgressPeer defines a peer to allow traffic to. Exactly one of the selector pointers must be set for a given peer. If a consumer observes none of its fields are set, they must assume an unknown option has been specified and fail closed. 2.1.4. .spec.egress[].ports Description Ports allows for matching traffic based on port and protocols. This field is a list of destination ports for the outgoing egress traffic. If Ports is not set then the rule does not filter traffic via port. Support: Core Type array 2.1.5. .spec.egress[].ports[] Description AdminNetworkPolicyPort describes how to select network ports on pod(s). Exactly one field must be set. Type object Property Type Description namedPort string NamedPort selects a port on a pod(s) based on name. Support: Extended <network-policy-api:experimental> portNumber object Port selects a port on a pod(s) based on number. Support: Core portRange object PortRange selects a port range on a pod(s) based on provided start and end values. Support: Core 2.1.6. .spec.egress[].ports[].portNumber Description Port selects a port on a pod(s) based on number. Support: Core Type object Required port protocol Property Type Description port integer Number defines a network port value. Support: Core protocol string Protocol is the network protocol (TCP, UDP, or SCTP) which traffic must match. If not specified, this field defaults to TCP. Support: Core 2.1.7. .spec.egress[].ports[].portRange Description PortRange selects a port range on a pod(s) based on provided start and end values. 
Support: Core Type object Required end start Property Type Description end integer End defines a network port that is the end of a port range, the End value must be greater than Start. Support: Core protocol string Protocol is the network protocol (TCP, UDP, or SCTP) which traffic must match. If not specified, this field defaults to TCP. Support: Core start integer Start defines a network port that is the start of a port range, the Start value must be less than End. Support: Core 2.1.8. .spec.egress[].to Description To is the List of destinations whose traffic this rule applies to. If any AdminNetworkPolicyEgressPeer matches the destination of outgoing traffic then the specified action is applied. This field must be defined and contain at least one item. Support: Core Type array 2.1.9. .spec.egress[].to[] Description AdminNetworkPolicyEgressPeer defines a peer to allow traffic to. Exactly one of the selector pointers must be set for a given peer. If a consumer observes none of its fields are set, they must assume an unknown option has been specified and fail closed. Type object Property Type Description namespaces object Namespaces defines a way to select all pods within a set of Namespaces. Note that host-networked pods are not included in this type of peer. Support: Core networks array (string) Networks defines a way to select peers via CIDR blocks. This is intended for representing entities that live outside the cluster, which can't be selected by pods, namespaces and nodes peers, but note that cluster-internal traffic will be checked against the rule as well. So if you Allow or Deny traffic to "0.0.0.0/0" , that will allow or deny all IPv4 pod-to-pod traffic as well. If you don't want that, add a rule that Passes all pod traffic before the Networks rule. Each item in Networks should be provided in the CIDR format and should be IPv4 or IPv6, for example "10.0.0.0/8" or "fd00::/8". Networks can have upto 25 CIDRs specified. Support: Extended <network-policy-api:experimental> nodes object Nodes defines a way to select a set of nodes in the cluster. This field follows standard label selector semantics; if present but empty, it selects all Nodes. Support: Extended <network-policy-api:experimental> pods object Pods defines a way to select a set of pods in a set of namespaces. Note that host-networked pods are not included in this type of peer. Support: Core 2.1.10. .spec.egress[].to[].namespaces Description Namespaces defines a way to select all pods within a set of Namespaces. Note that host-networked pods are not included in this type of peer. Support: Core Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.11. .spec.egress[].to[].namespaces.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.12. .spec.egress[].to[].namespaces.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.13. .spec.egress[].to[].nodes Description Nodes defines a way to select a set of nodes in the cluster. This field follows standard label selector semantics; if present but empty, it selects all Nodes. Support: Extended <network-policy-api:experimental> Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.14. .spec.egress[].to[].nodes.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.15. .spec.egress[].to[].nodes.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.16. .spec.egress[].to[].pods Description Pods defines a way to select a set of pods in a set of namespaces. Note that host-networked pods are not included in this type of peer. Support: Core Type object Required namespaceSelector podSelector Property Type Description namespaceSelector object NamespaceSelector follows standard label selector semantics; if empty, it selects all Namespaces. podSelector object PodSelector is used to explicitly select pods within a namespace; if empty, it selects all Pods. 2.1.17. .spec.egress[].to[].pods.namespaceSelector Description NamespaceSelector follows standard label selector semantics; if empty, it selects all Namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.18. 
.spec.egress[].to[].pods.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.19. .spec.egress[].to[].pods.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.20. .spec.egress[].to[].pods.podSelector Description PodSelector is used to explicitly select pods within a namespace; if empty, it selects all Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.21. .spec.egress[].to[].pods.podSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.22. .spec.egress[].to[].pods.podSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.23. .spec.ingress Description Ingress is the list of Ingress rules to be applied to the selected pods. A total of 100 rules will be allowed in each ANP instance. The relative precedence of ingress rules within a single ANP object (all of which share the priority) will be determined by the order in which the rule is written. Thus, a rule that appears at the top of the ingress rules would take the highest precedence. ANPs with no ingress rules do not affect ingress traffic. Support: Core Type array 2.1.24. .spec.ingress[] Description AdminNetworkPolicyIngressRule describes an action to take on a particular set of traffic destined for pods selected by an AdminNetworkPolicy's Subject field. Type object Required action from Property Type Description action string Action specifies the effect this rule will have on matching traffic. 
Currently the following actions are supported: Allow: allows the selected traffic (even if it would otherwise have been denied by NetworkPolicy) Deny: denies the selected traffic Pass: instructs the selected traffic to skip any remaining ANP rules, and then pass execution to any NetworkPolicies that select the pod. If the pod is not selected by any NetworkPolicies then execution is passed to any BaselineAdminNetworkPolicies that select the pod. Support: Core from array From is the list of sources whose traffic this rule applies to. If any AdminNetworkPolicyIngressPeer matches the source of incoming traffic then the specified action is applied. This field must be defined and contain at least one item. Support: Core from[] object AdminNetworkPolicyIngressPeer defines an in-cluster peer to allow traffic from. Exactly one of the selector pointers must be set for a given peer. If a consumer observes none of its fields are set, they must assume an unknown option has been specified and fail closed. name string Name is an identifier for this rule, that may be no more than 100 characters in length. This field should be used by the implementation to help improve observability, readability and error-reporting for any applied AdminNetworkPolicies. Support: Core ports array Ports allows for matching traffic based on port and protocols. This field is a list of ports which should be matched on the pods selected for this policy i.e the subject of the policy. So it matches on the destination port for the ingress traffic. If Ports is not set then the rule does not filter traffic via port. Support: Core ports[] object AdminNetworkPolicyPort describes how to select network ports on pod(s). Exactly one field must be set. 2.1.25. .spec.ingress[].from Description From is the list of sources whose traffic this rule applies to. If any AdminNetworkPolicyIngressPeer matches the source of incoming traffic then the specified action is applied. This field must be defined and contain at least one item. Support: Core Type array 2.1.26. .spec.ingress[].from[] Description AdminNetworkPolicyIngressPeer defines an in-cluster peer to allow traffic from. Exactly one of the selector pointers must be set for a given peer. If a consumer observes none of its fields are set, they must assume an unknown option has been specified and fail closed. Type object Property Type Description namespaces object Namespaces defines a way to select all pods within a set of Namespaces. Note that host-networked pods are not included in this type of peer. Support: Core pods object Pods defines a way to select a set of pods in a set of namespaces. Note that host-networked pods are not included in this type of peer. Support: Core 2.1.27. .spec.ingress[].from[].namespaces Description Namespaces defines a way to select all pods within a set of Namespaces. Note that host-networked pods are not included in this type of peer. Support: Core Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.28. 
.spec.ingress[].from[].namespaces.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.29. .spec.ingress[].from[].namespaces.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.30. .spec.ingress[].from[].pods Description Pods defines a way to select a set of pods in a set of namespaces. Note that host-networked pods are not included in this type of peer. Support: Core Type object Required namespaceSelector podSelector Property Type Description namespaceSelector object NamespaceSelector follows standard label selector semantics; if empty, it selects all Namespaces. podSelector object PodSelector is used to explicitly select pods within a namespace; if empty, it selects all Pods. 2.1.31. .spec.ingress[].from[].pods.namespaceSelector Description NamespaceSelector follows standard label selector semantics; if empty, it selects all Namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.32. .spec.ingress[].from[].pods.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.33. .spec.ingress[].from[].pods.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.34. .spec.ingress[].from[].pods.podSelector Description PodSelector is used to explicitly select pods within a namespace; if empty, it selects all Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.35. .spec.ingress[].from[].pods.podSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.36. .spec.ingress[].from[].pods.podSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.37. .spec.ingress[].ports Description Ports allows for matching traffic based on port and protocols. This field is a list of ports which should be matched on the pods selected for this policy i.e the subject of the policy. So it matches on the destination port for the ingress traffic. If Ports is not set then the rule does not filter traffic via port. Support: Core Type array 2.1.38. .spec.ingress[].ports[] Description AdminNetworkPolicyPort describes how to select network ports on pod(s). Exactly one field must be set. Type object Property Type Description namedPort string NamedPort selects a port on a pod(s) based on name. Support: Extended <network-policy-api:experimental> portNumber object Port selects a port on a pod(s) based on number. Support: Core portRange object PortRange selects a port range on a pod(s) based on provided start and end values. Support: Core 2.1.39. .spec.ingress[].ports[].portNumber Description Port selects a port on a pod(s) based on number. Support: Core Type object Required port protocol Property Type Description port integer Number defines a network port value. Support: Core protocol string Protocol is the network protocol (TCP, UDP, or SCTP) which traffic must match. If not specified, this field defaults to TCP. Support: Core 2.1.40. .spec.ingress[].ports[].portRange Description PortRange selects a port range on a pod(s) based on provided start and end values. Support: Core Type object Required end start Property Type Description end integer End defines a network port that is the end of a port range, the End value must be greater than Start. Support: Core protocol string Protocol is the network protocol (TCP, UDP, or SCTP) which traffic must match. If not specified, this field defaults to TCP. Support: Core start integer Start defines a network port that is the start of a port range, the Start value must be less than End. Support: Core 2.1.41. .spec.subject Description Subject defines the pods to which this AdminNetworkPolicy applies. Note that host-networked pods are not included in subject selection. Support: Core Type object Property Type Description namespaces object Namespaces is used to select pods via namespace selectors. pods object Pods is used to select pods via namespace AND pod selectors. 2.1.42. .spec.subject.namespaces Description Namespaces is used to select pods via namespace selectors. 
Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.43. .spec.subject.namespaces.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.44. .spec.subject.namespaces.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.45. .spec.subject.pods Description Pods is used to select pods via namespace AND pod selectors. Type object Required namespaceSelector podSelector Property Type Description namespaceSelector object NamespaceSelector follows standard label selector semantics; if empty, it selects all Namespaces. podSelector object PodSelector is used to explicitly select pods within a namespace; if empty, it selects all Pods. 2.1.46. .spec.subject.pods.namespaceSelector Description NamespaceSelector follows standard label selector semantics; if empty, it selects all Namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.47. .spec.subject.pods.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.48. .spec.subject.pods.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.49. 
.spec.subject.pods.podSelector Description PodSelector is used to explicitly select pods within a namespace; if empty, it selects all Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.50. .spec.subject.pods.podSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.51. .spec.subject.pods.podSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.52. .status Description Status is the status to be reported by the implementation. Type object Required conditions Property Type Description conditions array conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 2.1.53. .status.conditions Description Type array 2.1.54. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. 
Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 2.2. API endpoints The following API endpoints are available: /apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies DELETE : delete collection of AdminNetworkPolicy GET : list objects of kind AdminNetworkPolicy POST : create an AdminNetworkPolicy /apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies/{name} DELETE : delete an AdminNetworkPolicy GET : read the specified AdminNetworkPolicy PATCH : partially update the specified AdminNetworkPolicy PUT : replace the specified AdminNetworkPolicy /apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies/{name}/status GET : read status of the specified AdminNetworkPolicy PATCH : partially update status of the specified AdminNetworkPolicy PUT : replace status of the specified AdminNetworkPolicy 2.2.1. /apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies HTTP method DELETE Description delete collection of AdminNetworkPolicy Table 2.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind AdminNetworkPolicy Table 2.2. HTTP responses HTTP code Response body 200 - OK AdminNetworkPolicyList schema 401 - Unauthorized Empty HTTP method POST Description create an AdminNetworkPolicy Table 2.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.4. Body parameters Parameter Type Description body AdminNetworkPolicy schema Table 2.5. HTTP responses HTTP code Response body 200 - OK AdminNetworkPolicy schema 201 - Created AdminNetworkPolicy schema 202 - Accepted AdminNetworkPolicy schema 401 - Unauthorized Empty 2.2.2. /apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies/{name} Table 2.6.
Global path parameters Parameter Type Description name string name of the AdminNetworkPolicy HTTP method DELETE Description delete an AdminNetworkPolicy Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified AdminNetworkPolicy Table 2.9. HTTP responses HTTP code Response body 200 - OK AdminNetworkPolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified AdminNetworkPolicy Table 2.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.11. HTTP responses HTTP code Response body 200 - OK AdminNetworkPolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified AdminNetworkPolicy Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present.
The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. Body parameters Parameter Type Description body AdminNetworkPolicy schema Table 2.14. HTTP responses HTTP code Response body 200 - OK AdminNetworkPolicy schema 201 - Created AdminNetworkPolicy schema 401 - Unauthorized Empty 2.2.3. /apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies/{name}/status Table 2.15. Global path parameters Parameter Type Description name string name of the AdminNetworkPolicy HTTP method GET Description read status of the specified AdminNetworkPolicy Table 2.16. HTTP responses HTTP code Response body 200 - OK AdminNetworkPolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified AdminNetworkPolicy Table 2.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.18. HTTP responses HTTP code Response body 200 - OK AdminNetworkPolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified AdminNetworkPolicy Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
Table 2.20. Body parameters Parameter Type Description body AdminNetworkPolicy schema Table 2.21. HTTP responses HTTP code Response body 200 - OK AdminNetworkPolicy schema 201 - Created AdminNetworkPolicy schema 401 - Unauthorized Empty
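To tie the preceding schema descriptions together, the following is a minimal sketch of an AdminNetworkPolicy manifest that exercises the subject, ingress action, peer namespace selector, and portNumber fields documented above. The metadata name, labels, rule name, priority value, and port number are illustrative placeholders rather than values defined by this reference.

apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: sample-anp                            # illustrative name
spec:
  priority: 10                                # lower values take precedence over higher values
  subject:
    namespaces:
      matchLabels:
        kubernetes.io/metadata.name: example-app   # illustrative namespace label
  ingress:
  - name: allow-from-monitoring               # rule name, 100 characters or fewer
    action: Allow                             # one of Allow, Deny, Pass
    from:
    - namespaces:
        matchLabels:
          purpose: monitoring                 # illustrative label on the peer namespaces
    ports:
    - portNumber:
        protocol: TCP
        port: 8080                            # illustrative destination port

A manifest like this can be created with a POST to /apis/policy.networking.k8s.io/v1alpha1/adminnetworkpolicies, as documented in section 2.2.1.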
[ "type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"`", "// other fields }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/network_apis/adminnetworkpolicy-policy-networking-k8s-io-v1alpha1
Chapter 6. Installing a cluster on VMC with user-provisioned infrastructure
Chapter 6. Installing a cluster on VMC with user-provisioned infrastructure In OpenShift Container Platform version 4.12, you can install a cluster on VMware vSphere infrastructure that you provision by deploying it to VMware Cloud (VMC) on AWS . Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automate the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 6.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Configure the following firewall rules: An ANY:ANY firewall rule between the OpenShift Container Platform compute network and the internet. This is used by nodes and applications to download container images. An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any other Linux-based host; it must have internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. 
The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 6.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 6.2. vSphere prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned block registry storage . For more information on persistent storage, see Understanding persistent storage . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.4. 
VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 6.1. Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Important Installing a cluster on VMware vSphere versions 7.0 and 7.0 Update 1 is deprecated. These versions are still fully supported, but all vSphere 6.x versions are no longer supported. Version 4.12 of OpenShift Container Platform requires VMware virtual hardware version 15 or later. To update the hardware version for your vSphere virtual machines, see the "Updating hardware on nodes running in vSphere" article in the Updating clusters section. Table 6.2. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7.0 Update 2 or later; 8.0 Update 1 or later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 6.5. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. Additional resources To remove a third-party CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 
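As a concrete illustration of the platform: vsphere requirement called out in the note above, the platform stanza of an install-config.yaml file for this kind of deployment might look like the following minimal sketch. The field names reflect the flat vSphere platform fields used by OpenShift Container Platform 4.12, and the values reuse the VMC example names given earlier in this chapter; treat everything here as a placeholder and confirm the exact fields against the installation configuration parameters reference for your version.

platform:
  vsphere:
    vcenter: vcenter.sddc.example.com       # vCenter hostname; placeholder
    username: administrator@vsphere.local   # placeholder vSphere account
    password: <password>                    # replace with the account password
    datacenter: SDDC-Datacenter             # example datacenter name from this chapter
    defaultDatastore: WorkloadDatastore     # example datastore name from this chapter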
6.6. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 6.6.1. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that you provided, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, your vSphere account must include privileges for reading and creating the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. Example 6.1. Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. 
InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 6.2. Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual 
machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propagate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 6.3. 
Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion, where generally implies that you meet all VMware best practices for vMotion. To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . Using Storage vMotion can cause issues and is not supported. If you are using vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses infrastructure that you provided, you must create the following resources in your vCenter instance: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. 
In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Required IP Addresses DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 6.3. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Additional resources Creating a compute machine set on vSphere 6.6.2. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 6.4. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 6.6.3. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.5. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 6.6.4. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 6.6.5. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. 
The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 6.6.5.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 6.6.5.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 6.6. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 6.7. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 6.8. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Ethernet adaptor hardware address requirements When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges: 00:05:69:00:00:00 to 00:05:69:FF:FF:FF 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF 00:50:56:00:00:00 to 00:50:56:FF:FF:FF If a MAC address outside the VMware OUI is used, the cluster installation will not succeed. NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. 
For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers. 6.6.6. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 6.9. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>.
DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 6.6.6.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 6.4. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 6.5. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 
3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 6.6.7. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 6.10. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. 
Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 6.11. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 6.6.7.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 6.6. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 6.7. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. 
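Before you start the procedure that follows, you can run a quick reachability check from your installation node against the load balancer front ends. This is only a sketch; it assumes the single load balancer address 192.168.1.5 used in the examples in this document and relies on the bash /dev/tcp redirection.

for port in 6443 22623 443 80; do
  # Attempt a plain TCP connection to each front-end port on the load balancer
  if timeout 5 bash -c "</dev/tcp/192.168.1.5/${port}" 2>/dev/null; then
    echo "port ${port}: reachable"
  else
    echo "port ${port}: NOT reachable"
  fi
done

The check only confirms that the load balancer is listening on each port; it does not validate the back-end machines.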
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure.
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 6.8. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 
1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 6.9. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 6.10. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important VMware vSphere region and zone enablement is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of a single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to its respective datacenter or cluster.
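As a sanity check, you can list the categories and tags before you continue. This sketch assumes that the govc CLI is already configured with credentials for your vCenter, for example through the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables.

USD govc tags.category.ls
USD govc tags.ls

Confirm that the openshift-region and openshift-zone categories and your region and zone tags appear in the output.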
The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Table 6.12. Example of a configuration with multiple vSphere datacenters that run in a single VMware vCenter Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b 6.11. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 6.12. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . 
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 6.12.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 12 resourcePool: "/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>" 13 diskType: thin 14 fips: false 15 pullSecret: '{"auths": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, ( - ), and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 5 The number of control plane machines that you add to the cluster. Because the cluster uses this value as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 The fully-qualified hostname or IP address of the vCenter server. Important The Cluster Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. 8 The name of the user for accessing the server. 9 The password associated with the vSphere user. 10 The vSphere datacenter. 11 The default vSphere datastore to use. 12 Optional parameter: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file.
13 Optional parameter: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter. 14 The vSphere disk provisioning method. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 16 The pull secret that you obtained from OpenShift Cluster Manager Hybrid Cloud Console . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). 6.12.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.12.3. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file to deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Important VMware vSphere region and zone enablement is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important The example uses the govc command. The govc command is an open source command available from VMware. The govc command is not available from Red Hat. Red Hat Support does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. 
Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Note You cannot change a failure domain after you install an OpenShift Container Platform cluster on the VMware vSphere platform. You can add additional failure domains after cluster installation. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1> Attach the zone tags to each vCenter cluster object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1 Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements. Sample install-config.yaml file with multiple datacenters defined in a VMware vCenter apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" controlPlane: name: master replicas: 3 vsphere: zones: 3 - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 9 cluster: cluster 10 resourcePool: "/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: "/<datacenter1>/host/<cluster1>" 18 resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" 19 networks: 20 - <VM_Network1_name> datastore: "/<datacenter1>/datastore/<datastore1>" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: "/<datacenter2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<datacenter2>/datastore/<datastore2>" resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<datacenter2>/vm/<folder2>" # ... 1 You must set TechPreviewNoUpgrade as the value for this parameter, so that you can use the VMware vSphere region and zone enablement feature. 2 3 An optional parameter for specifying a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category.
If you do not define this parameter, nodes will be distributed among all defined failure-domains. 4 5 6 7 8 9 10 11 The default vCenter topology. The installation program uses this topology information to deploy the bootstrap node. Additionally, the topology defines the default datastore for vSphere persistent volumes. 12 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. If you do not define this parameter, the installation program uses the default vCenter topology. 13 Defines the name of the failure domain. Each failure domain is referenced in the zones parameter to scope a machine pool to the failure domain. 14 You define a region by using a tag from the openshift-region tag category. The tag must be attached to the vCenter datacenter. 15 You define a zone by using a tag from the openshift-zone tag category. The tag must be attached to the vCenter datacenter. 16 Specifies the vCenter resources associated with the failure domain. 17 An optional parameter for defining the vSphere datacenter that is associated with a failure domain. If you do not define this parameter, the installation program uses the default vCenter topology. 18 An optional parameter for stating the absolute file path for the compute cluster that is associated with the failure domain. If you do not define this parameter, the installation program uses the default vCenter topology. 19 An optional parameter for the installer-provisioned infrastructure. The parameter sets the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, resources are installed in the root of the cluster /example_datacenter/host/example_cluster/Resources . 20 An optional parameter that lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. If you do not define this parameter, the installation program uses the default vCenter topology. 21 An optional parameter for specifying a datastore to use for provisioning volumes. If you do not define this parameter, the installation program uses the default vCenter topology. 6.13. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. 
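For reference, the following sketch shows one way to list and approve pending CSRs with the oc client after such a restart. The CSR name shown is a placeholder, and the jq filter simply selects requests that have not yet been issued a certificate.

# List all CSRs; pending requests have no issued certificate yet
oc get csr

# Approve a single request by name (example name only)
oc adm certificate approve csr-8b2br

# Approve every request that has not been issued a certificate
oc get csr -o json \
  | jq -r '.items[] | select(.status.certificate == null) | .metadata.name' \
  | xargs --no-run-if-empty oc adm certificate approve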
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the compute machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 6.14. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware Cloud on AWS. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 6.15. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on user-provisioned infrastructure on VMware vSphere, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on vSphere hosts. 
When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. 
You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that compute machine sets can apply configurations to. Optional: Update the configured virtual hardware version in the VM template, if necessary. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Important It is recommended that you update the hardware version of the VM template to version 15 before creating VMs from it, if necessary. Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. If your imported template defaults to hardware version 13, you must ensure that your ESXi host is on 6.7U3 or later before upgrading the VM template to hardware version 15. If your vSphere version is less than 6.7U3, you can skip this upgrade step; however, a future version of OpenShift Container Platform is scheduled to remove support for hardware version 13 and vSphere versions less than 6.7U3. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select clone options tab, select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced Parameters . Important The following configuration suggestions are for example purposes only. As a cluster administrator, you must configure resources according to the resource demands placed on your cluster. To best manage cluster resources, consider creating a resource pool from the cluster's root resource pool. Optional: Override default DHCP networking in vSphere. 
To enable static IP networking: Set your static IP configuration: Example command USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before you boot a VM from an OVA in vSphere: Example command USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Add the following configuration parameter names and values by specifying data in the Attribute and Values fields. Ensure that you select the Add button for each parameter that you create. guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . stealclock.enable : If this parameter was not defined, add it and specify TRUE . Create a child resource pool from the cluster's root resource pool. Perform resource allocation in this child resource pool. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the remaining configuration steps. On clicking the Finish button, you have completed the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power Power On . Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied steps Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 6.16. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. After your vSphere template deploys in your OpenShift Container Platform cluster, you can deploy a virtual machine (VM) for a machine in that cluster. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select storage tab, select storage for your configuration and disk files. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . 
Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. If many networks exist, select Add New Device > Network Adapter , and then enter your network information in the fields provided by the New Network menu item. Complete the remaining configuration steps. On clicking the Finish button, you have completed the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power Power On . steps Continue to create more compute machines for your cluster. 6.17. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. 
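For a sense of how quickly these directories grow in practice, the following is a minimal sketch you might run against a node of an already running cluster (not part of this installation procedure; it assumes oc debug access and uses <node_name> as a placeholder for a node listed by oc get nodes):

# Show current usage of /var and the container storage directory on one node.
oc debug node/<node_name> -- chroot /host df -h /var /var/lib/containers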
Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 6.18. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . 
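To check which client version, if any, is already on your PATH before you download a new binary, you can run the following quick check (a sketch; the output format varies by release and no cluster connection is required):

# Print only the oc client version; no cluster access is needed.
oc version --client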
Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 6.19. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... 
INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 6.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
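While machines join the cluster, you might need to repeat the approval commands shown later in this section several times. The following is a minimal polling sketch that wraps those same commands (an example only, not a documented or production-grade method; it approves every pending CSR without inspecting it, so use it only during initial installation and stop it once all nodes are Ready):

# Approve all pending CSRs every 60 seconds during initial installation.
# Stop the loop with Ctrl+C after all nodes report the Ready status.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done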
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 6.22. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Configure the Operators that are not available. 6.22.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 6.22.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 6.22.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. 
This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resourses found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 6.22.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 6.22.2.3. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 
Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 6.23. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... 
The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 6.24. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 6.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . 
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 6.26. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues.
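For the optional last step, the following sketch lists the events recorded by the vSphere Problem Detector Operator (this assumes the detector runs in the openshift-cluster-storage-operator namespace, which might differ between releases):

# List recent vSphere Problem Detector events, oldest first.
oc get events -n openshift-cluster-storage-operator --sort-by=.metadata.creationTimestamp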
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 12 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 13 diskType: thin 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "govc tags.category.create -d \"OpenShift region\" openshift-region", "govc tags.category.create -d \"OpenShift zone\" openshift-zone", "govc tags.create -c <region_tag_category> <region_tag>", "govc tags.create -c <zone_tag_category> <zone_tag>", "govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>", "govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1", "apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" controlPlane: name: master replicas: 3 vsphere: zones: 3 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 9 cluster: cluster 10 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: \"/<datacenter1>/host/<cluster1>\" 18 resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" 19 networks: 20 - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: 
\"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\"", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }", "base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64", "base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64", "base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64", "export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"", "export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"", "govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m 
system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False 
False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_vmc/installing-vmc-user-infra
Chapter 4. Specifics of Individual Software Collections
Chapter 4. Specifics of Individual Software Collections This chapter is focused on the specifics of certain Software Collections and provides additional details concerning these components. 4.1. Red Hat Developer Toolset Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. Red Hat Developer Toolset provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. Similarly to other Software Collections, an additional set of tools is installed into the /opt/ directory. These tools are enabled by the user on demand using the supplied scl utility. Similarly to other Software Collections, these do not replace the Red Hat Enterprise Linux system versions of these tools, nor will they be used in preference to those system versions unless explicitly invoked using the scl utility. For an overview of features, refer to the Features section of the Red Hat Developer Toolset Release Notes . For detailed information regarding usage and changes in 12.1, see the Red Hat Developer Toolset User Guide . 4.2. Maven The rh-maven36 Software Collection, available only for Red Hat Enterprise Linux 7, provides a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting, and documentation from a central piece of information. To install the rh-maven36 Collection, type the following command as root : yum install rh-maven36 To enable this collection, type the following command at a shell prompt: scl enable rh-maven36 bash Global Maven settings, such as remote repositories or mirrors, can be customized by editing the /opt/rh/rh-maven36/root/etc/maven/settings.xml file. For more information about using Maven, refer to the Maven documentation . Usage of plug-ins is described in this section ; to find documentation regarding individual plug-ins, see the index of plug-ins . 4.3. Database Connectors Database connector packages provide the database client functionality, which is necessary for local or remote connection to a database server. Table 4.1, "Interoperability Between Languages and Databases" lists Software Collections with language runtimes that include connectors for certain database servers: yes - the combination is supported no - the combination is not supported Table 4.1. Interoperability Between Languages and Databases Database Language (Software Collection) MariaDB MongoDB MySQL PostgreSQL Redis SQLite3 rh-nodejs4 no no no no no no rh-nodejs6 no no no no no no rh-nodejs8 no no no no no no rh-nodejs10 no no no no no no rh-nodejs12 no no no no no no rh-nodejs14 no no no no no no rh-perl520 yes no yes yes no no rh-perl524 yes no yes yes no no rh-perl526 yes no yes yes no no rh-perl530 yes no yes yes no yes rh-php56 yes yes yes yes no yes rh-php70 yes no yes yes no yes rh-php71 yes no yes yes no yes rh-php72 yes no yes yes no yes rh-php73 yes no yes yes no yes python27 yes yes yes yes no yes rh-python34 no yes no yes no yes rh-python35 yes yes yes yes no yes rh-python36 yes yes yes yes no yes rh-python38 yes no yes yes no yes rh-ror41 yes yes yes yes no yes rh-ror42 yes yes yes yes no yes rh-ror50 yes yes yes yes no yes rh-ruby25 yes yes yes yes no no rh-ruby26 yes yes yes yes no no rh-ruby27 yes yes yes yes no no rh-ruby30 yes no yes yes no yes
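The install-and-enable pattern shown above for rh-maven36 applies to the other collections in the table. As an illustrative sketch only, assuming the rh-python38 collection is installed, you can confirm that a language collection reaches one of the database drivers marked yes in Table 4.1:

# Run a one-line check inside the collection's environment without starting
# an interactive shell; Table 4.1 lists SQLite3 support for rh-python38.
scl enable rh-python38 'python3 -c "import sqlite3; print(sqlite3.sqlite_version)"'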
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.8_release_notes/chap-Individual_Collections
Chapter 7. ConsoleQuickStart [console.openshift.io/v1]
Chapter 7. ConsoleQuickStart [console.openshift.io/v1] Description ConsoleQuickStart is an extension for guiding user through various workflows in the OpenShift web console. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleQuickStartSpec is the desired quick start configuration. 7.1.1. .spec Description ConsoleQuickStartSpec is the desired quick start configuration. Type object Required description displayName durationMinutes introduction tasks Property Type Description accessReviewResources array accessReviewResources contains a list of resources that the user's access will be reviewed against in order for the user to complete the Quick Start. The Quick Start will be hidden if any of the access reviews fail. accessReviewResources[] object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface conclusion string conclusion sums up the Quick Start and suggests the possible steps. (includes markdown) description string description is the description of the Quick Start. (includes markdown) displayName string displayName is the display name of the Quick Start. durationMinutes integer durationMinutes describes approximately how many minutes it will take to complete the Quick Start. icon string icon is a base64 encoded image that will be displayed beside the Quick Start display name. The icon should be an vector image for easy scaling. The size of the icon should be 40x40. introduction string introduction describes the purpose of the Quick Start. (includes markdown) nextQuickStart array (string) nextQuickStart is a list of the following Quick Starts, suggested for the user to try. prerequisites array (string) prerequisites contains all prerequisites that need to be met before taking a Quick Start. (includes markdown) tags array (string) tags is a list of strings that describe the Quick Start. tasks array tasks is the list of steps the user has to perform to complete the Quick Start. tasks[] object ConsoleQuickStartTask is a single step in a Quick Start. 7.1.2. .spec.accessReviewResources Description accessReviewResources contains a list of resources that the user's access will be reviewed against in order for the user to complete the Quick Start. The Quick Start will be hidden if any of the access reviews fail. Type array 7.1.3. .spec.accessReviewResources[] Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. 
name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 7.1.4. .spec.tasks Description tasks is the list of steps the user has to perform to complete the Quick Start. Type array 7.1.5. .spec.tasks[] Description ConsoleQuickStartTask is a single step in a Quick Start. Type object Required description title Property Type Description description string description describes the steps needed to complete the task. (includes markdown) review object review contains instructions to validate the task is complete. The user will select 'Yes' or 'No'. using a radio button, which indicates whether the step was completed successfully. summary object summary contains information about the passed step. title string title describes the task and is displayed as a step heading. 7.1.6. .spec.tasks[].review Description review contains instructions to validate the task is complete. The user will select 'Yes' or 'No'. using a radio button, which indicates whether the step was completed successfully. Type object Required failedTaskHelp instructions Property Type Description failedTaskHelp string failedTaskHelp contains suggestions for a failed task review and is shown at the end of task. (includes markdown) instructions string instructions contains steps that user needs to take in order to validate his work after going through a task. (includes markdown) 7.1.7. .spec.tasks[].summary Description summary contains information about the passed step. Type object Required failed success Property Type Description failed string failed briefly describes the unsuccessfully passed task. (includes markdown) success string success describes the succesfully passed task. 7.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consolequickstarts DELETE : delete collection of ConsoleQuickStart GET : list objects of kind ConsoleQuickStart POST : create a ConsoleQuickStart /apis/console.openshift.io/v1/consolequickstarts/{name} DELETE : delete a ConsoleQuickStart GET : read the specified ConsoleQuickStart PATCH : partially update the specified ConsoleQuickStart PUT : replace the specified ConsoleQuickStart 7.2.1. /apis/console.openshift.io/v1/consolequickstarts HTTP method DELETE Description delete collection of ConsoleQuickStart Table 7.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleQuickStart Table 7.2. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStartList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleQuickStart Table 7.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.4. Body parameters Parameter Type Description body ConsoleQuickStart schema Table 7.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 201 - Created ConsoleQuickStart schema 202 - Accepted ConsoleQuickStart schema 401 - Unauthorized Empty 7.2.2. /apis/console.openshift.io/v1/consolequickstarts/{name} Table 7.6. Global path parameters Parameter Type Description name string name of the ConsoleQuickStart HTTP method DELETE Description delete a ConsoleQuickStart Table 7.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleQuickStart Table 7.9. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleQuickStart Table 7.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.11. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleQuickStart Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. Body parameters Parameter Type Description body ConsoleQuickStart schema Table 7.14. HTTP responses HTTP code Reponse body 200 - OK ConsoleQuickStart schema 201 - Created ConsoleQuickStart schema 401 - Unauthorized Empty
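The schema above can be exercised with a minimal custom resource. The following sketch is an assumption-laden example rather than product content: the resource name, display text, and task wording are placeholders, and only the required spec fields (displayName, description, durationMinutes, introduction, tasks) are set. It is applied and listed with the standard oc client.
$ oc apply -f - <<'EOF'
apiVersion: console.openshift.io/v1
kind: ConsoleQuickStart
metadata:
  name: example-quick-start
spec:
  displayName: Example quick start
  description: A minimal quick start used to illustrate the schema.
  durationMinutes: 5
  introduction: This quick start walks through a single example task.
  tasks:
    - title: Open the web console
      description: Log in to the OpenShift web console and locate the Quick Starts page.
EOF
$ oc get consolequickstarts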
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/console_apis/consolequickstart-console-openshift-io-v1
Preface
Preface Red Hat OpenStack Platform (RHOSP) provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud on top of Red Hat Enterprise Linux. It offers a massively scalable, fault-tolerant platform for the development of cloud-enabled workloads. By using the Shared File Systems service (manila) with Ceph File System (CephFS) through NFS, you can use the same Ceph cluster that you use for block and object storage to provide file shares through the NFS protocol. For more information, see the Shared File Systems service in the Storage Guide . Note For the complete suite of documentation for Red Hat OpenStack Platform, see Red Hat OpenStack Platform Documentation .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_the_shared_file_systems_service_with_cephfs_through_nfs/pr01
7.95. kernel
7.95. kernel 7.95.1. RHSA-2015:1272 - Moderate: kernel security, bug fix, and enhancement update Updated kernel packages that fix multiple security issues, address several hundred bugs, and add numerous enhancements are now available as part of the ongoing support and maintenance of Red Hat Enterprise Linux version 6. This is the seventh regular update. Red Hat Product Security has rated this update as having Moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links in the References section. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2014-3940 , Moderate A flaw was found in the way Linux kernel's Transparent Huge Pages (THP) implementation handled non-huge page migration. A local, unprivileged user could use this flaw to crash the kernel by migrating transparent hugepages. CVE-2014-9683 , Moderate * A buffer overflow flaw was found in the way the Linux kernel's eCryptfs implementation decoded encrypted file names. A local, unprivileged user could use this flaw to crash the system or, potentially, escalate their privileges on the system. CVE-2015-3339 , Moderate * A race condition flaw was found between the chown and execve system calls. When changing the owner of a setuid user binary to root, the race condition could momentarily make the binary setuid root. A local, unprivileged user could potentially use this flaw to escalate their privileges on the system. CVE-2014-3184 , Low * Multiple out-of-bounds write flaws were found in the way the Cherry Cymotion keyboard driver, KYE/Genius device drivers, Logitech device drivers, Monterey Genius KB29E keyboard driver, Petalynx Maxter remote control driver, and Sunplus wireless desktop driver handled HID reports with an invalid report descriptor size. An attacker with physical access to the system could use either of these flaws to write data past an allocated memory buffer. CVE-2014-4652 , Low * An information leak flaw was found in the way the Linux kernel's Advanced Linux Sound Architecture (ALSA) implementation handled access of the user control's state. A local, privileged user could use this flaw to leak kernel memory to user space. CVE-2014-8133 , Low * It was found that the espfix functionality could be bypassed by installing a 16-bit RW data segment into GDT instead of LDT (which espfix checks), and using that segment on the stack. A local, unprivileged user could potentially use this flaw to leak kernel stack addresses. CVE-2014-8709 , Low * An information leak flaw was found in the Linux kernel's IEEE 802.11 wireless networking implementation. When software encryption was used, a remote attacker could use this flaw to leak up to 8 bytes of plaintext. CVE-2015-0239 , Low * It was found that the Linux kernel KVM subsystem's sysenter instruction emulation was not sufficient. An unprivileged guest user could use this flaw to escalate their privileges by tricking the hypervisor to emulate a SYSENTER instruction in 16-bit mode, if the guest OS did not initialize the SYSENTER model-specific registers (MSRs). Note: Certified guest operating systems for Red Hat Enterprise Linux with KVM do initialize the SYSENTER MSRs and are thus not vulnerable to this issue when running on a KVM hypervisor. Red Hat would like to thank Andy Lutomirski for reporting the CVE-2014-8133 issue, and Nadav Amit for reporting the CVE-2015-0239 issue. 
This update fixes several hundred bugs and adds numerous enhancements. Refer to the Red Hat Enterprise Linux 6.7 Release Notes for information on the most significant of these changes, and the following Knowledgebase article for further information: https://access.redhat.com/articles/1466073 All kernel users are advised to upgrade to these updated packages, which contain backported patches to correct these issues and add these enhancements. The system must be rebooted for this update to take effect.
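As a brief, hedged illustration of applying this erratum on a registered system (the exact package versions installed depend on the repositories the system is subscribed to):
# yum update kernel
# reboot
# uname -r     # after the reboot, confirm the running kernel version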
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-kernel
4.2. Example - Laptop
4.2. Example - Laptop One other very common place where power management and savings can really make a difference is laptops. Because laptops by design normally already use drastically less energy than workstations or servers, the potential for absolute savings is less than for other machines. When in battery mode, though, any saving can help to get a few more minutes of battery life out of a laptop. Although this section focuses on laptops in battery mode, you can certainly still use some or all of those tunings while running on AC power as well. Savings for single components usually make a bigger relative difference on laptops than they do on workstations. For example, a 1 Gbit/s network interface running at 100 Mbit/s saves around 3-4 watts. For a typical server with a total power consumption of around 400 watts, this saving is approximately 1 %. On a laptop with a total power consumption of around 40 watts, the power saving on just this one component amounts to 10 % of the total. Specific power-saving optimizations on a typical laptop include: Configure the system BIOS to disable all hardware that you do not use, for example, parallel or serial ports, card readers, webcams, WiFi, and Bluetooth, just to name a few possible candidates. Dim the display in darker environments where you do not need full illumination to read the screen comfortably. Use System + Preferences + Power Management on the GNOME desktop, Kickoff Application Launcher + Computer + System Settings + Advanced + Power Management on the KDE desktop; or gnome-power-manager or xbacklight at the command line; or the function keys on your laptop. Additionally (or alternatively), you can perform many small adjustments to various system settings: use the ondemand governor (enabled by default in Red Hat Enterprise Linux 7) enable AC97 audio power-saving (enabled by default in Red Hat Enterprise Linux 7): enable USB auto-suspend: Note that USB auto-suspend does not work correctly with all USB devices. mount file systems using relatime (default in Red Hat Enterprise Linux 7): reduce screen brightness to 50 or less, for example: activate DPMS for screen idle: deactivate Wi-Fi:
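The individual tunings listed above can be combined into a short sequence of commands. This is only a sketch: the mount point /home, the backlight value, and the PCI address of the Wi-Fi adapter are assumptions that vary between laptops, so adapt them to your hardware.
# echo Y > /sys/module/snd_ac97_codec/parameters/power_save                   # AC97 audio power saving
# for i in /sys/bus/usb/devices/*/power/autosuspend; do echo 1 > $i; done     # USB auto-suspend
# mount -o remount,relatime /home                                             # relatime on a chosen mount point
$ xbacklight -set 50                                                          # dim the display
$ xset +dpms; xset dpms 0 0 300                                               # blank the screen after 5 minutes idle
# echo 1 > /sys/bus/pci/devices/0000:02:00.0/rf_kill                          # disable Wi-Fi (PCI address is an assumption)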
[ "~]# echo Y > /sys/module/snd_ac97_codec/parameters/power_save", "~]# for i in /sys/bus/usb/devices/*/power/autosuspend; do echo 1 > USDi; done", "~]# mount -o remount,relatime mountpoint", "~]USD xbacklight -set 50", "~]USD xset +dpms; xset dpms 0 0 300", "~]# echo 1 > /sys/bus/pci/devices/*/rf_kill" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/example_laptop
2.4.4. Monitoring Storage
2.4.4. Monitoring Storage Monitoring storage normally takes place at two different levels: Monitoring for sufficient disk space Monitoring for storage-related performance problems The reason for this is that it is possible to have dire problems in one area and no problems whatsoever in the other. For example, it is possible to cause a disk drive to run out of disk space without once causing any kind of performance-related problems. Likewise, it is possible to have a disk drive that has 99% free space, yet is being pushed past its limits in terms of performance. However, it is more likely that the average system experiences varying degrees of resource shortages in both areas. Because of this, it is also likely that -- to some extent -- problems in one area impact the other. Most often this type of interaction takes the form of poorer and poorer I/O performance as a disk drive nears 0% free space although, in cases of extreme I/O loads, it might be possible to slow I/O throughput to such a level that applications no longer run properly. In any case, the following statistics are useful for monitoring storage: Free Space Free space is probably the one resource all system administrators watch closely; it would be a rare administrator that never checks on free space (or has some automated way of doing so). File System-Related Statistics These statistics (such as number of files/directories, average file size, etc.) provide additional detail over a single free space percentage. As such, these statistics make it possible for system administrators to configure the system to give the best performance, as the I/O load imposed by a file system full of many small files is not the same as that imposed by a file system filled with a single massive file. Transfers per Second This statistic is a good way of determining whether a particular device's bandwidth limitations are being reached. Reads/Writes per Second A slightly more detailed breakdown of transfers per second, these statistics allow the system administrator to more fully understand the nature of the I/O loads a storage device is experiencing. This can be critical, as some storage technologies have widely different performance characteristics for read versus write operations.
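These statistics map directly onto standard command-line tools. A minimal sketch, assuming the sysstat package is installed for iostat and using sda as a placeholder device name:
$ df -h              # free space per mounted file system
$ df -i              # inode usage, a rough proxy for file and directory counts
$ iostat -x sda 5    # transfers, reads, and writes per second, sampled every 5 seconds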
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-resource-what-to-monitor-storage
Chapter 9. Quotas
Chapter 9. Quotas 9.1. Resource quotas per project A resource quota , defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that might be consumed by resources in that project. This guide describes how resource quotas work, how cluster administrators can set and manage resource quotas on a per project basis, and how developers and cluster administrators can view them. 9.1.1. Resources managed by quotas The following describes the set of compute resources and object types that can be managed by a quota. Note A pod is in a terminal state if status.phase in (Failed, Succeeded) is true. Table 9.1. Compute resources managed by quota Resource Name Description cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. requests.cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. requests.memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. limits.cpu The sum of CPU limits across all pods in a non-terminal state cannot exceed this value. limits.memory The sum of memory limits across all pods in a non-terminal state cannot exceed this value. Table 9.2. Storage resources managed by quota Resource Name Description requests.storage The sum of storage requests across all persistent volume claims in any state cannot exceed this value. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. <storage-class-name>.storageclass.storage.k8s.io/requests.storage The sum of storage requests across all persistent volume claims in any state that have a matching storage class, cannot exceed this value. <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims The total number of persistent volume claims with a matching storage class that can exist in the project. ephemeral-storage The sum of local ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. requests.ephemeral-storage The sum of ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. limits.ephemeral-storage The sum of ephemeral storage limits across all pods in a non-terminal state cannot exceed this value. Table 9.3. Object counts managed by quota Resource Name Description pods The total number of pods in a non-terminal state that can exist in the project. replicationcontrollers The total number of ReplicationControllers that can exist in the project. resourcequotas The total number of resource quotas that can exist in the project. services The total number of services that can exist in the project. services.loadbalancers The total number of services of type LoadBalancer that can exist in the project. 
services.nodeports The total number of services of type NodePort that can exist in the project. secrets The total number of secrets that can exist in the project. configmaps The total number of ConfigMap objects that can exist in the project. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. openshift.io/imagestreams The total number of imagestreams that can exist in the project. 9.1.2. Quota scopes Each quota can have an associated set of scopes . A quota only measures usage for a resource if it matches the intersection of enumerated scopes. Adding a scope to a quota restricts the set of resources to which that quota can apply. Specifying a resource outside of the allowed set results in a validation error. Scope Description BestEffort Match pods that have best effort quality of service for either cpu or memory . NotBestEffort Match pods that do not have best effort quality of service for cpu and memory . A BestEffort scope restricts a quota to limiting the following resources: pods A NotBestEffort scope restricts a quota to tracking the following resources: pods memory requests.memory limits.memory cpu requests.cpu limits.cpu 9.1.3. Quota enforcement After a resource quota for a project is first created, the project restricts the ability to create any new resources that may violate a quota constraint until it has calculated updated usage statistics. After a quota is created and usage statistics are updated, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource. When you delete a resource, your quota use is decremented during the full recalculation of quota statistics for the project. A configurable amount of time determines how long it takes to reduce quota usage statistics to their current observed system value. If project modifications exceed a quota usage limit, the server denies the action, and an appropriate error message is returned to the user explaining the quota constraint violated, and what their currently observed usage statistics are in the system. 9.1.4. Requests versus limits When allocating compute resources, each container might specify a request and a limit value each for CPU, memory, and ephemeral storage. Quotas can restrict any of these values. If the quota has a value specified for requests.cpu or requests.memory , then it requires that every incoming container make an explicit request for those resources. If the quota has a value specified for limits.cpu or limits.memory , then it requires that every incoming container specify an explicit limit for those resources. 9.1.5. Sample resource quota definitions core-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: "10" 1 persistentvolumeclaims: "4" 2 replicationcontrollers: "20" 3 secrets: "10" 4 services: "10" 5 services.loadbalancers: "2" 6 1 The total number of ConfigMap objects that can exist in the project. 2 The total number of persistent volume claims (PVCs) that can exist in the project. 3 The total number of replication controllers that can exist in the project. 4 The total number of secrets that can exist in the project. 5 The total number of services that can exist in the project. 6 The total number of services of type LoadBalancer that can exist in the project. 
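As a hedged illustration of how the core-object-counts sample above behaves once applied (the project name demoproject is a placeholder, and the exact error text can vary between versions):
$ oc apply -f core-object-counts.yaml -n demoproject
$ oc create configmap cm-test -n demoproject
# After 10 ConfigMap objects exist in the project, further create requests are
# rejected with a Forbidden error that names the core-object-counts quota.
$ oc describe quota core-object-counts -n demoproject    # compare the Used and Hard columns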
openshift-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: "10" 1 1 The total number of image streams that can exist in the project. compute-resources.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: "4" 1 requests.cpu: "1" 2 requests.memory: 1Gi 3 limits.cpu: "2" 4 limits.memory: 2Gi 5 1 The total number of pods in a non-terminal state that can exist in the project. 2 Across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core. 3 Across all pods in a non-terminal state, the sum of memory requests cannot exceed 1Gi. 4 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed 2 cores. 5 Across all pods in a non-terminal state, the sum of memory limits cannot exceed 2Gi. besteffort.yaml apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: "1" 1 scopes: - BestEffort 2 1 The total number of pods in a non-terminal state with BestEffort quality of service that can exist in the project. 2 Restricts the quota to only matching pods that have BestEffort quality of service for either memory or CPU. compute-resources-long-running.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: "4" 1 limits.cpu: "4" 2 limits.memory: "2Gi" 3 scopes: - NotTerminating 4 1 The total number of pods in a non-terminal state. 2 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. 4 Restricts the quota to only matching pods where spec.activeDeadlineSeconds is set to nil . Build pods fall under NotTerminating unless the RestartNever policy is applied. compute-resources-time-bound.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: "2" 1 limits.cpu: "1" 2 limits.memory: "1Gi" 3 scopes: - Terminating 4 1 The total number of pods in a terminating state. 2 Across all pods in a terminating state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a terminating state, the sum of memory limits cannot exceed this value. 4 Restricts the quota to only matching pods where spec.activeDeadlineSeconds >=0 . For example, this quota charges for build or deployer pods, but not long running pods like a web server or database. storage-consumption.yaml apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: "10" 1 requests.storage: "50Gi" 2 gold.storageclass.storage.k8s.io/requests.storage: "10Gi" 3 silver.storageclass.storage.k8s.io/requests.storage: "20Gi" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" 5 bronze.storageclass.storage.k8s.io/requests.storage: "0" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9 1 The total number of persistent volume claims in a project 2 Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value. 3 Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value. 4 Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value. 
5 Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value. 6 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0 , it means bronze storage class cannot request storage. 7 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0 , it means bronze storage class cannot create claims. 8 Across all pods in a non-terminal state, the sum of ephemeral storage requests cannot exceed 2Gi. 9 Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed 4Gi. 9.1.6. Creating a quota You can create a quota to constrain resource usage in a given project. Procedure Define the quota in a file. Use the file to create the quota and apply it to a project: USD oc create -f <file> [-n <project_name>] For example: USD oc create -f core-object-counts.yaml -n demoproject 9.1.6.1. Creating object count quotas You can create an object count quota for all standard namespaced resource types on OpenShift Container Platform, such as BuildConfig and DeploymentConfig objects. An object quota count places a defined quota on all standard namespaced resource types. When using a resource quota, an object is charged against the quota upon creation. These types of quotas are useful to protect against exhaustion of resources. The quota can only be created if there are enough spare resources within the project. Procedure To configure an object count quota for a resource: Run the following command: USD oc create quota <name> \ --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1 1 The <resource> variable is the name of the resource, and <group> is the API group, if applicable. Use the oc api-resources command for a list of resources and their associated API groups. For example: USD oc create quota test \ --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 Example output resourcequota "test" created This example limits the listed resources to the hard limit in each project in the cluster. Verify that the quota was created: USD oc describe quota test Example output Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4 9.1.6.2. Setting resource quota for extended resources Overcommitment of resources is not allowed for extended resources, so you must specify requests and limits for the same extended resource in a quota. Currently, only quota items with the prefix requests. is allowed for extended resources. The following is an example scenario of how to set resource quota for the GPU resource nvidia.com/gpu . Procedure Determine how many GPUs are available on a node in your cluster. For example: # oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu' Example output openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0 In this example, 2 GPUs are available. Set a quota in the namespace nvidia . 
In this example, the quota is 1 : # cat gpu-quota.yaml Example output apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1 Create the quota: # oc create -f gpu-quota.yaml Example output resourcequota/gpu-quota created Verify that the namespace has the correct quota set: # oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1 Define a pod that asks for a single GPU. The following example definition file is called gpu-pod.yaml : apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: "compute,utility" - name: NVIDIA_REQUIRE_CUDA value: "cuda>=5.0" command: ["sleep"] args: ["infinity"] resources: limits: nvidia.com/gpu: 1 Create the pod: # oc create -f gpu-pod.yaml Verify that the pod is running: # oc get pods Example output NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m Verify that the quota Used counter is correct: # oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1 Attempt to create a second GPU pod in the nvidia namespace. This is technically available on the node because it has 2 GPUs: # oc create -f gpu-pod.yaml Example output Error from server (Forbidden): error when creating "gpu-pod.yaml": pods "gpu-pod-f7z2w" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1 This Forbidden error message is expected because you have a quota of 1 GPU and this pod tried to allocate a second GPU, which exceeds its quota. 9.1.7. Viewing a quota You can view usage statistics related to any hard limits defined in a project's quota by navigating in the web console to the project's Quota page. You can also use the CLI to view quota details. Procedure Get the list of quotas defined in the project. For example, for a project called demoproject : USD oc get quota -n demoproject Example output NAME AGE besteffort 11m compute-resources 2m core-object-counts 29m Describe the quota you are interested in, for example the core-object-counts quota: USD oc describe quota core-object-counts -n demoproject Example output Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10 9.1.8. Configuring explicit resource quotas Configure explicit resource quotas in a project request template to apply specific resource quotas in new projects. Prerequisites Access to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ). Procedure Add a resource quota definition to a project request template: If a project request template does not exist in a cluster: Create a bootstrap project template and output it to a file called template.yaml : USD oc adm create-bootstrap-project-template -o yaml > template.yaml Add a resource quota definition to template.yaml . The following example defines a resource quota named 'storage-consumption'. 
The definition must be added before the parameters: section in the template: - apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: "10" 1 requests.storage: "50Gi" 2 gold.storageclass.storage.k8s.io/requests.storage: "10Gi" 3 silver.storageclass.storage.k8s.io/requests.storage: "20Gi" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" 5 bronze.storageclass.storage.k8s.io/requests.storage: "0" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" 7 1 The total number of persistent volume claims in a project. 2 Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value. 3 Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value. 4 Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value. 5 Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value. 6 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this value is set to 0 , the bronze storage class cannot request storage. 7 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this value is set to 0 , the bronze storage class cannot create claims. Create a project request template from the modified template.yaml file in the openshift-config namespace: USD oc create -f template.yaml -n openshift-config Note To include the configuration as a kubectl.kubernetes.io/last-applied-configuration annotation, add the --save-config option to the oc create command. By default, the template is called project-request . If a project request template already exists within a cluster: Note If you declaratively or imperatively manage objects within your cluster by using configuration files, edit the existing project request template through those files instead. List templates in the openshift-config namespace: USD oc get templates -n openshift-config Edit an existing project request template: USD oc edit template <project_request_template> -n openshift-config Add a resource quota definition, such as the preceding storage-consumption example, into the existing template. The definition must be added before the parameters: section in the template. If you created a project request template, reference it in the cluster's project configuration resource: Access the project configuration resource for editing: By using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . By using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section of the project configuration resource to include the projectRequestTemplate and name parameters. The following example references the default project request template name project-request : apiVersion: config.openshift.io/v1 kind: Project metadata: ... 
spec: projectRequestTemplate: name: project-request Verify that the resource quota is applied when projects are created: Create a project: USD oc new-project <project_name> List the project's resource quotas: USD oc get resourcequotas Describe the resource quota in detail: USD oc describe resourcequotas <resource_quota_name> 9.2. Resource quotas across multiple projects A multi-project quota, defined by a ClusterResourceQuota object, allows quotas to be shared across multiple projects. Resources used in each selected project are aggregated and that aggregate is used to limit resources across all the selected projects. This guide describes how cluster administrators can set and manage resource quotas across multiple projects. 9.2.1. Selecting multiple projects during quota creation When creating quotas, you can select multiple projects based on annotation selection, label selection, or both. Procedure To select projects based on annotations, run the following command: USD oc create clusterquota for-user \ --project-annotation-selector openshift.io/requester=<user_name> \ --hard pods=10 \ --hard secrets=20 This creates the following ClusterResourceQuota object: apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: "10" secrets: "20" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: "10" secrets: "20" used: pods: "1" secrets: "9" total: 5 hard: pods: "10" secrets: "20" used: pods: "1" secrets: "9" 1 The ResourceQuotaSpec object that will be enforced over the selected projects. 2 A simple key-value selector for annotations. 3 A label selector that can be used to select projects. 4 A per-namespace map that describes current quota usage in each selected project. 5 The aggregate usage across all selected projects. This multi-project quota document controls all projects requested by <user_name> using the default project request endpoint. You are limited to 10 pods and 20 secrets. Similarly, to select projects based on labels, run this command: USD oc create clusterresourcequota for-name \ 1 --project-label-selector=name=frontend \ 2 --hard=pods=10 --hard=secrets=20 1 Both clusterresourcequota and clusterquota are aliases of the same command. for-name is the name of the ClusterResourceQuota object. 2 To select projects by label, provide a key-value pair by using the format --project-label-selector=key=value . This creates the following ClusterResourceQuota object definition: apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: "10" secrets: "20" selector: annotations: null labels: matchLabels: name: frontend 9.2.2. Viewing applicable cluster resource quotas A project administrator is not allowed to create or modify the multi-project quota that limits his or her project, but the administrator is allowed to view the multi-project quota documents that are applied to his or her project. The project administrator can do this via the AppliedClusterResourceQuota resource. Procedure To view quotas applied to a project, run: USD oc describe AppliedClusterResourceQuota Example output Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20 9.2.3. 
Selection granularity Because of the locking that occurs when claiming quota allocations, the number of active projects selected by a multi-project quota is an important consideration. Selecting more than 100 projects under a single multi-project quota can have detrimental effects on API server responsiveness in those projects.
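For the label-based selector in the for-name example above to match anything, the selected projects need the corresponding label. A minimal sketch, assuming a project named frontend-app already exists:
$ oc label namespace frontend-app name=frontend
$ oc describe clusterresourcequota for-name                  # as a cluster administrator, view aggregate usage
$ oc describe appliedclusterresourcequota -n frontend-app    # as a project administrator, view the applied quota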
[ "apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5 services.loadbalancers: \"2\" 6", "apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 limits.cpu: \"2\" 4 limits.memory: 2Gi 5", "apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 scopes: - NotTerminating 4", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 scopes: - Terminating 4", "apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9", "oc create -f <file> [-n <project_name>]", "oc create -f core-object-counts.yaml -n demoproject", "oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1", "oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4", "resourcequota \"test\" created", "oc describe quota test", "Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4", "oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'", "openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0", "cat gpu-quota.yaml", "apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1", "oc create -f gpu-quota.yaml", "resourcequota/gpu-quota created", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1", "apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1", "oc create -f gpu-pod.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1", "oc create -f gpu-pod.yaml", "Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: 
requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1", "oc get quota -n demoproject", "NAME AGE besteffort 11m compute-resources 2m core-object-counts 29m", "oc describe quota core-object-counts -n demoproject", "Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "- apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7", "oc create -f template.yaml -n openshift-config", "oc get templates -n openshift-config", "oc edit template <project_request_template> -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: project-request", "oc new-project <project_name>", "oc get resourcequotas", "oc describe resourcequotas <resource_quota_name>", "oc create clusterquota for-user --project-annotation-selector openshift.io/requester=<user_name> --hard pods=10 --hard secrets=20", "apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: \"10\" secrets: \"20\" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\" total: 5 hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\"", "oc create clusterresourcequota for-name \\ 1 --project-label-selector=name=frontend \\ 2 --hard=pods=10 --hard=secrets=20", "apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: \"10\" secrets: \"20\" selector: annotations: null labels: matchLabels: name: frontend", "oc describe AppliedClusterResourceQuota", "Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/building_applications/quotas
Chapter 11. Managing an instance
Chapter 11. Managing an instance You can perform management operations on an instance, such as resizing the instance or shelving the instance. For a complete list of management operations, see Instance management operations . 11.1. Resizing an instance You can resize an instance if you need to increase or decrease the memory or CPU count of the instance. To resize an instance, select a new flavor for the instance that has the required capacity. Resizing an instance rebuilds and restarts the instance. Procedure Retrieve the name or ID of the instance that you want to resize: Retrieve the name or ID of the new flavor that you want to use to resize the instance: Note When you resize an instance, you must use a new flavor. Resize the instance: Replace <flavor> with the name or ID of the flavor that you retrieved in step 2. Replace <instance> with the name or ID of the instance that you are resizing. Note Resizing can take time. The operating system on the instance performs a controlled shutdown before the instance is powered off and the instance is resized. During this time, the instance status is RESIZE : When the resize completes, the instance status changes to VERIFY_RESIZE . You must now either confirm or revert the resize: To confirm the resize, enter the following command: To revert the resize, enter the following command: The instance is reverted to the original flavor and the status is changed to ACTIVE . Note The cloud might be configured to automatically confirm instance resizes if you do not confirm or revert within a configured time frame. 11.2. Creating an instance snapshot A snapshot is an image that captures the state of the running disk of an instance. You can take a snapshot of an instance to create an image that you can use as a template to create new instances. Snapshots allow you to create new instances from another instance, and restore the state of an instance. If you delete an instance on which a snapshot is based, you can use the snapshot image to create a new instance to the same state as the snapshot. Procedure Retrieve the name or ID of the instance that you want to take a snapshot of: Create the snapshot: Replace <image_name> with a name for the new snapshot image. Replace <instance> with the name or ID of the instance that you want to create the snapshot from. Optional: To ensure that the disk state is consistent when you use the instance snapshot as a template to create new instances, enable the QEMU guest agent and specify that the filesystem must be quiesced during snapshot processing by adding the following metadata to the snapshot image: The QEMU guest agent is a background process that helps management applications execute instance OS level commands. Enabling this agent adds another device to the instance, which consumes a PCI slot, and limits the number of other devices you can allocate to the instance. It also causes Windows instances to display a warning message about an unknown hardware device. 11.3. Rescuing an instance In an emergency such as a system failure or access failure, you can put an instance in rescue mode. This shuts down the instance, reboots it with a new instance disk, and mounts the original instance disk and config drive as a volume on the rebooted instance. You can connect to the rebooted instance to view the original instance disk to repair the system and recover your data. 
Procedure Perform the instance rescue: Optional: By default, the instance is booted from a rescue image provided by the cloud admin, or a fresh copy of the original instance image. Use the --image option to specify an alternative image to use when rebooting the instance in rescue mode. Replace <instance> with the name or ID of the instance that you want to rescue. Connect to the rescued instance to fix the issue. Restart the instance from the normal boot disk: 11.4. Shelving an instance Shelving is useful if you have an instance that you are not using, but that you do not want to delete. When you shelve an instance, you retain the instance data and resource allocations, but clear the instance memory. Depending on the cloud configuration, shelved instances are moved to the SHELVED_OFFLOADED state either immediately or after a timed delay. When SHELVED_OFFLOADED , the instance data and resource allocations are deleted. When you shelve an instance, the Compute service generates a snapshot image that captures the state of the instance, and allocates a name to the image in the following format: <instance>-shelved . This snapshot image is deleted when the instance is unshelved or deleted. If you no longer need a shelved instance, you can delete it. You can shelve more than one instance at a time. Procedure Retrieve the name or ID of the instance or instances that you want to shelve: Shelve the instance or instances: Replace <instance> with the name or ID of the instance that you want to shelve. You can specify more than one instance to shelve, as required. Verify that the instance has been shelved: Shelved instances have status SHELVED_OFFLOADED . 11.5. Instance management operations After you create an instance, you can perform the following management operations. Table 11.1. Management operations Operation Description Command Stop an instance Stops the instance. openstack server stop Start an instance Starts a stopped instance. openstack server start Pause a running instance Immediately pause a running instance. The state of the instance is stored in memory (RAM). The paused instance continues to run in a frozen state. You are not prompted to confirm the pause action. openstack server pause Resume running of a paused instance Immediately resume a paused instance. You are not prompted to confirm the resume action. openstack server unpause Suspend a running instance Immediately suspend a running instance. The state of the instance is stored on the instance disk. You are not prompted to confirm the suspend action. openstack server suspend Resume running of a suspended instance Immediately resume a suspended instance. The state of the instance is stored on the instance disk. You are not prompted to confirm the resume action. openstack server resume Delete an instance Permanently destroy the instance. You are not prompted to confirm the destroy action. Deleted instances are not recoverable unless the cloud has been configured to enable soft delete. Note Deleting an instance does not delete its attached volumes. You must delete attached volumes separately. For more information, see Deleting a Block Storage service volume in the Configuring persistent storage guide. openstack server delete Edit the instance metadata You can use instance metadata to specify the properties of an instance. For more information, see Creating a customized instance . openstack server set --property <key=value> [--property <key=value>] <instance> Add security groups Adds the specified security group to the instance. 
openstack server add security group Remove security groups Removes the specified security group from the instance. openstack server remove security group Rescue an instance In an emergency such as a system failure or access failure, you can put an instance in rescue mode. This shuts down the instance and mounts the root disk to a temporary server. You can connect to the temporary server to repair the system and recover your data. It is also possible to reboot a running instance into rescue mode. For example, this operation might be required if a filesystem of an instance becomes corrupted. openstack server rescue Restore a rescued instance Reboots the rescued instance. openstack server unrescue View instance logs View the most recent section of the instance console log. openstack console log show Shelve an instance When you shelve an instance, you retain the instance data and resource allocations, but clear the instance memory. Depending on the cloud configuration, shelved instances are moved to the SHELVED_OFFLOADED state either immediately or after a timed delay. When an instance is in the SHELVED_OFFLOADED state, the instance data and resource allocations are deleted. The state of the instance is stored on the instance disk. If the instance was booted from volume, it goes to SHELVED_OFFLOADED immediately. You are not prompted to confirm the shelve action. openstack server shelve Unshelve an instance Restores the instance using the disk image of the shelved instance. openstack server unshelve Lock an instance Lock an instance to prevent non-admin users from executing actions on the instance. openstack server lock openstack server unlock Soft reboot an instance Gracefully stop and restart the instance. A soft reboot attempts to gracefully shut down all processes before restarting the instance. By default, when you reboot an instance it is a soft reboot. openstack server reboot --soft <server> Hard reboot an instance Stop and restart the instance. A hard reboot shuts down the power to the instance and then turns it back on. openstack server reboot --hard <server> Rebuild an instance Use new image and disk-partition options to rebuild the instance, which involves an instance shut down, re-image, and reboot. Use this option if you encounter operating system issues, rather than terminating the instance and starting over. openstack server rebuild
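For quick reference, the following is a minimal sketch of the resize workflow described earlier in this chapter, run from a shell with the OpenStack client installed and credentials sourced. The instance name demo-instance and the flavor m1.large are placeholders, not values from this guide.

# Resize the instance and wait for the operation to be staged
openstack server resize --flavor m1.large --wait demo-instance

# Inspect the resulting status; a successful resize leaves the instance in VERIFY_RESIZE
status=$(openstack server show demo-instance -f value -c status)

if [ "$status" = "VERIFY_RESIZE" ]; then
    # Keep the new flavor; use "openstack server resize revert demo-instance" to roll back instead
    openstack server resize confirm demo-instance
else
    echo "Unexpected status after resize: $status" >&2
fi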
[ "openstack server list", "openstack flavor list", "openstack server resize --flavor <flavor> --wait <instance>", "openstack server list +----------------------+----------------+--------+----------------------------+ | ID | Name | Status | Networks | +----------------------+----------------+--------+----------------------------+ | 67bc9a9a-5928-47c... | myCirrosServer | RESIZE | admin_internal_net=192.168.111.139 | +----------------------+----------------+--------+----------------------------+", "openstack server resize confirm <instance>", "openstack server resize revert <instance>", "openstack server list", "openstack server image create --name <image_name> <instance>", "openstack image set --property hw_qemu_guest_agent=yes --property os_require_quiesce=yes <image_name>", "openstack server rescue [--image <image>] <instance>", "openstack server unrescue <instance>", "openstack server list", "openstack server shelve <instance> [<instance> ...]", "openstack server list" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_instances/assembly_managing-an-instance_instances
13.2.29. Downgrading SSSD
13.2.29. Downgrading SSSD When downgrading - either downgrading the version of SSSD or downgrading the operating system itself - the existing SSSD cache needs to be removed. If the cache is not removed, the SSSD process is dead but a PID file remains. The SSSD logs show that it cannot connect to any of its associated domains because the cache version is unrecognized. Users are then no longer recognized and are unable to authenticate to domain services and hosts. After downgrading the SSSD version: Delete the existing cache database files. Restart the SSSD process.
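A minimal sketch of this cleanup follows, assuming the default cache location /var/lib/sss/db/ and that the SSSD logs are written under /var/log/sssd/; adjust the paths for your environment.

# Confirm that SSSD is failing because of a cache version mismatch (the log path is an assumption)
grep -r "Unknown DB version" /var/log/sssd/

# Delete the existing cache database files, then restart SSSD
rm -rf /var/lib/sss/db/*
service sssd restart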
[ "(Wed Nov 28 21:25:50 2012) [sssd] [sysdb_domain_init_internal] (0x0010): Unknown DB version [0.14], expected [0.10] for domain AD!", "~]# rm -rf /var/lib/sss/db/*", "~]# service sssd restart Stopping sssd: [FAILED] Starting sssd: [ OK ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sssd-downgrade
21.2. Types
21.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. The following types are used with postgresql . Different types allow you to configure flexible access. Note that the list below uses several regular expressions to match all possible locations: postgresql_db_t This type is used for several locations. The locations labeled with this type are used for data files for PostgreSQL: /usr/lib/pgsql/test/regres /usr/share/jonas/pgsql /var/lib/pgsql/data /var/lib/postgres(ql)? postgresql_etc_t This type is used for configuration files in the /etc/postgresql/ directory. postgresql_exec_t This type is used for several locations. The locations labeled with this type are used for binaries for PostgreSQL: /usr/bin/initdb(.sepgsql)? /usr/bin/(se)?postgres /usr/lib(64)?/postgresql/bin/.* /usr/lib(64)?/pgsql/test/regress/pg_regress systemd_unit_file_t This type is used for the executable PostgreSQL-related files located in the /usr/lib/systemd/system/ directory. postgresql_log_t This type is used for several locations. The locations labeled with this type are used for log files: /var/lib/pgsql/logfile /var/lib/pgsql/pgstartup.log /var/lib/sepgsql/pgstartup.log /var/log/postgresql /var/log/postgres.log.* /var/log/rhdb/rhdb /var/log/sepostgresql.log.* postgresql_var_run_t This type is used for run-time files for PostgreSQL, such as the process id (PID) in the /var/run/postgresql/ directory.
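As an illustration of how these types are applied in practice, the following sketch relabels a non-default data directory so that it carries the postgresql_db_t type; the /srv/pgsql/data path is a hypothetical example, not a location required by the policy.

# Add a persistent file-context rule for the new data directory
semanage fcontext -a -t postgresql_db_t "/srv/pgsql/data(/.*)?"

# Apply the rule to the existing files and verify the resulting label
restorecon -Rv /srv/pgsql/data
ls -dZ /srv/pgsql/data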
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-Managing_Confined_Services-PostgreSQL-Types
Web console
Web console Red Hat Advanced Cluster Management for Kubernetes 2.12 Console
[ "get search search-v2-operator -o yaml", "apiVersion: search.open-cluster-management.io/v1alpha1 kind: Search metadata: name: search-v2-operator namespace: open-cluster-management labels: cluster.open-cluster-management.io/backup: \"\" spec: dbStorage: size: 10Gi storageClassName: gp2", "apiVersion: search.open-cluster-management.io/v1alpha1 kind: Search metadata: name: search-v2-operator namespace: open-cluster-management spec: deployments: collector: resources: 1 limits: cpu: 500m memory: 128Mi requests: cpu: 250m memory: 64Mi indexer: replicaCount: 3 database: 2 envVar: - name: POSTGRESQL_EFFECTIVE_CACHE_SIZE value: 1024MB - name: POSTGRESQL_SHARED_BUFFERS value: 512MB - name: WORK_MEM value: 128MB queryapi: arguments: 3 - -v=3", "indexer: resources: limits: memory: 5Gi requests: memory: 1Gi", "spec: dbStorage: size: 10Gi deployments: collector: {} database: {} indexer: {} queryapi: {} nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra operator: Exists", "apply -f <your-search-collector-config>.yaml", "apiVersion: v1 kind: ConfigMap metadata: name: search-collector-config namespace: <namespace where search-collector add-on is deployed> data: AllowedResources: |- 1 - apiGroups: - \"*\" resources: - services - pods - apiGroups: - admission.k8s.io - authentication.k8s.io resources: - \"*\" DeniedResources: |- 2 - apiGroups: - \"*\" resources: - secrets - apiGroups: - admission.k8s.io resources: - policies - iampolicies - certificatepolicies", "patch configmap console-mce-config -n multicluster-engine --type merge -p '{\"data\":{\"SEARCH_RESULT_LIMIT\":\"100\"}}'", "kind: ConfigMap apiVersion: v1 metadata: name: console-search-config namespace: <acm-namespace> 1 data: suggestedSearches: |- [ { \"id\": \"search.suggested.workloads.name\", \"name\": \"Workloads\", \"description\": \"Show workloads running on your fleet\", \"searchText\": \"kind:DaemonSet,Deployment,Job,StatefulSet,ReplicaSet\" }, { \"id\": \"search.suggested.unhealthy.name\", \"name\": \"Unhealthy pods\", \"description\": \"Show pods with unhealthy status\", \"searchText\": \"kind:Pod status:Pending,Error,Failed,Terminating,ImagePullBackOff,CrashLoopBackOff,RunContainerError,ContainerCreating\" }, { \"id\": \"search.suggested.createdLastHour.name\", \"name\": \"Created last hour\", \"description\": \"Show resources created within the last hour\", \"searchText\": \"created:hour\" }, { \"id\": \"search.suggested.virtualmachines.name\", \"name\": \"Virtual Machines\", \"description\": \"Show virtual machine resources\", \"searchText\": \"kind:VirtualMachine\" } ]", "edit managedclusteraddon search-collector -n xyz", "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: annotations: addon.open-cluster-management.io/search_memory_limit: 2048Mi addon.open-cluster-management.io/search_memory_request: 512Mi", "patch configmap console-mce-config -n multicluster-engine -p '{\"data\": {\"VIRTUAL_MACHINE_ACTIONS\": \"enabled\"}}'", "apiVersion: authentication.open-cluster-management.io/v1beta1 kind: ManagedServiceAccount metadata: name: vm-actor labels: app: search spec: rotation: {} --- apiVersion: rbac.open-cluster-management.io/v1alpha1 kind: ClusterPermission metadata: name: vm-actions labels: app: search spec: clusterRole: rules: - apiGroups: - subresources.kubevirt.io resources: - virtualmachines/start - virtualmachines/stop - virtualmachines/restart - virtualmachineinstances/pause - virtualmachineinstances/unpause verbs: - 
update clusterRoleBinding: subject: kind: ServiceAccount name: vm-actor namespace: open-cluster-management-agent-addon", "apply -n <MANAGED_CLUSTER> -f /path/to/file", "patch configmap console-mce-config -n multicluster-engine -p '{\"data\": {\"VIRTUAL_MACHINE_ACTIONS\": \"disabled\"}}'", "delete managedserviceaccount,clusterpermission -A -l app=search", "managedserviceaccount.authentication.open-cluster-management.io \"vm-actor\" deleted managedserviceaccount.authentication.open-cluster-management.io \"vm-actor\" deleted clusterpermission.rbac.open-cluster-management.io \"vm-actions\" deleted clusterpermission.rbac.open-cluster-management.io \"vm-actions\" deleted", "get managedserviceaccount,clusterpermission -A -l app=search", "\"No resources found\"" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/web_console/index
Images
Images OpenShift Container Platform 4.7 Creating and managing images and imagestreams in OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/images/index
Part II. Requirements and your responsibilities
Part II. Requirements and your responsibilities Before you start using the subscriptions service, review the hardware and software requirements and your responsibilities when you use the service. Learn more Review the general requirements for using the subscriptions service: Requirements Review information about the tools that you must use to supply the subscriptions service with data about your subscription usage: How to select the right data collection tool Review information about improving the subscriptions service results by setting the right subscription attributes: How to set subscription attributes Review information about your responsibilities when you use the subscriptions service: Your responsibilities
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_the_subscriptions_service/assembly-requirements-and-your-responsibilities
Chapter 3. Compiler and Tools
Chapter 3. Compiler and Tools C exception handling no longer causes unexpected terminations Previously, an incorrect unwind routine was called on the 32-bit Intel architecture because of an erroneous check in the code handling C exceptions. As a consequence, the pthread_cond_wait() function from the glibc library could write data out of bounds and applications written in the C programming language using glibc sometimes terminated unexpectedly. The erroneous check has been fixed and the unexpected termination no longer occurs. (BZ#1104812) Executable files created using the -pie option now start correctly Previously, the linker included in the binutils package produced incorrect dynamic relocations for position-independent binaries for the 32-bit Intel architecture. As a consequence, building code with the -pie compiler option produced binary files that failed to start. The linker has been fixed and now generates position-independent executable files that run correctly. (BZ# 1427285 ) Thread cancellation support for APIs depending on /etc/hosts.conf A defect in thread-cancellation support for the setmntent() function could cause the function to fail and return an error where it was expected to succeed. Consequently, programs that rely on setmntent() could fail to start. The setmntent() function has been fixed, and now works as expected. In addition, the setttyent() and setnetgrent() functions, and all APIs that rely on the /etc/hosts.conf file, have been enhanced to provide improved support for thread cancellation. (BZ# 1437147 ) ld no longer produces invalid executable files with code after initialized data Previously, the binutils ld linker placed code at an incorrect location in memory when the code followed after data initialized to zero values. As a consequence, programs in the linked executable files terminated unexpectedly with a segmentation fault. The linker has been fixed to properly allocate space for the data and position the executable code at the correct starting address. As a result, the linked executable files now run correctly. (BZ#1476412) The ss program no longer stops when providing a long list of filters Previously, providing a long list of filters to the ss command caused an integer value overflow. As a consequence, the 'ss' tool could stop the program execution. With this update, faulty bits in the source code are corrected, and the described problem no longer occurs. (BZ# 1476664 ) SystemTap no longer causes kernel panic on systems under heavy load Previously, when probes of the SystemTap tool were added and removed at the same time by multiple processes, a kernel panic occurred. As a consequence, unloading SystemTap modules on systems under heavy load in some cases caused kernel panics. The procedure for removing probes has now been fixed and SystemTap no longer causes a kernel panic in the described situation. (BZ#1525651)
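To quickly exercise the repaired -pie linker behavior described above, you can build and run a small position-independent executable; hello.c is a placeholder source file, and the exact wording reported by the file utility varies between versions.

# Build a position-independent executable and confirm that it starts
gcc -fPIE -pie -o hello hello.c
./hello

# PIE binaries are reported as shared objects / PIE executables
file hello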
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_technical_notes/bug_fixes_compiler_and_tools
Chapter 6. Installing a cluster on vSphere with network customizations
Chapter 6. Installing a cluster on vSphere with network customizations In OpenShift Container Platform version 4.13, you can install a cluster on VMware vSphere infrastructure that you provision with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Completing the installation requires that you upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA on vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. Verify that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 6.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.3. 
VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 6.1. Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Table 6.2. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7.0 Update 2 and later; 8.0 Update 1 or later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . CPU micro-architecture x86-64-v2 or higher OpenShift 4.13 and later are based on RHEL 9.2 host operating system which raised the microarchitecture requirements to x86-64-v2. See the RHEL Microarchitecture requirements documentation . You can verify compatibility by following the procedures outlined in this KCS article . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Additional resources For more information about CSI automatic migration, see "Overview" in VMware vSphere CSI Driver Operator . 6.4. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. 
Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 6.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 6.5.1. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that you provided, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, your vSphere account must include privileges for reading and creating the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. Example 6.1. Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. 
InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 6.2. Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual 
machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 6.3. 
Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. Using Storage vMotion can cause issues and is not supported. Using VMware compute vMotion to migrate the workloads for both OpenShift Container Platform compute machines and control plane machines is generally supported, where generally implies that you meet all VMware best practices for vMotion. To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using VMware vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses infrastructure that you provided, you must create the following resources in your vCenter instance: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. 
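If you prefer to pre-create the folder, tag category, and tag listed under "Cluster resources" above from the command line, the govc CLI is one option; this is only a sketch, and the datacenter name dc1, the category name openshift-ocp4, and the tag name ocp4-tag are assumptions, not values required by this procedure. It assumes GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD are already exported for your vCenter.

# Create the virtual machine folder for the cluster
govc folder.create /dc1/vm/ocp4

# Create a tag category and a tag to apply to the cluster resources
govc tags.category.create -d "OpenShift cluster resources" openshift-ocp4
govc tags.create -d "OpenShift cluster ocp4" -c openshift-ocp4 ocp4-tag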
Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you specify nodes or groups of nodes on different VLANs for a cluster that you want to install on user-provisioned infrastructure, you must ensure that machines in your cluster meet the requirements outlined in the "Network connectivity requirements" section of the Networking requirements for user-provisioned infrastructure document. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Required IP Addresses DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 6.3. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Additional resources Creating a compute machine set on vSphere 6.5.2. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 6.4. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. 
Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 6.5.3. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.5. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) [1] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [2] 2 8 GB 100 GB 300 OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.5.4. Requirements for encrypting virtual machines You can encrypt your virtual machines prior to installing OpenShift Container Platform 4.13 by meeting the following requirements. You have configured a Standard key provider in vSphere. For more information, see Adding a KMS to vCenter Server . Important The Native key provider in vCenter is not supported. For more information, see vSphere Native Key Provider Overview . You have enabled host encryption mode on all of the ESXi hosts that are hosting the cluster. For more information, see Enabling host encryption mode . You have a vSphere account which has all cryptographic privileges enabled. For more information, see Cryptographic Operations Privileges . When you deploy the OVF template in the section titled "Installing RHCOS and starting the OpenShift Container Platform bootstrap process", select the option to "Encrypt this virtual machine" when you are selecting storage for the OVF template. After completing cluster installation, create a storage class that uses the encryption storage policy you used to encrypt the virtual machines. Additional resources Creating an encrypted storage class 6.5.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. 
The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 6.5.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 6.5.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 6.5.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 6.6. 
Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 6.7. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 6.8. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Ethernet adaptor hardware address requirements When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges: 00:05:69:00:00:00 to 00:05:69:FF:FF:FF 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF 00:50:56:00:00:00 to 00:50:56:FF:FF:FF If a MAC address outside the VMware OUI is used, the cluster installation will not succeed. NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 6.5.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. 
A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 6.9. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 6.5.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 6.4. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. 
IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 6.5. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 6.5.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. 
The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 6.10. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 6.11. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 6.5.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. 
The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 6.6. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 6.6. 
Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. 
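For reference, the DHCP configuration step earlier in this procedure might translate into host declarations similar to the following for an ISC DHCP server ( dhcpd.conf ). This is a minimal sketch under assumed values: the MAC addresses are placeholders, the DNS server address is a placeholder, and the IP addresses and hostnames reuse the example cluster from the DNS sections above. Adapt the syntax to the DHCP implementation that you use.

option domain-name-servers <dns_server_ip>;       # the persistent DNS server address for the cluster nodes

host bootstrap {
  hardware ethernet 52:54:00:00:00:01;            # placeholder MAC; use the MAC of the node NIC
  fixed-address 192.168.1.96;                     # matches bootstrap.ocp4.example.com in the example zone file
  option host-name "bootstrap.ocp4.example.com";
}

host control-plane0 {
  hardware ethernet 52:54:00:00:00:02;            # placeholder MAC
  fixed-address 192.168.1.97;                     # matches control-plane0.ocp4.example.com
  option host-name "control-plane0.ocp4.example.com";
}

Repeat the pattern for the remaining control plane and compute nodes so that each machine always receives the same IP address and hostname.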
Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 6.7. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 
604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 6.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. 
SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.9. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important The VMware vSphere region and zone enablement feature requires the vSphere Container Storage Interface (CSI) driver as the default storage driver in the cluster. As a result, the feature is only available on a newly installed cluster. A cluster that was upgraded from an earlier release defaults to using the in-tree vSphere driver, so you must enable CSI automatic migration for the cluster. You can then configure multiple regions and zones for the upgraded cluster. The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of a single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to its respective datacenter or cluster.
The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b Additional resources Additional VMware vSphere configuration parameters Deprecated VMware vSphere configuration parameters vSphere automatic migration VMware vSphere CSI Driver Operator 6.10. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Important If you attempt to run the installation program on macOS, a known issue related to the golang compiler causes the installation of the OpenShift Container Platform cluster to fail. For more information about this issue, see the section named "Known Issues" in the OpenShift Container Platform 4.13 release notes document. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 6.11. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Important The Cluster Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. 
Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . If you are installing a three-node cluster, modify the install-config.yaml file by setting the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on {platform}". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 6.11.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.11.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.12. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. 
{ "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.11.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you use the Red Hat OpenShift Networking OpenShift SDN network plugin, only the IPv4 address family is supported. Note On VMware vSphere, dual-stack networking must specify IPv4 as the primary address family. The following additional limitations apply to dual-stack networking: Nodes report only their IPv6 IP address in node.status.addresses Nodes with only a single NIC are supported Pods configured for host networking report only their IPv6 addresses in pod.status.IP If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.13. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. 
For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.11.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.14. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. 
The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 6.11.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 6.15. Additional VMware vSphere cluster parameters Parameter Description Values Describes your account on the cloud platform that hosts your cluster. You can use the parameter to customize the platform. If you provide additional configuration settings for compute and control plane machines in the machine pool, the parameter is not required. 
You can only specify one vCenter server for your OpenShift Container Platform cluster. A dictionary of vSphere configuration objects Virtual IP (VIP) addresses that you configured for control plane API access. Note This parameter applies only to installer-provisioned infrastructure without an external load balancer configured. You must not specify this parameter in user-provisioned infrastructure. Multiple IP addresses Optional: The disk provisioning method. This value defaults to the vSphere default storage policy if not set. Valid values are thin , thick , or eagerZeroedThick . If you define multiple failure domains for your cluster, you must attach the tag to each vCenter datacenter. To define a region, use a tag from the openshift-region tag category. For a single vSphere datacenter environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as datacenter , for the parameter. String Specifies the fully-qualified hostname or IP address of the VMware vCenter server, so that a client can access failure domain resources. You must apply the server role to the vSphere vCenter server location. String If you define multiple failure domains for your cluster, you must attach a tag to each vCenter cluster. To define a zone, use a tag from the openshift-zone tag category. For a single vSphere datacenter environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as cluster , for the parameter. String Lists and defines the datacenters where OpenShift Container Platform virtual machines (VMs) operate. The list of datacenters must match the list of datacenters specified in the vcenters field. String Specifies the path to a vSphere datastore that stores virtual machines files for a failure domain. You must apply the datastore role to the vSphere vCenter datastore location. String Optional: The absolute path of an existing folder where the user creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file. String Lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String Virtual IP (VIP) addresses that you configured for cluster Ingress. Note This parameter applies only to installer-provisioned infrastructure without an external load balancer configured. You must not specify this parameter in user-provisioned infrastructure. Multiple IP addresses Configures the connection details so that services can communicate with a vCenter server. Currently, only a single vCenter server is supported. An array of vCenter configuration objects. Lists and defines the datacenters where OpenShift Container Platform virtual machines (VMs) operate. 
The list of datacenters must match the list of datacenters specified in the failureDomains field. String The password associated with the vSphere user. String The port number used to communicate with the vCenter server. Integer The fully qualified host name (FQHN) or IP address of the vCenter server. String The username associated with the vSphere user. String 6.11.1.5. Deprecated VMware vSphere configuration parameters In OpenShift Container Platform 4.13, the following vSphere configuration parameters are deprecated. You can continue to use these parameters, but the installation program does not automatically specify these parameters in the install-config.yaml file. The following table lists each deprecated vSphere configuration parameter. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 6.16. Deprecated VMware vSphere cluster parameters Parameter Description Values The virtual IP (VIP) address that you configured for control plane API access. Note In OpenShift Container Platform 4.12 and later, the apiVIP configuration setting is deprecated. Instead, use a List format to enter a value in the apiVIPs configuration setting. An IP address, for example 128.0.0.1 . The vCenter cluster to install the OpenShift Container Platform cluster in. String Defines the datacenter where OpenShift Container Platform virtual machines (VMs) operate. String The name of the default datastore to use for provisioning volumes. String Optional: The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the data center virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . Virtual IP (VIP) addresses that you configured for cluster Ingress. Note In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a List format to enter a value in the ingressVIPs configuration setting. An IP address, for example 128.0.0.1 . The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String The password for the vCenter user name. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines. If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String The fully-qualified hostname or IP address of a vCenter server. String 6.11.1.6. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 6.17. Optional VMware vSphere machine pool parameters Parameter Description Values clusterOSImage The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. 
For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . osDisk.diskSizeGB The size of the disk in gigabytes. Integer cpus The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of platform.vsphere.coresPerSocket value. Integer coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer memoryMB The size of a virtual machine's memory in megabytes. Integer 6.11.2. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. additionalTrustBundlePolicy: Proxyonly apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 0 3 controlPlane: 4 architecture: amd64 name: <parent_node> platform: {} replicas: 3 5 metadata: creationTimestamp: null name: test 6 networking: --- platform: vsphere: failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: "/<datacenter>/host/<cluster>" datacenter: <datacenter> 8 datastore: "/<datacenter>/datastore/<datastore>" 9 networks: - <VM_Network_name> resourcePool: "/<datacenter>/host/<cluster>/Resources/<resourcePool>" 10 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 11 zone: <default_zone_name> vcenters: - datacenters: - <datacenter> password: <password> 12 port: 443 server: <fully_qualified_domain_name> 13 user: [email protected] diskType: thin 14 fips: false 15 pullSecret: '{"auths": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Both sections define a single machine pool, so only one control plane is used. OpenShift Container Platform does not support defining multiple compute pools. 3 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 5 The number of control plane machines that you add to the cluster. Because the cluster uses this values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. 8 The vSphere datacenter. 9 The path to the vSphere datastore that holds virtual machine files, templates, and ISO images. 
Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". 10 Optional: For installer-provisioned infrastructure, the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, resources are installed in the root of the cluster /example_datacenter/host/example_cluster/Resources . 11 Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file. 12 The password associated with the vSphere user. 13 The fully-qualified hostname or IP address of the vCenter server. Important The Cluster Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. 14 The vSphere disk provisioning method. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 16 The pull secret that you obtained from OpenShift Cluster Manager Hybrid Cloud Console . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). 6.11.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. 
By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.11.4. 
Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file, so that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. The default install-config.yaml file configuration from the release of OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. Important The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each region vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1> Attach the zone tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1 Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements. 
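For reference, the tagging commands above might look like the following with concrete values filled in. This is a hypothetical sketch that assumes a datacenter named us-east containing a vSphere cluster named us-east-1, similar to the example table in the region and zone enablement section; substitute your own datacenter, cluster, and tag names:

govc tags.create -c openshift-region us-east
govc tags.create -c openshift-zone us-east-1a
govc tags.attach -c openshift-region us-east /us-east
govc tags.attach -c openshift-zone us-east-1a /us-east/host/us-east-1

The region tag is attached to the datacenter object and the zone tag is attached to the cluster object under that datacenter, which mirrors the object paths used in the procedure.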
Sample install-config.yaml file with multiple datacenters defined in a vSphere center --- compute: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- controlPlane: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: "/<datacenter1>/host/<cluster1>" networks: - <VM_Network1_name> datastore: "/<datacenter1>/datastore/<datastore1>" resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" folder: "/<datacenter1>/vm/<folder1>" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: "/<datacenter2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<datacenter2>/datastore/<datastore2>" resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<datacenter2>/vm/<folder2>" --- 6.12. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 6.13. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. 
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute machineSets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 6.13.1. Specifying multiple subnets for your network Before you install an OpenShift Container Platform cluster on a vSphere host, you can specify multiple subnets for a networking implementation so that the vSphere cloud controller manager (CCM) can select the appropriate subnet for a given networking situation. vSphere can use the subnet for managing pods and services on your cluster. For this configuration, you must specify internal and external Classless Inter-Domain Routing (CIDR) implementations in the vSphere CCM configuration. Each CIDR implementation lists an IP address range that the CCM uses to decide what subnets interact with traffic from internal and external networks. Important Failure to configure internal and external CIDR implementations in the vSphere CCM configuration can cause the vSphere CCM to select the wrong subnet. This situation causes the following error: This configuration can cause new nodes that associate with a MachineSet object with a single subnet to become unusable as each new node receives the node.cloudprovider.kubernetes.io/uninitialized taint. These situations can cause communication issues with the Kubernetes API server that can cause installation of the cluster to fail. Prerequisites You created Kubernetes manifest files for your OpenShift Container Platform cluster. Procedure From the directory where you store your OpenShift Container Platform cluster manifest files, open the manifests/cluster-infrastructure-02-config.yml manifest file. Add a nodeNetworking object to the file and specify internal and external network subnet CIDR implementations for the object. Tip For most networking situations, consider setting the standard multiple-subnet configuration. This configuration requires that you set the same IP address ranges in the nodeNetworking.internal.networkSubnetCidr and nodeNetworking.external.networkSubnetCidr parameters. 
Example of a configured cluster-infrastructure-02-config.yml manifest file apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: generated-failure-domain ... nodeNetworking: external: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> internal: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> # ... Additional resources Cluster Network Operator configuration .spec.platformSpec.vsphere.nodeNetworking 6.14. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 6.14.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 6.18. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 6.19. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . 
The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 6.20. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 6.21. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. 
ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 6.22. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 6.23. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. 
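For example, a cluster-network-03-config.yml stub that enables routing via the host and sets the audit-log fields described in the preceding tables might look like the following sketch. It assumes the OVN-Kubernetes plugin, and the values shown are the documented defaults rather than tuning recommendations.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      gatewayConfig:
        routingViaHost: true   # send egress traffic through the host networking stack
      policyAuditConfig:
        rateLimit: 20          # messages per second per node; 20 is the documented default
        syslogFacility: local0 # syslog facility for audit messages; local0 is the default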
Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 6.24. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 6.15. Creating the Ignition config files Because you must manually start the cluster machines, you must generate the Ignition config files that the cluster needs to make its machines. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Obtain the Ignition config files: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important If you created an install-config.yaml file, specify the directory that contains it. Otherwise, specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. The following files are generated in the directory: 6.16. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware vSphere. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. 
You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 6.17. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on user-provisioned infrastructure on VMware vSphere, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on vSphere hosts. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . 
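Optionally, if you script these preparation steps, you can reuse the infrastructure ID that you extracted earlier to pre-create the virtual machine folder from the command line. The following sketch assumes that your govc environment variables (such as GOVC_URL and credentials) are already configured and uses a hypothetical datacenter named dc-east; the vSphere Client step that follows creates the same folder interactively, so use only one of the two approaches.
# Extract the infrastructure ID from the installation metadata (see "Extracting the infrastructure name").
INFRA_ID="$(jq -r .infraID <installation_directory>/metadata.json)"
# Pre-create the VM folder named after the infrastructure ID in a hypothetical
# datacenter named "dc-east".
govc folder.create "/dc-east/vm/${INFRA_ID}"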
In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. If you want to encrypt your virtual machines, select Encrypt this virtual machine . See the section titled "Requirements for encrypting virtual machines" for more information. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that compute machine sets can apply configurations to. Optional: Update the configured virtual hardware version in the VM template, if necessary. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Important It is recommended that you update the hardware version of the VM template to version 15 before creating VMs from it, if necessary. Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. If your imported template defaults to hardware version 13, you must ensure that your ESXi host is on 6.7U3 or later before upgrading the VM template to hardware version 15. If your vSphere version is less than 6.7U3, you can skip this upgrade step; however, a future version of OpenShift Container Platform is scheduled to remove support for hardware version 13 and vSphere versions less than 6.7U3. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. 
On the Select a compute resource tab, select the name of a host in your datacenter. On the Select clone options tab, select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced Parameters . Important The following configuration suggestions are for example purposes only. As a cluster administrator, you must configure resources according to the resource demands placed on your cluster. To best manage cluster resources, consider creating a resource pool from the cluster's root resource pool. Optional: Override default DHCP networking in vSphere. To enable static IP networking: Set your static IP configuration: Example command USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before you boot a VM from an OVA in vSphere: Example command USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Add the following configuration parameter names and values by specifying data in the Attribute and Values fields. Ensure that you select the Add button for each parameter that you create. guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . stealclock.enable : If this parameter was not defined, add it and specify TRUE . Create a child resource pool from the cluster's root resource pool. Perform resource allocation in this child resource pool. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the remaining configuration steps. On clicking the Finish button, you have completed the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power Power On . Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied steps Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 6.18. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. After your vSphere template deploys in your OpenShift Container Platform cluster, you can deploy a virtual machine (VM) for a machine in that cluster. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. 
On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select storage tab, select storage for your configuration and disk files. On the Select clone options tab, select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced Parameters . Add the following configuration parameter names and values by specifying data in the Attribute and Values fields. Ensure that you select the Add button for each parameter that you create. guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. If many networks exist, select Add New Device > Network Adapter , and then enter your network information in the fields provided by the New Network menu item. Complete the remaining configuration steps. On clicking the Finish button, you have completed the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power Power On . steps Continue to create more compute machines for your cluster. 6.19. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. 
/var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 6.20. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 6.21. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.22. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. 
You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... 
If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 6.22.1. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Configure the Operators that are not available. 6.22.2. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 6.22.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. 
After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 6.22.3.1. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 6.23. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 6.24. Configuring vSphere DRS anti-affinity rules for control plane nodes vSphere Distributed Resource Scheduler (DRS) anti-affinity rules can be configured to support higher availability of OpenShift Container Platform Control Plane nodes. Anti-affinity rules ensure that the vSphere Virtual Machines for the OpenShift Container Platform Control Plane nodes are not scheduled to the same vSphere Host. Important The following information applies to compute DRS only and does not apply to storage DRS. The govc command is an open-source command available from VMware; it is not available from Red Hat. The govc command is not supported by the Red Hat support. Instructions for downloading and installing govc are found on the VMware documentation website. Create an anti-affinity rule by running the following command: Example command USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster \ -enable \ -anti-affinity master-0 master-1 master-2 After creating the rule, your control plane nodes are automatically migrated by vSphere so they are not running on the same hosts. This might take some time while vSphere reconciles the new rule. Successful command completion is shown in the following procedure. Note The migration occurs automatically and might cause brief OpenShift API outage or latency until the migration finishes. The vSphere DRS anti-affinity rules need to be updated manually in the event of a control plane VM name change or migration to a new vSphere Cluster. Procedure Remove any existing DRS anti-affinity rule by running the following command: USD govc cluster.rule.remove \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster Example Output [13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK Create the rule again with updated names by running the following command: USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyOtherCluster \ -enable \ -anti-affinity master-0 master-1 master-2 6.25. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. 
As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 6.26. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 6.27. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. Optional: If you created encrypted virtual machines, create an encrypted storage class .
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "platform: vsphere:", "platform: vsphere: apiVIPs:", "platform: vsphere: diskType:", "platform: vsphere: failureDomains: region:", "platform: vsphere: failureDomains: server:", "platform: vsphere: failureDomains: zone:", "platform: vsphere: failureDomains: topology: datacenter:", "platform: vsphere: failureDomains: topology: datastore:", "platform: vsphere: failureDomains: topology: folder:", "platform: vsphere: failureDomains: topology: networks:", "platform: vsphere: failureDomains: topology: resourcePool:", "platform: vsphere: ingressVIPs:", "platform: vsphere: vcenters:", "platform: vsphere: vcenters: datacenters:", "platform: vsphere: vcenters: password:", "platform: vsphere: vcenters: port:", "platform: vsphere: vcenters: server:", "platform: vsphere: vcenters: user:", "platform: vsphere: apiVIP:", "platform: vsphere: cluster:", "platform: vsphere: datacenter:", "platform: vsphere: defaultDatastore:", "platform: vsphere: folder:", "platform: vsphere: ingressVIP:", "platform: vsphere: network:", "platform: vsphere: password:", "platform: vsphere: resourcePool:", "platform: vsphere: username:", "platform: vsphere: vCenter:", "additionalTrustBundlePolicy: Proxyonly apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 0 3 controlPlane: 4 architecture: amd64 name: <parent_node> platform: {} replicas: 3 5 metadata: creationTimestamp: null name: test 6 networking: --- platform: vsphere: failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> 8 datastore: \"/<datacenter>/datastore/<datastore>\" 9 networks: - <VM_Network_name> resourcePool: \"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 10 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 11 zone: <default_zone_name> vcenters: - datacenters: - <datacenter> password: <password> 12 port: 443 server: <fully_qualified_domain_name> 13 user: [email protected] 
diskType: thin 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "govc tags.category.create -d \"OpenShift region\" openshift-region", "govc tags.category.create -d \"OpenShift zone\" openshift-zone", "govc tags.create -c <region_tag_category> <region_tag>", "govc tags.create -c <zone_tag_category> <zone_tag>", "govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>", "govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1", "--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "ERROR Bootstrap failed to complete: timed out waiting for the condition ERROR Failed to wait for bootstrapping to complete. 
This error usually happens when there is a problem with control plane hosts that prevents the control plane operators from creating the control plane.", "apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: generated-failure-domain nodeNetworking: external: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> internal: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6>", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }", "base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64", "base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64", "base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64", "export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"", "export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"", "govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig ? 
SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m 
node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2", "govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster", "[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK", "govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_vsphere/installing-vsphere-network-customizations
7.2. Desktop Environments and Window Managers
7.2. Desktop Environments and Window Managers Once an X server is running, X client applications can connect to it and create a GUI for the user. A range of GUIs are possible with Red Hat Enterprise Linux, from the rudimentary Tab Window Manager to the highly developed and interactive GNOME desktop environment that most Red Hat Enterprise Linux users are familiar with. To create the latter, more advanced GUI, two main classes of X client applications must connect to the X server: a desktop environment and a window manager . 7.2.1. Desktop Environments A desktop environment brings together assorted X clients which, when used together, create a common graphical user environment and development platform. Desktop environments have advanced features allowing X clients and other running processes to communicate with one another, while also allowing all applications written to work in that environment to perform advanced tasks, such as drag and drop operations. Red Hat Enterprise Linux provides two desktop environments: GNOME - The default desktop environment for Red Hat Enterprise Linux based on the GTK+ 2 graphical toolkit. KDE - An alternative desktop environment based on the Qt 3 graphical toolkit. Both GNOME and KDE have advanced productivity applications, such as word processors, spreadsheets, and Web browsers, and provide tools to customize the look and feel of the GUI. Additionally, if both the GTK+ 2 and the Qt libraries are present, KDE applications can run in GNOME and vice versa.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-x-clients
Chapter 37. Resizing an Online Logical Unit
Chapter 37. Resizing an Online Logical Unit In most cases, fully resizing an online logical unit involves two things: resizing the logical unit itself and reflecting the size change in the corresponding multipath device (if multipathing is enabled on the system). To resize the online logical unit, start by modifying the logical unit size through the array management interface of your storage device. This procedure differs with each array; as such, consult your storage array vendor documentation for more information on this. Note In order to resize an online file system, the file system must not reside on a partitioned device. 37.1. Resizing Fibre Channel Logical Units After modifying the online logical unit size, re-scan the logical unit to ensure that the system detects the updated size. To do this for Fibre Channel logical units, use the following command: Important To re-scan fibre channel logical units on a system that uses multipathing, execute the aforementioned command for each sd device (i.e. sd1 , sd2 , and so on) that represents a path for the multipathed logical unit. To determine which devices are paths for a multipath logical unit, use multipath -ll ; then, find the entry that matches the logical unit being resized. It is advisable that you refer to the WWID of each entry to make it easier to find which one matches the logical unit being resized.
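When multipathing is enabled, the re-scan must be issued on every path device that backs the multipath map, as noted above. The following sketch assumes a hypothetical map named mpathb whose paths are sdb and sdc; confirm the real path names with multipath -ll before running it, and run the commands as root. The multipath map itself must still be resized afterwards as part of reflecting the size change in the multipath device.

# Re-scan each path of the resized logical unit (hypothetical path names)
for dev in sdb sdc; do
    echo 1 > /sys/block/$dev/device/rescan
done
# Verify that the new size is reported for every path of the map
multipath -ll mpathb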
[ "echo 1 > /sys/block/sd X /device/rescan" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/online-iscsi-resizing
Chapter 6. Uninstalling a cluster on Azure
Chapter 6. Uninstalling a cluster on Azure You can remove a cluster that you deployed to Microsoft Azure. 6.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 6.2. Deleting Microsoft Azure resources with the Cloud Credential Operator utility After uninstalling an OpenShift Container Platform cluster that uses short-term credentials managed outside the cluster, you can use the CCO utility ( ccoctl ) to remove the Microsoft Azure (Azure) resources that ccoctl created during installation. Prerequisites Extract and prepare the ccoctl binary. Uninstall an OpenShift Container Platform cluster on Azure that uses short-term credentials. Procedure Delete the Azure resources that ccoctl created by running the following command: USD ccoctl azure delete \ --name=<name> \ 1 --region=<azure_region> \ 2 --subscription-id=<azure_subscription_id> \ 3 --delete-oidc-resource-group 1 <name> matches the name that was originally used to create and tag the cloud resources. 2 <azure_region> is the Azure region in which to delete cloud resources. 3 <azure_subscription_id> is the Azure subscription ID for which to delete cloud resources. Verification To verify that the resources are deleted, query Azure. For more information, refer to Azure documentation.
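One way to perform that query is with the Azure CLI. This is a sketch under assumptions: the resource group name below is hypothetical, and for installer-provisioned clusters the group is typically named after the infrastructure ID recorded in the metadata.json file (for example <infra_id>-rg).

# Returns "false" once the cluster resource group has been removed
az group exists --name mycluster-x7k9p-rg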
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ccoctl azure delete --name=<name> \\ 1 --region=<azure_region> \\ 2 --subscription-id=<azure_subscription_id> \\ 3 --delete-oidc-resource-group" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_azure/uninstalling-cluster-azure
Appendix B. Revision History
Appendix B. Revision History Revision History Revision 0.1-5 Fri May 12 2017 Lenka Spackova Moved the fence_sanlock agent and checkquorum.wdmd from Technology Previews to Deprecated Functionality. Revision 0.1-3 Thu Apr 27 2017 Lenka Spackova Added the deprecated zerombr yes Kickstart command to Deprecated Functionality. Revision 0.1-2 Fri Mar 31 2017 Lenka Spackova Added a bug fix to Virtualization. Revision 0.1-1 Tue Mar 28 2017 Lenka Spackova Minor edits in accordance with the updated Red Hat Enterprise Linux 6.9 Release Notes. Revision 0.0-9 Tue Mar 21 2017 Lenka Spackova Release of the Red Hat Enterprise Linux 6.9 Technical Notes. Revision 0.0-5 Thu Jan 05 2017 Lenka Spackova Release of the Red Hat Enterprise Linux 6.9 Beta Technical Notes.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_technical_notes/appe-6.9_technical_notes-revision_history
Chapter 2. Differences between java and alt-java
Chapter 2. Differences between java and alt-java Similarities exist between the alt-java and java binaries, with the exception of the SSB mitigation. Although the SSB mitigation patch exists only for the x86-64 architecture (Intel and AMD), the alt-java binary exists on all architectures. For non-x86 architectures, the alt-java binary is identical to the java binary, except that alt-java has no patches. Additional resources For more information about similarities between alt-java and java , see RH1750419 in the Red Hat Bugzilla documentation.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_alt-java/diff-java-and-altjava
Chapter 10. Installing and running the headless Process Automation Manager controller
Chapter 10. Installing and running the headless Process Automation Manager controller You can configure KIE Server to run in managed or unmanaged mode. If KIE Server is unmanaged, you must manually create and maintain KIE containers (deployment units). If KIE Server is managed, the Process Automation Manager controller manages the KIE Server configuration and you interact with the Process Automation Manager controller to create and maintain KIE containers. Business Central has an embedded Process Automation Manager controller. If you install Business Central, use the Execution Server page to create and maintain KIE containers. If you want to automate KIE Server management without Business Central, you can use the headless Process Automation Manager controller. 10.1. Using the installer to configure KIE Server with the Process Automation Manager controller KIE Server can be managed by the Process Automation Manager controller or it can be unmanaged. If KIE Server is unmanaged, you must manually create and maintain KIE containers (deployment units). If KIE Server is managed, the Process Automation Manager controller manages the KIE Server configuration and you interact with the Process Automation Manager controller to create and maintain KIE containers. The Process Automation Manager controller is integrated with Business Central. If you install Business Central, you can use the Execution Server page in Business Central to interact with the Process Automation Manager controller. You can use the installer in interactive or CLI mode to install Business Central and KIE Server, and then configure KIE Server with the Process Automation Manager controller. Prerequisites Two computers with backed-up Red Hat JBoss EAP 7.4 server installations are available. Sufficient user permissions to complete the installation are granted. Procedure On the first computer, run the installer in interactive mode or CLI mode. See Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 for more information. On the Component Selection page, clear the KIE Server box. Complete the Business Central installation. On the second computer, run the installer in interactive mode or CLI mode. On the Component Selection page, clear the Business Central box. On the Configure Runtime Environment page, select Perform Advanced Configuration . Select Customize KIE Server properties and click . Enter the controller URL for Business Central and configure additional properties for KIE Server. The controller URL has the following form where <HOST:PORT> is the address of Business Central on the second computer: Complete the installation. To verify that the Process Automation Manager controller is now integrated with Business Central, go to the Execution Servers page in Business Central and confirm that the KIE Server that you configured appears under REMOTE SERVERS . 10.2. Installing the headless Process Automation Manager controller You can install the headless Process Automation Manager controller and use the REST API or the KIE Server Java Client API to interact with it. Prerequisites A backed-up Red Hat JBoss EAP installation version 7.4 is available. The base directory of the Red Hat JBoss EAP installation is referred to as EAP_HOME . Sufficient user permissions to complete the installation are granted. 
Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13.5 Add Ons (the rhpam-7.13.5-add-ons.zip file). Extract the rhpam-7.13.5-add-ons.zip file. The rhpam-7.13.5-controller-ee7.zip file is in the extracted directory. Extract the rhpam-7.13.5-controller-ee7.zip archive to a temporary directory. In the following examples this directory is called TEMP_DIR . Copy the TEMP_DIR /rhpam-7.13.5-controller-ee7/controller.war directory to EAP_HOME /standalone/deployments/ . Warning Ensure that the names of the headless Process Automation Manager controller deployments you copy do not conflict with your existing deployments in the Red Hat JBoss EAP instance. Copy the contents of the TEMP_DIR /rhpam-7.13.5-controller-ee7/SecurityPolicy/ directory to EAP_HOME /bin . When prompted to overwrite files, select Yes . In the EAP_HOME /standalone/deployments/ directory, create an empty file named controller.war.dodeploy . This file ensures that the headless Process Automation Manager controller is automatically deployed when the server starts. 10.2.1. Creating a headless Process Automation Manager controller user Before you can use the headless Process Automation Manager controller, you must create a user that has the kie-server role. Prerequisites The headless Process Automation Manager controller is installed in the base directory of the Red Hat JBoss EAP installation ( EAP_HOME ). Procedure In a terminal application, navigate to the EAP_HOME /bin directory. Enter the following command and replace <USERNAME> and <PASSWORD> with the user name and password of your choice. USD ./bin/jboss-cli.sh --commands="embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=['kie-server'])" Note Make sure that the specified user name is not the same as an existing user, role, or group. For example, do not create a user with the user name admin . The password must have at least eight characters and must contain at least one number and one non-alphanumeric character, but not & (ampersand). Make a note of your user name and password. 10.2.2. Configuring KIE Server and the headless Process Automation Manager controller If KIE Server will be managed by the headless Process Automation Manager controller, you must edit the standalone-full.xml file in the KIE Server installation and the standalone.xml file in the headless Process Automation Manager controller installation. Prerequisites KIE Server is installed in an EAP_HOME . The headless Process Automation Manager controller is installed in an EAP_HOME . Note You should install KIE Server and the headless Process Automation Manager controller on different servers in production environments. However, if you install KIE Server and the headless Process Automation Manager controller on the same server, for example in a development environment, make these changes in the shared standalone-full.xml file. On KIE Server nodes, a user with the kie-server role exists. On the headless Process Automation Manager controller nodes, a user with the kie-server role exists.
Procedure In the EAP_HOME /standalone/configuration/standalone-full.xml file, add the following properties to the <system-properties> section and replace <USERNAME> and <USER_PWD> with the credentials of a user with the kie-server role: <property name="org.kie.server.user" value="<USERNAME>"/> <property name="org.kie.server.pwd" value="<USER_PWD>"/> In the KIE Server EAP_HOME /standalone/configuration/standalone-full.xml file, add the following properties to the <system-properties> section: <property name="org.kie.server.controller.user" value="<CONTROLLER_USER>"/> <property name="org.kie.server.controller.pwd" value="<CONTROLLER_PWD>"/> <property name="org.kie.server.id" value="<KIE_SERVER_ID>"/> <property name="org.kie.server.location" value="http://<HOST>:<PORT>/kie-server/services/rest/server"/> <property name="org.kie.server.controller" value="<CONTROLLER_URL>"/> In this file, replace the following values: Replace <CONTROLLER_USER> and <CONTROLLER_PWD> with the credentials of a user with the kie-server role. Replace <KIE_SERVER_ID> with the ID or name of the KIE Server installation, for example, rhpam-7.13.5-kie-server-1 . Replace <HOST> with the ID or name of the KIE Server host, for example, localhost or 192.7.8.9 . Replace <PORT> with the port of the KIE Server host, for example, 8080 . Note The org.kie.server.location property specifies the location of KIE Server. Replace <CONTROLLER_URL> with the URL of the headless Process Automation Manager controller. KIE Server connects to this URL during startup. 10.3. Running the headless Process Automation Manager controller After you have installed the headless Process Automation Manager controller on Red Hat JBoss EAP, use this procedure to run the headless Process Automation Manager controller. Prerequisites The headless Process Automation Manager controller is installed and configured in the base directory of the Red Hat JBoss EAP installation ( EAP_HOME ). Procedure In a terminal application, navigate to EAP_HOME /bin . If you installed the headless Process Automation Manager controller on the same Red Hat JBoss EAP instance as the Red Hat JBoss EAP instance where you installed KIE Server, enter one of the following commands: On Linux or UNIX-based systems: USD ./standalone.sh -c standalone-full.xml On Windows: standalone.bat -c standalone-full.xml If you installed the headless Process Automation Manager controller on a separate Red Hat JBoss EAP instance from the Red Hat JBoss EAP instance where you installed KIE Server, start the headless Process Automation Manager controller with the standalone.sh script: Note In this case, ensure that you made all required configuration changes to the standalone.xml file. On Linux or UNIX-based systems: USD ./standalone.sh On Windows: standalone.bat To verify that the headless Process Automation Manager controller is working on Red Hat JBoss EAP, enter the following command where <CONTROLLER> and <CONTROLLER_PWD> are the user name and password. The output of this command provides information about the KIE Server instance. Note Alternatively, you can use the KIE Server Java Client API to access the headless Process Automation Manager controller. 10.4. Clustering KIE Servers with the headless Process Automation Manager controller The Process Automation Manager controller is integrated with Business Central. However, if you do not install Business Central, you can install the headless Process Automation Manager controller and use the REST API or the KIE Server Java Client API to interact with it.
Prerequisites A backed-up Red Hat JBoss EAP installation version 7.4 or later is available. The base directory of the Red Hat JBoss EAP installation is referred to as EAP_HOME . Sufficient user permissions to complete the installation are granted. An NFS server with a shared folder is available as described in Installing and configuring Red Hat Process Automation Manager in a Red Hat JBoss EAP clustered environment . Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: PRODUCT: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13.5 Add Ons (the rhpam-7.13.5-add-ons.zip file). Extract the rhpam-7.13.5-add-ons.zip file. The rhpam-7.13.5-controller-ee7.zip file is in the extracted directory. Extract the rhpam-7.13.5-controller-ee7.zip archive to a temporary directory. In the following examples this directory is called TEMP_DIR . Copy the TEMP_DIR /rhpam-7.13.5-controller-ee7/controller.war directory to EAP_HOME /standalone/deployments/ . Warning Ensure that the names of the headless Process Automation Manager controller deployments you copy do not conflict with your existing deployments in the Red Hat JBoss EAP instance. Copy the contents of the TEMP_DIR /rhpam-7.13.5-controller-ee7/SecurityPolicy/ directory to EAP_HOME /bin . When prompted to overwrite files, click Yes . In the EAP_HOME /standalone/deployments/ directory, create an empty file named controller.war.dodeploy . This file ensures that the headless Process Automation Manager controller is automatically deployed when the server starts. Open the EAP_HOME /standalone/configuration/standalone.xml file in a text editor. Add the following properties to the <system-properties> element and replace <NFS_STORAGE> with the absolute path to the NFS storage where the template configuration is stored: Template files contain default configurations for specific deployment scenarios. If the value of the org.kie.server.controller.templatefile.watcher.enabled property is set to true, a separate thread is started to watch for modifications of the template file. The default interval for these checks is 30000 milliseconds and can be further controlled by the org.kie.server.controller.templatefile.watcher.interval system property. If the value of this property is set to false, changes to the template file are detected only when the server restarts. To start the headless Process Automation Manager controller, navigate to EAP_HOME /bin and enter the following command: On Linux or UNIX-based systems: USD ./standalone.sh On Windows: standalone.bat For more information about running Red Hat Process Automation Manager in a Red Hat JBoss Enterprise Application Platform clustered environment, see Installing and configuring Red Hat Process Automation Manager in a Red Hat JBoss EAP clustered environment .
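To make the property list in Section 10.2.2 more concrete, the following snippet is an illustration only: the server ID value reuses the example given in that section, while the host name, port, and credentials are hypothetical placeholders for a same-host development setup, not values mandated by this guide.

<property name="org.kie.server.controller.user" value="controllerUser"/>
<property name="org.kie.server.controller.pwd" value="controllerUser1234;"/>
<property name="org.kie.server.id" value="rhpam-7.13.5-kie-server-1"/>
<property name="org.kie.server.location" value="http://localhost:8080/kie-server/services/rest/server"/>
<property name="org.kie.server.controller" value="http://localhost:8080/controller/rest/controller"/>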
[ "<HOST:PORT>/business-central/rest/controller", "./bin/jboss-cli.sh --commands=\"embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=['kie-server'])\"", "<property name=\"org.kie.server.user\" value=\"<USERNAME>\"/> <property name=\"org.kie.server.pwd\" value=\"<USER_PWD>\"/>", "<property name=\"org.kie.server.controller.user\" value=\"<CONTROLLER_USER>\"/> <property name=\"org.kie.server.controller.pwd\" value=\"<CONTROLLER_PWD>\"/> <property name=\"org.kie.server.id\" value=\"<KIE_SERVER_ID>\"/> <property name=\"org.kie.server.location\" value=\"http://<HOST>:<PORT>/kie-server/services/rest/server\"/> <property name=\"org.kie.server.controller\" value=\"<CONTROLLER_URL>\"/>", "./standalone.sh -c standalone-full.xml", "standalone.bat -c standalone-full.xml", "./standalone.sh", "standalone.bat", "curl -X GET \"http://<HOST>:<PORT>/controller/rest/controller/management/servers\" -H \"accept: application/xml\" -u '<CONTROLLER>:<CONTROLLER_PWD>'", "<system-properties> <property name=\"org.kie.server.controller.templatefile.watcher.enabled\" value=\"true\"/> <property name=\"org.kie.server.controller.templatefile\" value=\"<NFS_STORAGE>\"/> </system-properties>", "./standalone.sh", "standalone.bat" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/controller-con_execution-server
14.2. Committing Changes to an Image
14.2. Committing Changes to an Image Commit any changes recorded in the specified image file ( imgname ) to the file's base image with the qemu-img commit command. Optionally, specify the file's format type ( fmt ).
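For example, if a qcow2 overlay image has accumulated changes that should be folded back into its backing file, a call such as the following can be used. The image path is hypothetical; the -f qcow2 option names the overlay's format explicitly.

# Commit changes from the overlay back into its base image
qemu-img commit -f qcow2 /var/lib/libvirt/images/guest-overlay.qcow2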
[ "qemu-img commit [-f fmt ] [-t cache ] imgname" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-using_qemu_img-committing_changes_to_an_image
Part III. Configuring and viewing your data
Part III. Configuring and viewing your data After adding your OpenShift Container Platform and AWS data, cost management displays your cost data by integration, as well as the cost and usage associated with running your OpenShift Container Platform clusters. If you are using an AWS savings plan for the EC2 instances running OpenShift nodes, cost management defaults to using the savings plan cost. On the cost management Overview page, your cost data is sorted into OpenShift and Infrastructure tabs. Select Perspective to toggle through different views of your cost data. You can also use the global navigation menu to view additional details about your costs by cloud provider. To add other types of integrations, see: Integrating OpenShift Container Platform data into cost management Integrating Google Cloud data into cost management Integrating Microsoft Azure data into cost management Integrating Oracle Cloud data into cost management
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_amazon_web_services_aws_data_into_cost_management/assembly-cost-management-next-steps-aws
1.4. SELinux States and Modes
1.4. SELinux States and Modes SELinux can run in one of three modes: disabled, permissive, or enforcing. Disabled mode is strongly discouraged; not only does the system avoid enforcing the SELinux policy, it also avoids labeling any persistent objects such as files, making it difficult to enable SELinux in the future. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not recommended for production systems, permissive mode can be helpful for SELinux policy development. Enforcing mode is the default, and recommended, mode of operation; in enforcing mode SELinux operates normally, enforcing the loaded security policy on the entire system. Use the setenforce utility to change between enforcing and permissive mode. Changes made with setenforce do not persist across reboots. To change to enforcing mode, enter the setenforce 1 command as the Linux root user. To change to permissive mode, enter the setenforce 0 command. Use the getenforce utility to view the current SELinux mode: In Red Hat Enterprise Linux, you can set individual domains to permissive mode while the system runs in enforcing mode. For example, to make the httpd_t domain permissive: See Section 11.3.4, "Permissive Domains" for more information. Note Persistent states and modes changes are covered in Section 4.4, "Permanent Changes in SELinux States and Modes" .
[ "~]# getenforce Enforcing", "~]# setenforce 0 ~]# getenforce Permissive", "~]# setenforce 1 ~]# getenforce Enforcing", "~]# semanage permissive -a httpd_t" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-Security-Enhanced_Linux-Introduction-SELinux_Modes
Chapter 3. Setting up Samba on an IdM domain member
Chapter 3. Setting up Samba on an IdM domain member You can set up Samba on a host that is joined to a Red Hat Identity Management (IdM) domain. Users from IdM and also, if available, from trusted Active Directory (AD) domains, can access shares and printer services provided by Samba. Important Using Samba on an IdM domain member is an unsupported Technology Preview feature and contains certain limitations. For example, IdM trust controllers do not support the Active Directory Global Catalog service, and they do not support resolving IdM groups using the Distributed Computing Environment / Remote Procedure Calls (DCE/RPC) protocols. As a consequence, AD users can only access Samba shares and printers hosted on IdM clients when logged in to other IdM clients; AD users logged into a Windows machine cannot access Samba shares hosted on an IdM domain member. Customers deploying Samba on IdM domain members are encouraged to provide feedback to Red Hat. If users from AD domains need to access shares and printer services provided by Samba, ensure the AES encryption type is enabled in AD. For more information, see Enabling the AES encryption type in Active Directory using a GPO . Prerequisites The host is joined as a client to the IdM domain. Both the IdM servers and the client must run on RHEL 9.0 or later. 3.1. Preparing the IdM domain for installing Samba on domain members Before you can set up Samba on an IdM client, you must prepare the IdM domain using the ipa-adtrust-install utility on an IdM server. Note Any system where you run the ipa-adtrust-install command automatically becomes an AD trust controller. However, you must run ipa-adtrust-install only once on an IdM server. Prerequisites IdM server is installed. You have root privileges to install packages and restart IdM services. Procedure Install the required packages: Authenticate as the IdM administrative user: Run the ipa-adtrust-install utility: The DNS service records are created automatically if IdM was installed with an integrated DNS server. If you installed IdM without an integrated DNS server, ipa-adtrust-install prints a list of service records that you must manually add to DNS before you can continue. The script warns you that the /etc/samba/smb.conf file already exists and will be rewritten: The script prompts you to configure the slapi-nis plug-in, a compatibility plug-in that allows older Linux clients to work with trusted users: You are prompted to run the SID generation task to create a SID for any existing users: This is a resource-intensive task, so if you have a high number of users, you can run this at another time. Optional: By default, the Dynamic RPC port range is defined as 49152-65535 for Windows Server 2008 and later. If you need to define a different Dynamic RPC port range for your environment, configure Samba to use different ports and open those ports in your firewall settings. The following example sets the port range to 55000-65000 . Restart the ipa service: Use the smbclient utility to verify that Samba responds to Kerberos authentication from the IdM side: 3.2. Installing and configuring a Samba server on an IdM client You can install and configure Samba on a client enrolled in an IdM domain. Prerequisites Both the IdM servers and the client must run on RHEL 9.0 or later. The IdM domain is prepared as described in Preparing the IdM domain for installing Samba on domain members . If IdM has a trust configured with AD, enable the AES encryption type for Kerberos.
For example, use a group policy object (GPO) to enable the AES encryption type. For details, see Enabling AES encryption in Active Directory using a GPO . Procedure Install the ipa-client-samba package: Use the ipa-client-samba utility to prepare the client and create an initial Samba configuration: By default, ipa-client-samba automatically adds the [homes] section to the /etc/samba/smb.conf file that dynamically shares a user's home directory when the user connects. If users do not have home directories on this server, or if you do not want to share them, remove the following lines from /etc/samba/smb.conf : Share directories and printers. For details, see the following sections: Setting up a Samba file share that uses POSIX ACLs Setting up a share that uses Windows ACLs Setting up Samba as a print server Open the ports required for a Samba client in the local firewall: Enable and start the smb and winbind services: Verification Run the following verification step on a different IdM domain member that has the samba-client package installed: List the shares on the Samba server using Kerberos authentication: Additional resources ipa-client-samba(1) man page on your system 3.3. Manually adding an ID mapping configuration if IdM trusts a new domain Samba requires an ID mapping configuration for each domain from which users access resources. On an existing Samba server running on an IdM client, you must manually add an ID mapping configuration after the administrator added a new trust to an Active Directory (AD) domain. Prerequisites You configured Samba on an IdM client. Afterward, a new trust was added to IdM. The DES and RC4 encryption types for Kerberos must be disabled in the trusted AD domain. For security reasons, RHEL 9 does not support these weak encryption types. Procedure Authenticate using the host's keytab: Use the ipa idrange-find command to display both the base ID and the ID range size of the new domain. For example, the following command displays the values for the ad.example.com domain: You need the values from the ipabaseid and ipaidrangesize attributes in the next steps. To calculate the highest usable ID, use the following formula: With the values from the previous step, the highest usable ID for the ad.example.com domain is 1918599999 (1918400000 + 200000 - 1). Edit the /etc/samba/smb.conf file, and add the ID mapping configuration for the domain to the [global] section: Specify the value from the ipabaseid attribute as the lowest and the computed value from the previous step as the highest value of the range. Restart the smb and winbind services: Verification List the shares on the Samba server using Kerberos authentication: 3.4. Additional resources Installing an Identity Management client
[ "dnf install ipa-server-trust-ad samba-client", "kinit admin", "ipa-adtrust-install", "WARNING: The smb.conf already exists. Running ipa-adtrust-install will break your existing Samba configuration. Do you wish to continue? [no]: yes", "Do you want to enable support for trusted domains in Schema Compatibility plugin? This will allow clients older than SSSD 1.9 and non-Linux clients to work with trusted users. Enable trusted domains support in slapi-nis? [no]: yes", "Do you want to run the ipa-sidgen task? [no]: yes", "net conf setparm global 'rpc server dynamic port range' 55000-65000 firewall-cmd --add-port=55000-65000/tcp firewall-cmd --runtime-to-permanent", "ipactl restart", "smbclient -L ipaserver.idm.example.com -U user_name --use-kerberos=required lp_load_ex: changing to config backend registry Sharename Type Comment --------- ---- ------- IPCUSD IPC IPC Service (Samba 4.15.2)", "dnf install ipa-client-samba", "ipa-client-samba Searching for IPA server IPA server: DNS discovery Chosen IPA master: idm_server.idm.example.com SMB principal to be created: cifs/ idm_client.idm.example.com @ IDM.EXAMPLE.COM NetBIOS name to be used: IDM_CLIENT Discovered domains to use: Domain name: idm.example.com NetBIOS name: IDM SID: S-1-5-21-525930803-952335037-206501584 ID range: 212000000 - 212199999 Domain name: ad.example.com NetBIOS name: AD SID: None ID range: 1918400000 - 1918599999 Continue to configure the system with these values? [no]: yes Samba domain member is configured. Please check configuration at /etc/samba/smb.conf and start smb and winbind services", "[homes] read only = no", "firewall-cmd --permanent --add-service=samba-client firewall-cmd --reload", "systemctl enable --now smb winbind", "smbclient -L idm_client.idm.example.com -U user_name --use-kerberos=required lp_load_ex: changing to config backend registry Sharename Type Comment --------- ---- ------- example Disk IPCUSD IPC IPC Service (Samba 4.15.2)", "kinit -k", "ipa idrange-find --name=\" AD.EXAMPLE.COM _id_range\" --raw --------------- 1 range matched --------------- cn: AD.EXAMPLE.COM _id_range ipabaseid: 1918400000 ipaidrangesize: 200000 ipabaserid: 0 ipanttrusteddomainsid: S-1-5-21-968346183-862388825-1738313271 iparangetype: ipa-ad-trust ---------------------------- Number of entries returned 1 ----------------------------", "maximum_range = ipabaseid + ipaidrangesize - 1", "idmap config AD : range = 1918400000 - 1918599999 idmap config AD : backend = sss", "systemctl restart smb winbind", "smbclient -L idm_client.idm.example.com -U user_name --use-kerberos=required lp_load_ex: changing to config backend registry Sharename Type Comment --------- ---- ------- example Disk IPCUSD IPC IPC Service (Samba 4.15.2)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_external_red_hat_utilities_with_identity_management/setting-up-samba-on-an-idm-domain-member_using-external-red-hat-utilities-with-idm
Chapter 3. Additional automated rule functions
Chapter 3. Additional automated rule functions From the Cryostat web console, you access additional automated rule capabilities, such as deleting an automated rule or copying JFR data. If you created automated rules in Cryostat 2.4, and then upgraded from Cryostat 2.4 to Cryostat 3.0, Cryostat 3.0 automatically detects these automated rules. 3.1. Copying JFR data You can copy information from a JVM application's memory to Cryostat's archive storage location on the OpenShift Container Platform (OCP). During the creation of an automated rule through the Cryostat web console, you can set a value in the Archival Period field. You can specify a numerical value in seconds, minutes, or hours. After you create the automated rule with a specified archival period, Cryostat re-connects with any targeted JVM applications that match the rule. Cryostat then copies any generated JFR recording data from the application's memory to Cryostat's archive storage location. Additionally, you can populate the Preserved Archives field with a value. This field sets a limit on the number of copies of a JFR recording that Cryostat can move from an application's memory to Cryostat's archive storage location. For example, if you set a value of 10 in the Preserved Archives field, Cryostat will not store more than 10 copies of the file in the archive storage location. When Cryostat generates a new copy of the file that exceeds the limit, Cryostat replaces the oldest version with the newest version of the file. You can also set a size limit for a JFR recording file and specify a time limit for how long a file is stored in the target JVM application's memory by specifying values for the Maximum Size and Maximum Age parameters. Prerequisites Created a Cryostat instance in your Red Hat OpenShift project. Created a Java application. Logged in to your Cryostat web console. Procedure In the navigation menu on the Cryostat web console, click Automated Rules . The Automated Rules window opens. Click Create . The Create window opens. Enter values in any mandatory fields, such as the Match Expression field. In the Archival Period field, specify a value in seconds, minutes, or hours. In the Preserved Archives field, enter the number of archived recording copies to preserve. To create the automated rule, click Create . The Automated Rules window opens and displays your automated rule in a table. 3.2. Deleting an automated rule The Cryostat web console that runs on the OpenShift Container Platform (OCP) provides a simplified method for deleting a rule definition. You can also use the curl tool to delete an automated rule. The curl tool communicates with your Cryostat instance by using the DELETE endpoint. In the request, you can specify the clean=true parameter, which stops all active Java Flight Recordings (JFRs) started by the selected rule. A sketch of such a request follows this procedure. Prerequisites Logged in to the Cryostat web console. Created an automated rule. Procedure In the navigation menu on the Cryostat web console, click Automated Rules . The Automated Rules window opens and displays all existing automated rules in a table. Note If you have not created an automated rule, only a Create button appears on the Automated Rules window. In the table, select the automated rule that you want to delete. Click the more options icon (...) , then click Delete . Figure 3.1. Delete option from the Automated Rules table The Permanently delete your Automated Rule window opens. To delete the selected automated rule, click Delete .
If you want to also stop any active recordings that were created by the selected rule, select Clean then click Delete . Cryostat deletes your automated rule permanently. Revised on 2024-07-02 13:35:50 UTC
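As a rough illustration of the curl-based deletion mentioned in Section 3.2, the sketch below rests on several assumptions: that the rule-management endpoint is exposed at /api/v2/rules/<rule_name>, that the Cryostat route accepts an OpenShift bearer token, and that the route host and rule name shown are hypothetical. Check the Cryostat API documentation for the exact endpoint and authentication scheme of your release.

# Delete the rule and, with clean=true, stop any recordings it started
curl -k -X DELETE \
  -H "Authorization: Bearer $(oc whoami --show-token)" \
  "https://cryostat-sample.apps.example.com/api/v2/rules/my_rule?clean=true"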
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/using_automated_rules_on_cryostat/additional-automated-rule-functions_con_metadata-labels-auto-rules
Chapter 4. Creating the traffic violations project in Business Central
Chapter 4. Creating the traffic violations project in Business Central For this example, create a new project called traffic-violation . A project is a container for assets such as data objects, DMN assets, and test scenarios. This example project that you are creating is similar to the existing Traffic_Violation sample project in Business Central. Procedure In Business Central, go to Menu Design Projects . Red Hat Process Automation Manager provides a default space called MySpace . You can use the default space to create and test example projects. Click Add Project . Enter traffic-violation in the Name field. Click Add . Figure 4.1. Add Project window The Assets view of the project opens.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_process_automation_manager/dmn-gs-new-project-creating-proc_getting-started-decision-services
5.6. Load Balancing Policy: None
5.6. Load Balancing Policy: None If no load balancing policy is selected, virtual machines are started on the host within a cluster with the lowest CPU utilization and available memory. To determine CPU utilization a combined metric is used that takes into account the virtual CPU count and the CPU usage percent. This approach is the least dynamic, as the only host selection point is when a new virtual machine is started. Virtual machines are not automatically migrated to reflect increased demand on a host. An administrator must decide which host is an appropriate migration target for a given virtual machine. Virtual machines can also be associated with a particular host using pinning. Pinning prevents a virtual machine from being automatically migrated to other hosts. For environments where resources are highly consumed, manual migration is the best approach.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/load_balancing_policy_none
Chapter 1. Messaging Concepts
Chapter 1. Messaging Concepts 1.1. Messaging Systems Messaging systems allow you to loosely couple heterogeneous systems together with added reliability. Unlike systems based on a Remote Procedure Call (RPC) pattern, messaging systems primarily use an asynchronous message passing pattern with no tight relationship between requests and responses. Most messaging systems are flexible enough to also support a request-response mode if needed, but this is not a primary feature of messaging systems. Messaging systems decouple the senders of messages from their consumers. In fact, the senders and consumers of messages are completely independent and know nothing of each other, which allows you to create flexible, loosely coupled systems. Large enterprises often use a messaging system to implement a message bus which loosely couples heterogeneous systems together. Message buses can form the core of an Enterprise Service Bus (ESB). Using a message bus to decouple disparate systems allows the system to grow and adapt more easily. It also allows more flexibility to add new systems or retire old ones since they do not have brittle dependencies on each other. Messaging systems can also incorporate concepts such as delivery guarantees to ensure reliable messaging, transactions to aggregate the sending or consuming of multiple messages as a single unit of work, and durability to allow messages to survive server failure or restart. 1.2. Messaging Styles There are two kinds of messaging styles that most messaging systems support: the point-to-point pattern and the publish-subscribe pattern. Point-to-Point Pattern The point-to-point pattern involves sending a message to a single consumer listening on a queue. Once in the queue, the message is usually made persistent to guarantee delivery. Once the message has moved through the queue, the messaging system delivers it to a consumer. The consumer acknowledges the delivery of the message once it is processed. There can be multiple consumers listening on the same queue for the same message, but only one consumer will receive each message. Publish-Subscribe Pattern The publish-subscribe pattern allows senders to send messages to multiple consumers using a single destination. This destination is often known as a topic . Each topic can have multiple consumers, or subscribers, and unlike point-to-point messaging, every subscriber receives any message published to the topic. Another interesting distinction is that subscribers can be durable. Durable subscriptions pass the server a unique identifier when connecting, which allows the server to identify and send any messages published to the topic since the last time the subscriber made a connection. Such messages are typically retained by the server even after a restart. 1.3. Jakarta Messaging Jakarta Messaging 2.0 is defined in the Jakarta Messaging specification. Jakarta Messaging is a Java API that provides both point-to-point and publish-subscribe messaging styles. Jakarta Messaging also incorporates the use of transactions. Jakarta Messaging does not define a standard wire format, so while vendors of Jakarta Messaging providers may all use the standard APIs, they may use different internal wire protocols to communicate between their clients and servers. 1.4. Jakarta Messaging Destinations Jakarta Messaging destinations, along with Jakarta Messaging connection factories, are administrative objects. Destinations are used by Jakarta Messaging clients for both producing and consuming messages.
The destination allows clients to specify the target when producing messages and the source when consuming messages. When using a publish-subscribe pattern, destinations are referred to as topics. When using a point-to-point pattern, destinations are referred to as queues. Applications may use many different Jakarta Messaging destinations, which are configured on the server side and usually accessed via JNDI.
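Because destinations are administrative objects configured on the server side, they are typically created with the management CLI. The following sketch assumes a running JBoss EAP server and uses placeholder destination names and JNDI entries; add a binding under java:jboss/exported only if remote clients need to look up the destination.

# Start the management CLI and connect to the running server
$EAP_HOME/bin/jboss-cli.sh --connect
# At the CLI prompt, create a queue and a topic with placeholder JNDI bindings
jms-queue add --queue-address=exampleQueue --entries=java:/jms/queue/exampleQueue
jms-topic add --topic-address=exampleTopic --entries=java:/jms/topic/exampleTopic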
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/messaging_concepts
4.5. Directory Tree Design Examples
4.5. Directory Tree Design Examples The following sections provide examples of directory trees designed to support a flat hierarchy as well as several examples of more complex hierarchies. 4.5.1. Directory Tree for an International Enterprise To support an international enterprise, use the Internet domain name as the root point for the directory tree, then branch the tree immediately below that root point for each country where the enterprise has operations. Avoid using a country designator as the root point for the directory tree, as mentioned in Section 4.2.1.1, "Suffix Naming Conventions" , especially if the enterprise is international. Because LDAP places no restrictions on the order of the attributes in the DNs, the c attribute can represent each country branch: Figure 4.17. Using the c Attribute to Represent Different Countries However, some administrators feel that this is stylistically awkward, so they use the l attribute instead to represent different countries: Figure 4.18. Using the l Attribute to Represent Different Countries 4.5.2. Directory Tree for an ISP Internet service providers (ISPs) may support multiple enterprises with their directories. An ISP should consider each customer a unique enterprise and design the directory trees accordingly. For security reasons, each account should be provided a unique directory tree with a unique suffix and an independent security policy. An ISP should consider assigning each customer a separate database and storing these databases on separate servers. Placing each directory tree in its own database allows data to be backed up and restored for each directory tree without affecting the other customers. In addition, partitioning helps reduce performance problems caused by disk contention and reduces the number of accounts potentially affected by a disk outage. Figure 4.19. Directory tree for Example ISP
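As a sketch of the separate-database approach, the following Directory Server 11 command creates a dedicated backend for one customer suffix. The host name, bind DN, suffix, and backend name are placeholders; the command would be repeated for each customer, ideally on a separate server instance.

# Create a dedicated backend (database) for one ISP customer (placeholder values)
dsconf -D "cn=Directory Manager" ldap://ds.example.com backend create \
    --suffix "dc=customer-a,dc=example,dc=net" --be-name customerA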
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_the_directory_tree-directory_tree_design_examples
Chapter 5. Multiple JFR recordings based on the same custom trigger definition
Chapter 5. Multiple JFR recordings based on the same custom trigger definition The Cryostat 2.4 agent can dynamically start a JFR recording for each custom trigger definition only once. In this release, the Cryostat agent cannot start multiple JFR recordings for the same custom trigger condition on a recurring basis. Once the Cryostat agent starts a JFR recording for a specific custom trigger definition, the agent then ignores this trigger definition for the rest of the agent session. In this situation, if you want to enable the Cryostat agent to start new JFR recordings based on custom trigger conditions that previously triggered a recording, you must restart the Cryostat agent.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/enabling_dynamic_jfr_recordings_based_on_mbean_custom_triggers/con_multiple-recordings-based-on-same-custom-trigger_cryostat
Chapter 6. Known Issues
Chapter 6. Known Issues 6.1. Installation anaconda component To automatically create an appropriate partition table on disks that are uninitialized or contain unrecognized formatting, use the zerombr kickstart command. The --initlabel option of the clearpart command is not intended to serve this purpose. anaconda component On s390x systems, you cannot use automatic partitioning and encryption. If you want to use storage encryption, you must perform custom partitioning. Do not place the /boot volume on an encrypted volume. anaconda component The order of device names assigned to USB attached storage devices is not guaranteed. Certain USB attached storage devices may take longer to initialize than others, which can result in the device receiving a different name than you expect (for example, sdc instead of sda ). During installation, verify the storage device size, name, and type when configuring partitions and file systems. anaconda component The kdump default on feature currently depends on Anaconda to insert the crashkernel= parameter into the kernel parameter list in the boot loader's configuration file. anaconda component, BZ# 623261 In some circumstances, disks that contain a whole disk format (for example, an LVM Physical Volume populating a whole disk) are not cleared correctly using the clearpart --initlabel kickstart command. Adding the --all switch, as in clearpart --initlabel --all , ensures disks are cleared correctly (see the sample kickstart fragment at the end of this section). anaconda component When installing on the IBM System z architecture, if the installation is being performed over SSH, avoid resizing the terminal window containing the SSH session. If the terminal window is resized during the installation, the installer will exit and the installation will terminate. yaboot component, BZ# 613929 The kernel image provided on the CD/DVD is too large for Open Firmware. Consequently, on the POWER architecture, directly booting the kernel image over a network from the CD/DVD is not possible. Instead, use yaboot to boot from a network. anaconda component The Anaconda partition editing interface includes a button labeled Resize . This feature is intended for users wishing to shrink an existing file system and an underlying volume to make room for an installation of a new system. Users performing manual partitioning cannot use the Resize button to change sizes of partitions as they create them. If you determine that a partition needs to be larger than you initially created it, you must first delete it in the partitioning editor and then create a new one with the larger size. system-config-kickstart component Channel IDs (read, write, data) for network devices are required for defining and configuring network devices on IBM S/390 systems. However, system-config-kickstart , the graphical user interface for generating a kickstart configuration, cannot define channel IDs for a network device. To work around this issue, manually edit the kickstart configuration that system-config-kickstart generates to include the desired network devices.
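The following kickstart fragment is a minimal sketch that combines the two partition-clearing workarounds described above; it assumes an unattended installation where wiping every detected disk is acceptable.

# Initialize disks that are uninitialized or contain unrecognized formatting
zerombr
# Clear all partitions and create a new disk label, including disks with whole-disk formats such as LVM physical volumes
clearpart --all --initlabel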
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/known_issues
5.7. Viewing Virtual Machines Pinned to a Host
5.7. Viewing Virtual Machines Pinned to a Host You can view virtual machines pinned to a host even while the virtual machines are offline. Use the Pinned to Host list to see which virtual machines will be affected and which virtual machines will require a manual restart after the host becomes active again. Viewing Virtual Machines Pinned to a Host Click Compute Hosts . Click a host name to go to the details view. Click the Virtual Machines tab. Click Pinned to Host .
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/viewing_virtual_machines_pinned_to_a_host
12.3. Customizing Notification Messages
12.3. Customizing Notification Messages The email notifications are constructed using a template for each type of message. This allows messages to be informative, easily reproducible, and easily customizable. The CA uses templates for its notification messages. Separate templates exist for HTML and plain text messages. 12.3.1. Customizing CA Notification Messages Each type of CA notification message has an HTML template and a plain text template associated with it. Messages are constructed from text, tokens, and, for the HTML templates, HTML markup. Tokens are variables, identified by a dollar sign ( USD ), in the message that are replaced by the current value when the message is constructed. See Table 12.3, "Notification Variables" for a list of available tokens. The contents of any message type can be modified by changing the text and tokens in the message template. The appearance of the HTML messages can be changed by modifying the HTML commands in the HTML message template. The default text version of the certificate-issuance-notification message is as follows: This template can be customized as desired, by rearranging, adding, or removing tokens and text, as shown: Notification message templates are located in the /var/lib/pki/ instance_name /ca/emails directory. The name and location of these messages can be changed; make the appropriate changes when configuring the notification. All template names can be changed except for the certificate rejected templates; these names must remain the same. The templates associated with certificate issuance and certificate rejection must be located in the same directory and must use the same extension. Table 12.1, "Notification Templates" lists the default template files provided for creating notification messages. Table 12.2, "Job Notification Email Templates" lists the default template files provided for creating job summary messages. Table 12.1. Notification Templates Filename Description certIssued_CA Template for plain text notification emails to end entities when certificates are issued. certIssued_CA.html Template for HTML-based notification emails to end entities when certificates are issued. certRequestRejected.html Template for HTML-based notification emails to end entities when certificate requests are rejected. certRequestRevoked_CA Template for plain text notification emails to end entities when a certificate is revoked. certRequestRevoked_CA.html Template for HTML-based notification emails to end entities when a certificate is revoked. reqInQueue_CA Template for plain text notification emails to agents when a request enters the queue. reqInQueue_CA.html Template for HTML-based notification emails to agents when a request enters the queue. Table 12.2. Job Notification Email Templates Filename Description rnJob1.txt Template for formulating the message content sent to end entities to inform them that their certificates are about to expire and that the certificates should be renewed or replaced before they expire. rnJob1Summary.txt Template for constructing the summary report to be sent to agents and administrators. Uses the rnJob1Item.txt template to format items in the message. rnJob1Item.txt Template for formatting the items included in the summary report. riq1Item.html Template for formatting the items included in the summary table, which is constructed using the riq1Summary.html template. 
riq1Summary.html Template for formulating the report or table that summarizes how many requests are pending in the agent queue of a Certificate Manager. publishCerts Template for the report or table that summarizes the certificates to be published to the directory. Uses the publishCertsItem.html template to format the items in the table. publishCertsItem.html Template for formatting the items included in the summary table. ExpiredUnpublishJob Template for the report or table that summarizes removal of expired certificates from the directory. Uses the ExpiredUnpublishJobItem template to format the items in the table. ExpiredUnpublishJobItem Template for formatting the items included in the summary table. Table 12.3, "Notification Variables" lists and defines the variables that can be used in the notification message templates. Table 12.3. Notification Variables Token Description USDCertType Specifies the type of certificate; these can be any of the following: TLS client ( client ) TLS server ( server ) CA signing certificate ( ca ) other ( other ). USDExecutionTime Gives the time the job was run. USDHexSerialNumber Gives the serial number of the certificate that was issued in hexadecimal format. USDHttpHost Gives the fully qualified host name of the Certificate Manager to which end entities should connect to retrieve their certificates. USDHttpPort Gives the Certificate Manager's end-entities (non-TLS) port number. USDInstanceID Gives the ID of the subsystem that sent the notification. USDIssuerDN Gives the DN of the CA that issued the certificate. USDNotAfter Gives the end date of the validity period. USDNotBefore Gives the beginning date of the validity period. USDRecipientEmail Gives the email address of the recipient. USDRequestId Gives the request ID. USDRequestorEmail Gives the email address of the requester. USDRequestType Gives the type of request that was made. USDRevocationDate Gives the date the certificate was revoked. USDSenderEmail Gives the email address of the sender; this is the same as the one specified in the Sender's E-mail Address field in the notification configuration. USDSerialNumber Gives the serial number of the certificate that has been issued; the serial number is displayed as a hexadecimal value in the resulting message. USDStatus Gives the request status. USDSubjectDN Gives the DN of the certificate subject. USDSummaryItemList Lists the items in the summary notification. Each item corresponds to a certificate the job detects for renewal or removal from the publishing directory. USDSummaryTotalFailure Gives the total number of items in the summary report that failed. USDSummaryTotalNum Gives the total number of certificate requests that are pending in the queue or the total number of certificates to be renewed or removed from the directory in the summary report. USDSummaryTotalSuccess Shows how many of the total number of items in the summary report succeeded.
[ "Your certificate request has been processed successfully. SubjectDN= USDSubjectDN IssuerDN= USDIssuerDN notAfter= USDNotAfter notBefore= USDNotBefore Serial Number= 0xUSDHexSerialNumber To get your certificate, please follow this URL: https://USDHttpHost:USDHttpPort/displayBySerial?op=displayBySerial& serialNumber=USDSerialNumber Please contact your admin if there is any problem. And, of course, this is just a \\USDSAMPLE\\USD email notification form.", "THE EXAMPLE COMPANY CERTIFICATE ISSUANCE CENTER Your certificate has been issued! You can pick up your new certificate at the following website: https://USDHttpHost:USDHttpPort/displayBySerial?op=displayBySerial& serialNumber=USDSerialNumber This certificate has been issued with the following information: Serial Number= 0xUSDHexSerialNumber Name of Certificate Holder = USDSubjectDN Name of Issuer = USDIssuerDN Certificate Expiration Date = USDNotAfter Certificate Validity Date = USDNotBefore Contact IT by calling X1234, or going to the IT website http://IT if you have any problems." ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/customizing_notification_messages
Chapter 1. Introduction to Web Services
Chapter 1. Introduction to Web Services Web services provide a standard means of interoperating among different software applications. Each application can run on a variety of platforms and frameworks. Web services facilitate internal, heterogeneous subsystem communication. The interoperability increases service reuse because functions do not need to be rewritten for various environments.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/developing_web_services_applications/con_introduction-to-web-services_default
Chapter 1. Preface
Chapter 1. Preface This document provides an overview of differences between two major versions of Red Hat Enterprise Linux: RHEL 8 and RHEL 9. It provides a list of changes relevant for evaluating an upgrade to RHEL 9 rather than an exhaustive list of all alterations. For details regarding RHEL 9 usage, see the RHEL 9 product documentation . For guidance regarding an in-place upgrade from RHEL 8 to RHEL 9, see Upgrading from RHEL 8 to RHEL 9 . For information about major differences between RHEL 7 and RHEL 8, see Considerations in adopting RHEL 8 . Capabilities and limits of Red Hat Enterprise Linux 9 as compared to other versions of the system are available in the Knowledgebase article Red Hat Enterprise Linux technology capabilities and limits . Information regarding the Red Hat Enterprise Linux life cycle is provided in the Red Hat Enterprise Linux Life Cycle document. The Package manifest document provides a package listing for RHEL 9, including licenses and application compatibility levels. Application compatibility levels are explained in the Red Hat Enterprise Linux 9: Application Compatibility Guide document.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/preface_considerations-in-adopting-rhel-9
Chapter 84. ExternalConfigurationVolumeSource schema reference
Chapter 84. ExternalConfigurationVolumeSource schema reference Used in: ExternalConfiguration Property Description configMap Reference to a key in a ConfigMap. Exactly one Secret or ConfigMap has to be specified. For more information, see the external documentation for core/v1 configmapvolumesource . ConfigMapVolumeSource name Name of the volume which will be added to the Kafka Connect pods. string secret Reference to a key in a Secret. Exactly one Secret or ConfigMap has to be specified. For more information, see the external documentation for core/v1 secretvolumesource . SecretVolumeSource
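For reference, the following fragment of a KafkaConnect custom resource is a minimal sketch of how such a volume might be declared. The volume and ConfigMap names are placeholders, and a secret reference would replace configMap when the data comes from a Secret; exactly one of the two must be set.

spec:
  externalConfiguration:
    volumes:
      - name: connector-config        # volume name added to the Kafka Connect pods
        configMap:
          name: my-connector-config   # placeholder for an existing ConfigMap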
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-externalconfigurationvolumesource-reference
Chapter 15. Changing resources for the OpenShift Data Foundation components
Chapter 15. Changing resources for the OpenShift Data Foundation components When you install OpenShift Data Foundation, it comes with pre-defined resources that the OpenShift Data Foundation pods can consume. In some situations with higher I/O load, it might be required to increase these limits. To change the CPU and memory resources on the rook-ceph pods, see Section 15.1, "Changing the CPU and memory resources on the rook-ceph pods" . To tune the resources for the Multicloud Object Gateway (MCG), see Section 15.2, "Tuning the resources for the MCG" . 15.1. Changing the CPU and memory resources on the rook-ceph pods When you install OpenShift Data Foundation, it comes with pre-defined CPU and memory resources for the rook-ceph pods. You can manually increase these values according to the requirements. You can change the CPU and memory resources on the following pods: mgr mds rgw The following example illustrates how to change the CPU and memory resources on the rook-ceph pods. In this example, the existing MDS pod values of cpu and memory are increased from 1 and 4Gi to 2 and 8Gi respectively. Edit the storage cluster: <storagecluster_name> Specify the name of the storage cluster. For example: Add the following lines to the storage cluster Custom Resource (CR): Save the changes and exit the editor. Alternatively, run the oc patch command to change the CPU and memory value of the mds pod: <storagecluster_name> Specify the name of the storage cluster. For example: 15.2. Tuning the resources for the MCG The default configuration for the Multicloud Object Gateway (MCG) is optimized for low resource consumption and not performance. For more information on how to tune the resources for the MCG, see the Red Hat Knowledgebase solution Performance tuning guide for Multicloud Object Gateway (NooBaa) .
[ "oc edit storagecluster -n openshift-storage <storagecluster_name>", "oc edit storagecluster -n openshift-storage ocs-storagecluster", "spec: resources: mds: limits: cpu: 2 memory: 8Gi requests: cpu: 2 memory: 8Gi", "oc patch -n openshift-storage storagecluster <storagecluster_name> --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"2\",\"memory\": \"8Gi\"},\"requests\": {\"cpu\": \"2\",\"memory\": \"8Gi\"}}}}}'", "oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch ' {\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"2\",\"memory\": \"8Gi\"},\"requests\": {\"cpu\": \"2\",\"memory\": \"8Gi\"}}}}} '" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/troubleshooting_openshift_data_foundation/changing-resources-for-the-openshift-data-foundation-components_rhodf
Chapter 1. Overview of Builds
Chapter 1. Overview of Builds Builds is an extensible build framework based on the Shipwright project , which you can use to build container images on an OpenShift Container Platform cluster. You can build container images from source code and Dockerfiles by using image build tools, such as Source-to-Image (S2I) and Buildah. You can create and apply build resources, view logs of build runs, and manage builds in your OpenShift Container Platform namespaces. Builds includes the following capabilities: Standard Kubernetes-native API for building container images from source code and Dockerfiles Support for Source-to-Image (S2I) and Buildah build strategies Extensibility with your own custom build strategies Execution of builds from source code in a local directory Shipwright CLI for creating and viewing logs, and managing builds on the cluster Integrated user experience with the Developer perspective of the OpenShift Container Platform web console Note Because Builds releases on a different cadence from OpenShift Container Platform, the Builds documentation is now available as a separate documentation set at builds for Red Hat OpenShift .
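As an illustration of the Shipwright CLI capability listed above, the following commands are a rough sketch; the flag names and the sample repository are assumptions based on the upstream Shipwright CLI and may differ between releases.

# Create a Build from a Git source and push the result to an image repository (placeholder values)
shp build create sample-go-build \
    --source-url=https://github.com/shipwright-io/sample-go \
    --output-image=image-registry.openshift-image-registry.svc:5000/my-namespace/sample-go
# Start a BuildRun and stream its logs
shp build run sample-go-build --follow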
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/builds_using_shipwright/overview-openshift-builds
Chapter 3. Red Hat Virtualization 4.4 Batch Update 1 (ovirt-4.4.2)
Chapter 3. Red Hat Virtualization 4.4 Batch Update 1 (ovirt-4.4.2) 3.1. Red Hat Virtualization Manager 4.4 for RHEL 8 x86_64 (RPMs) The following table outlines the packages included in the rhv-4.4-manager-for-rhel-8-x86_64-rpms repository. Table 3.1. Red Hat Virtualization Manager 4.4 for RHEL 8 x86_64 (RPMs) Name Version Advisory ansible-runner-service 1.0.5-1.el8ev RHSA-2020:3807 ovirt-ansible-hosted-engine-setup 1.1.8-1.el8ev RHBA-2020:3820 ovirt-ansible-infra 1.2.2-1.el8ev RHBA-2020:3820 ovirt-engine 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-backend 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-dbscripts 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-dwh 4.4.2.1-1.el8ev RHSA-2020:3807 ovirt-engine-dwh-grafana-integration-setup 4.4.2.1-1.el8ev RHSA-2020:3807 ovirt-engine-dwh-setup 4.4.2.1-1.el8ev RHSA-2020:3807 ovirt-engine-extension-aaa-ldap 1.4.1-1.el8ev RHSA-2020:3807 ovirt-engine-extension-aaa-ldap-setup 1.4.1-1.el8ev RHSA-2020:3807 ovirt-engine-health-check-bundler 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-restapi 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-setup 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-setup-base 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-setup-plugin-cinderlib 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-setup-plugin-imageio 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-setup-plugin-ovirt-engine 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-setup-plugin-ovirt-engine-common 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-setup-plugin-vmconsole-proxy-helper 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-setup-plugin-websocket-proxy 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-tools 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-tools-backup 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-ui-extensions 1.2.3-1.el8ev RHSA-2020:3807 ovirt-engine-vmconsole-proxy-helper 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-webadmin-portal 4.4.2.3-6 RHSA-2020:3807 ovirt-engine-websocket-proxy 4.4.2.3-6 RHSA-2020:3807 ovirt-imageio-client 2.0.10-1.el8ev RHBA-2020:3820 ovirt-imageio-common 2.0.10-1.el8ev RHBA-2020:3820 ovirt-imageio-daemon 2.0.10-1.el8ev RHBA-2020:3820 ovirt-log-collector 4.4.3-1.el8ev RHSA-2020:3807 ovirt-web-ui 1.6.4-1.el8ev RHSA-2020:3807 python3-ovirt-engine-lib 4.4.2.3-6 RHSA-2020:3807 rhvm 4.4.2.3-6 RHSA-2020:3807 rhvm-branding-rhv 4.4.5-1.el8ev RHSA-2020:3807 rhvm-dependencies 4.4.1-1.el8ev RHSA-2020:3807 vdsm-jsonrpc-java 1.5.5-1.el8ev RHSA-2020:3807 3.2. Red Hat Virtualization 4 Management Agents for RHEL 8 Power, little endian (RPMs) The following table outlines the packages included in the rhv-4-mgmt-agent-for-rhel-8-ppc64le-rpms repository. Table 3.2. 
Red Hat Virtualization 4 Management Agents for RHEL 8 Power, little endian (RPMs) Name Version Advisory ovirt-imageio-client 2.0.10-1.el8ev RHBA-2020:3820 ovirt-imageio-common 2.0.10-1.el8ev RHBA-2020:3820 ovirt-imageio-daemon 2.0.10-1.el8ev RHBA-2020:3820 vdsm 4.40.26-1.el8ev RHBA-2020:3822 vdsm-api 4.40.26-1.el8ev RHBA-2020:3822 vdsm-client 4.40.26-1.el8ev RHBA-2020:3822 vdsm-common 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-checkips 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-cpuflags 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-ethtool-options 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-extra-ipv4-addrs 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-fcoe 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-localdisk 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-nestedvt 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-openstacknet 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-vhostmd 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-vmfex-dev 4.40.26-1.el8ev RHBA-2020:3822 vdsm-http 4.40.26-1.el8ev RHBA-2020:3822 vdsm-jsonrpc 4.40.26-1.el8ev RHBA-2020:3822 vdsm-network 4.40.26-1.el8ev RHBA-2020:3822 vdsm-python 4.40.26-1.el8ev RHBA-2020:3822 vdsm-yajsonrpc 4.40.26-1.el8ev RHBA-2020:3822 3.3. Red Hat Virtualization 4 Management Agents for RHEL 8 x86_64 (RPMs) The following table outlines the packages included in the rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms repository. Table 3.3. Red Hat Virtualization 4 Management Agents for RHEL 8 x86_64 (RPMs) Name Version Advisory ovirt-ansible-hosted-engine-setup 1.1.8-1.el8ev RHBA-2020:3820 ovirt-hosted-engine-setup 2.4.6-1.el8ev RHBA-2020:3822 ovirt-imageio-client 2.0.10-1.el8ev RHBA-2020:3820 ovirt-imageio-common 2.0.10-1.el8ev RHBA-2020:3820 ovirt-imageio-daemon 2.0.10-1.el8ev RHBA-2020:3820 vdsm 4.40.26-1.el8ev RHBA-2020:3822 vdsm-api 4.40.26-1.el8ev RHBA-2020:3822 vdsm-client 4.40.26-1.el8ev RHBA-2020:3822 vdsm-common 4.40.26-1.el8ev RHBA-2020:3822 vdsm-gluster 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-checkips 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-cpuflags 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-ethtool-options 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-extra-ipv4-addrs 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-fcoe 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-localdisk 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-nestedvt 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-openstacknet 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-vhostmd 4.40.26-1.el8ev RHBA-2020:3822 vdsm-hook-vmfex-dev 4.40.26-1.el8ev RHBA-2020:3822 vdsm-http 4.40.26-1.el8ev RHBA-2020:3822 vdsm-jsonrpc 4.40.26-1.el8ev RHBA-2020:3822 vdsm-network 4.40.26-1.el8ev RHBA-2020:3822 vdsm-python 4.40.26-1.el8ev RHBA-2020:3822 vdsm-yajsonrpc 4.40.26-1.el8ev RHBA-2020:3822 3.4. Red Hat Virtualization 4 Tools for RHEL 8 Power, little endian (RPMs) The following table outlines the packages included in the rhv-4-tools-for-rhel-8-ppc64le-rpms repository. Table 3.4. Red Hat Virtualization 4 Tools for RHEL 8 Power, little endian (RPMs) Name Version Advisory ovirt-ansible-hosted-engine-setup 1.1.8-1.el8ev RHBA-2020:3820 ovirt-ansible-infra 1.2.2-1.el8ev RHBA-2020:3820 3.5. Red Hat Virtualization 4 Tools for RHEL 8 x86_64 (RPMs) The following table outlines the packages included in the rhv-4-tools-for-rhel-8-x86_64-rpms repository. Table 3.5. Red Hat Virtualization 4 Tools for RHEL 8 x86_64 (RPMs) Name Version Advisory ovirt-ansible-hosted-engine-setup 1.1.8-1.el8ev RHBA-2020:3820 ovirt-ansible-infra 1.2.2-1.el8ev RHBA-2020:3820 3.6. 
Red Hat Virtualization Host for RHEL 8 x86_64 (RPMs) The following table outlines the packages included in the rhvh-4-for-rhel-8-x86_64-rpms repository. Table 3.6. Red Hat Virtualization Host for RHEL 8 x86_64 (RPMs) Name Version Advisory ovirt-ansible-hosted-engine-setup 1.1.8-1.el8ev RHBA-2020:3820 vdsm-hook-nestedvt 4.40.26-1.el8ev RHBA-2020:3822
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/package_manifest/ovirt-4.4.2
Chapter 7. Using quotas and limit ranges
Chapter 7. Using quotas and limit ranges A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that may be consumed by resources in that project. Using quotas and limit ranges, cluster administrators can set constraints to limit the number of objects or amount of compute resources that are used in your project. This helps cluster administrators better manage and allocate resources across all projects, and ensure that no projects are using more than is appropriate for the cluster size. Important Quotas are set by cluster administrators and are scoped to a given project. OpenShift Container Platform project owners can change quotas for their project, but not limit ranges. OpenShift Container Platform users cannot modify quotas or limit ranges. The following sections help you understand how to check on your quota and limit range settings, what sorts of things they can constrain, and how you can request or limit compute resources in your own pods and containers. 7.1. Resources managed by quota A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that may be consumed by resources in that project. The following describes the set of compute resources and object types that may be managed by a quota. Note A pod is in a terminal state if status.phase is Failed or Succeeded . Table 7.1. Compute resources managed by quota Resource Name Description cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. ephemeral-storage The sum of local ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default. requests.cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. requests.memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. requests.ephemeral-storage The sum of ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default. limits.cpu The sum of CPU limits across all pods in a non-terminal state cannot exceed this value. limits.memory The sum of memory limits across all pods in a non-terminal state cannot exceed this value. limits.ephemeral-storage The sum of ephemeral storage limits across all pods in a non-terminal state cannot exceed this value. 
This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default. Table 7.2. Storage resources managed by quota Resource Name Description requests.storage The sum of storage requests across all persistent volume claims in any state cannot exceed this value. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. <storage-class-name>.storageclass.storage.k8s.io/requests.storage The sum of storage requests across all persistent volume claims in any state that have a matching storage class, cannot exceed this value. <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims The total number of persistent volume claims with a matching storage class that can exist in the project. Table 7.3. Object counts managed by quota Resource Name Description pods The total number of pods in a non-terminal state that can exist in the project. replicationcontrollers The total number of replication controllers that can exist in the project. resourcequotas The total number of resource quotas that can exist in the project. services The total number of services that can exist in the project. secrets The total number of secrets that can exist in the project. configmaps The total number of ConfigMap objects that can exist in the project. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. openshift.io/imagestreams The total number of image streams that can exist in the project. You can configure an object count quota for these standard namespaced resource types using the count/<resource>.<group> syntax. USD oc create quota <name> --hard=count/<resource>.<group>=<quota> 1 1 <resource> is the name of the resource, and <group> is the API group, if applicable. Use the kubectl api-resources command for a list of resources and their associated API groups. 7.1.1. Setting resource quota for extended resources Overcommitment of resources is not allowed for extended resources, so you must specify requests and limits for the same extended resource in a quota. Currently, only quota items with the prefix requests. are allowed for extended resources. The following is an example scenario of how to set resource quota for the GPU resource nvidia.com/gpu . Procedure To determine how many GPUs are available on a node in your cluster, use the following command: USD oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu' Example output openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu: 0 0 In this example, 2 GPUs are available. Use this command to set a quota in the namespace nvidia . 
In this example, the quota is 1 : USD cat gpu-quota.yaml Example output apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1 Create the quota with the following command: USD oc create -f gpu-quota.yaml Example output resourcequota/gpu-quota created Verify that the namespace has the correct quota set using the following command: USD oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1 Run a pod that asks for a single GPU with the following command: USD oc create -f gpu-pod.yaml Example gpu-pod.yaml apiVersion: v1 kind: Pod metadata: generateName: gpu-pod-s46h7 namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: "compute,utility" - name: NVIDIA_REQUIRE_CUDA value: "cuda>=5.0" command: ["sleep"] args: ["infinity"] resources: limits: nvidia.com/gpu: 1 Verify that the pod is running with the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m Verify that the quota Used counter is correct by running the following command: USD oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1 Using the following command, attempt to create a second GPU pod in the nvidia namespace. This is technically available on the node because it has 2 GPUs: USD oc create -f gpu-pod.yaml Example output Error from server (Forbidden): error when creating "gpu-pod.yaml": pods "gpu-pod-f7z2w" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1 This Forbidden error message occurs because you have a quota of 1 GPU and this pod tried to allocate a second GPU, which exceeds its quota. 7.1.2. Quota scopes Each quota can have an associated set of scopes . A quota only measures usage for a resource if it matches the intersection of enumerated scopes. Adding a scope to a quota restricts the set of resources to which that quota can apply. Specifying a resource outside of the allowed set results in a validation error. Scope Description Terminating Match pods where spec.activeDeadlineSeconds >= 0 . NotTerminating Match pods where spec.activeDeadlineSeconds is nil . BestEffort Match pods that have best effort quality of service for either cpu or memory . NotBestEffort Match pods that do not have best effort quality of service for cpu and memory . A BestEffort scope restricts a quota to limiting the following resources: pods A Terminating , NotTerminating , and NotBestEffort scope restricts a quota to tracking the following resources: pods memory requests.memory limits.memory cpu requests.cpu limits.cpu ephemeral-storage requests.ephemeral-storage limits.ephemeral-storage Note Ephemeral storage requests and limits apply only if you enabled the ephemeral storage technology preview. This feature is disabled by default. Additional resources See Resources managed by quotas for more on compute resources. See Quality of Service Classes for more on committing compute resources. 7.2. Admin quota usage 7.2.1.
Quota enforcement After a resource quota for a project is first created, the project restricts the ability to create any new resources that can violate a quota constraint until it has calculated updated usage statistics. After a quota is created and usage statistics are updated, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource. When you delete a resource, your quota use is decremented during the full recalculation of quota statistics for the project. A configurable amount of time determines how long it takes to reduce quota usage statistics to their current observed system value. If project modifications exceed a quota usage limit, the server denies the action, and an appropriate error message is returned to the user explaining the quota constraint violated, and what their currently observed usage stats are in the system. 7.2.2. Requests compared to limits When allocating compute resources by quota, each container can specify a request and a limit value each for CPU, memory, and ephemeral storage. Quotas can restrict any of these values. If the quota has a value specified for requests.cpu or requests.memory , then it requires that every incoming container make an explicit request for those resources. If the quota has a value specified for limits.cpu or limits.memory , then it requires that every incoming container specify an explicit limit for those resources. 7.2.3. Sample resource quota definitions Example core-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: "10" 1 persistentvolumeclaims: "4" 2 replicationcontrollers: "20" 3 secrets: "10" 4 services: "10" 5 1 The total number of ConfigMap objects that can exist in the project. 2 The total number of persistent volume claims (PVCs) that can exist in the project. 3 The total number of replication controllers that can exist in the project. 4 The total number of secrets that can exist in the project. 5 The total number of services that can exist in the project. Example openshift-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: "10" 1 1 The total number of image streams that can exist in the project. Example compute-resources.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: "4" 1 requests.cpu: "1" 2 requests.memory: 1Gi 3 requests.ephemeral-storage: 2Gi 4 limits.cpu: "2" 5 limits.memory: 2Gi 6 limits.ephemeral-storage: 4Gi 7 1 The total number of pods in a non-terminal state that can exist in the project. 2 Across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core. 3 Across all pods in a non-terminal state, the sum of memory requests cannot exceed 1Gi. 4 Across all pods in a non-terminal state, the sum of ephemeral storage requests cannot exceed 2Gi. 5 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed 2 cores. 6 Across all pods in a non-terminal state, the sum of memory limits cannot exceed 2Gi. 7 Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed 4Gi. Example besteffort.yaml apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: "1" 1 scopes: - BestEffort 2 1 The total number of pods in a non-terminal state with BestEffort quality of service that can exist in the project. 
2 Restricts the quota to only matching pods that have BestEffort quality of service for either memory or CPU. Example compute-resources-long-running.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: "4" 1 limits.cpu: "4" 2 limits.memory: "2Gi" 3 limits.ephemeral-storage: "4Gi" 4 scopes: - NotTerminating 5 1 The total number of pods in a non-terminal state. 2 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. 4 Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed this value. 5 Restricts the quota to only matching pods where spec.activeDeadlineSeconds is set to nil . Build pods will fall under NotTerminating unless the RestartNever policy is applied. Example compute-resources-time-bound.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: "2" 1 limits.cpu: "1" 2 limits.memory: "1Gi" 3 limits.ephemeral-storage: "1Gi" 4 scopes: - Terminating 5 1 The total number of pods in a non-terminal state. 2 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. 4 Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed this value. 5 Restricts the quota to only matching pods where spec.activeDeadlineSeconds >=0 . For example, this quota would charge for build pods, but not long running pods such as a web server or database. Example storage-consumption.yaml apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: "10" 1 requests.storage: "50Gi" 2 gold.storageclass.storage.k8s.io/requests.storage: "10Gi" 3 silver.storageclass.storage.k8s.io/requests.storage: "20Gi" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" 5 bronze.storageclass.storage.k8s.io/requests.storage: "0" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" 7 1 The total number of persistent volume claims in a project 2 Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value. 3 Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value. 4 Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value. 5 Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value. 6 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0 , it means bronze storage class cannot request storage. 7 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0 , it means bronze storage class cannot create claims. 7.2.4. Creating a quota To create a quota, first define the quota in a file. Then use that file to apply it to a project. See the Additional resources section for a link describing this. 
USD oc create -f <resource_quota_definition> [-n <project_name>] Here is an example using the core-object-counts.yaml resource quota definition and the demoproject project name: USD oc create -f core-object-counts.yaml -n demoproject 7.2.5. Creating object count quotas You can create an object count quota for all OpenShift Container Platform standard namespaced resource types, such as BuildConfig , and DeploymentConfig . An object quota count places a defined quota on all standard namespaced resource types. When using a resource quota, an object is charged against the quota if it exists in server storage. These types of quotas are useful to protect against exhaustion of storage resources. To configure an object count quota for a resource, run the following command: USD oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> Example showing object count quota: USD oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 resourcequota "test" created USD oc describe quota test Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4 This example limits the listed resources to the hard limit in each project in the cluster. 7.2.6. Viewing a quota You can view usage statistics related to any hard limits defined in a project's quota by navigating in the web console to the project's Quota page. You can also use the CLI to view quota details: First, get the list of quotas defined in the project. For example, for a project called demoproject : USD oc get quota -n demoproject NAME AGE besteffort 11m compute-resources 2m core-object-counts 29m Describe the quota you are interested in, for example the core-object-counts quota: USD oc describe quota core-object-counts -n demoproject Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10 7.2.7. Configuring quota synchronization period When a set of resources are deleted, the synchronization time frame of resources is determined by the resource-quota-sync-period setting in the /etc/origin/master/master-config.yaml file. Before quota usage is restored, a user can encounter problems when attempting to reuse the resources. You can change the resource-quota-sync-period setting to have the set of resources regenerate in the needed amount of time (in seconds) for the resources to be once again available: Example resource-quota-sync-period setting kubernetesMasterConfig: apiLevels: - v1beta3 - v1 apiServerArguments: null controllerArguments: resource-quota-sync-period: - "10s" After making any changes, restart the controller services to apply them. USD master-restart api USD master-restart controllers Adjusting the regeneration time can be helpful for creating resources and determining resource usage when automation is used. Note The resource-quota-sync-period setting balances system performance. Reducing the sync period can result in a heavy load on the controller. 7.2.8. Explicit quota to consume a resource If a resource is not managed by quota, a user has no restriction on the amount of resource that can be consumed. For example, if there is no quota on storage related to the gold storage class, the amount of gold storage a project can create is unbounded. 
For high-cost compute or storage resources, administrators can require an explicit quota be granted to consume a resource. For example, if a project was not explicitly given quota for storage related to the gold storage class, users of that project would not be able to create any storage of that type. In order to require explicit quota to consume a particular resource, the following stanza should be added to the master-config.yaml. admissionConfig: pluginConfig: ResourceQuota: configuration: apiVersion: resourcequota.admission.k8s.io/v1alpha1 kind: Configuration limitedResources: - resource: persistentvolumeclaims 1 matchContains: - gold.storageclass.storage.k8s.io/requests.storage 2 1 The group or resource to whose consumption is limited by default. 2 The name of the resource tracked by quota associated with the group/resource to limit by default. In the above example, the quota system intercepts every operation that creates or updates a PersistentVolumeClaim . It checks what resources controlled by quota would be consumed. If there is no covering quota for those resources in the project, the request is denied. In this example, if a user creates a PersistentVolumeClaim that uses storage associated with the gold storage class and there is no matching quota in the project, the request is denied. Additional resources For examples of how to create the file needed to set quotas, see Resources managed by quotas . A description of how to allocate compute resources managed by quota . For information on managing limits and quota on project resources, see Working with projects . If a quota has been defined for your project, see Understanding deployments for considerations in cluster configurations. 7.3. Setting limit ranges A limit range, defined by a LimitRange object, defines compute resource constraints at the pod, container, image, image stream, and persistent volume claim level. The limit range specifies the amount of resources that a pod, container, image, image stream, or persistent volume claim can consume. All requests to create and modify resources are evaluated against each LimitRange object in the project. If the resource violates any of the enumerated constraints, the resource is rejected. If the resource does not set an explicit value, and if the constraint supports a default value, the default value is applied to the resource. For CPU and memory limits, if you specify a maximum value but do not specify a minimum limit, the resource can consume more CPU and memory resources than the maximum value. Core limit range object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "core-resource-limits" 1 spec: limits: - type: "Pod" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "200m" 4 memory: "6Mi" 5 - type: "Container" max: cpu: "2" 6 memory: "1Gi" 7 min: cpu: "100m" 8 memory: "4Mi" 9 default: cpu: "300m" 10 memory: "200Mi" 11 defaultRequest: cpu: "200m" 12 memory: "100Mi" 13 maxLimitRequestRatio: cpu: "10" 14 1 The name of the limit range object. 2 The maximum amount of CPU that a pod can request on a node across all containers. 3 The maximum amount of memory that a pod can request on a node across all containers. 4 The minimum amount of CPU that a pod can request on a node across all containers. If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more than the max CPU value. 5 The minimum amount of memory that a pod can request on a node across all containers. 
If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more than the max memory value. 6 The maximum amount of CPU that a single container in a pod can request. 7 The maximum amount of memory that a single container in a pod can request. 8 The minimum amount of CPU that a single container in a pod can request. If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more than the max CPU value. 9 The minimum amount of memory that a single container in a pod can request. If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more than the max memory value. 10 The default CPU limit for a container if you do not specify a limit in the pod specification. 11 The default memory limit for a container if you do not specify a limit in the pod specification. 12 The default CPU request for a container if you do not specify a request in the pod specification. 13 The default memory request for a container if you do not specify a request in the pod specification. 14 The maximum limit-to-request ratio for a container. OpenShift Container Platform Limit range object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "openshift-resource-limits" spec: limits: - type: openshift.io/Image max: storage: 1Gi 1 - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 - type: "Pod" max: cpu: "2" 4 memory: "1Gi" 5 ephemeral-storage: "1Gi" 6 min: cpu: "1" 7 memory: "1Gi" 8 1 The maximum size of an image that can be pushed to an internal registry. 2 The maximum number of unique image tags as defined in the specification for the image stream. 3 The maximum number of unique image references as defined in the specification for the image stream status. 4 The maximum amount of CPU that a pod can request on a node across all containers. 5 The maximum amount of memory that a pod can request on a node across all containers. 6 The maximum amount of ephemeral storage that a pod can request on a node across all containers. 7 The minimum amount of CPU that a pod can request on a node across all containers. See the Supported Constraints table for important information. 8 The minimum amount of memory that a pod can request on a node across all containers. If you do not set a min value or you set min to 0 , the result` is no limit and the pod can consume more than the max memory value. You can specify both core and OpenShift Container Platform resources in one limit range object. 7.3.1. Container limits Supported Resources: CPU Memory Supported Constraints Per container, the following must hold true if specified: Container Constraint Behavior Min Min[<resource>] less than or equal to container.resources.requests[<resource>] (required) less than or equal to container/resources.limits[<resource>] (optional) If the configuration defines a min CPU, the request value must be greater than the CPU value. If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more of the resource than the max value. Max container.resources.limits[<resource>] (required) less than or equal to Max[<resource>] If the configuration defines a max CPU, you do not need to define a CPU request value. However, you must set a limit that satisfies the maximum CPU constraint that is specified in the limit range. 
MaxLimitRequestRatio MaxLimitRequestRatio[<resource>] less than or equal to ( container.resources.limits[<resource>] / container.resources.requests[<resource>] ) If the limit range defines a maxLimitRequestRatio constraint, any new containers must have both a request and a limit value. Additionally, OpenShift Container Platform calculates a limit-to-request ratio by dividing the limit by the request . The result should be an integer greater than 1. For example, if a container has cpu: 500 in the limit value, and cpu: 100 in the request value, the limit-to-request ratio for cpu is 5 . This ratio must be less than or equal to the maxLimitRequestRatio . Supported Defaults: Default[<resource>] Defaults container.resources.limit[<resource>] to specified value if none. Default Requests[<resource>] Defaults container.resources.requests[<resource>] to specified value if none. 7.3.2. Pod limits Supported Resources: CPU Memory Supported Constraints: Across all containers in a pod, the following must hold true: Table 7.4. Pod Constraint Enforced Behavior Min Min[<resource>] less than or equal to container.resources.requests[<resource>] (required) less than or equal to container.resources.limits[<resource>] . If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more of the resource than the max value. Max container.resources.limits[<resource>] (required) less than or equal to Max[<resource>] . MaxLimitRequestRatio MaxLimitRequestRatio[<resource>] less than or equal to ( container.resources.limits[<resource>] / container.resources.requests[<resource>] ). 7.3.3. Image limits Supported Resources: Storage Resource type name: openshift.io/Image Per image, the following must hold true if specified: Table 7.5. Image Constraint Behavior Max image.dockerimagemetadata.size less than or equal to Max[<resource>] Note To prevent blobs that exceed the limit from being uploaded to the registry, the registry must be configured to enforce quota. The REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA environment variable must be set to true . By default, the environment variable is set to true for new deployments. 7.3.4. Image stream limits Supported Resources: openshift.io/image-tags openshift.io/images Resource type name: openshift.io/ImageStream Per image stream, the following must hold true if specified: Table 7.6. ImageStream Constraint Behavior Max[openshift.io/image-tags] length( uniqueimagetags( imagestream.spec.tags ) ) less than or equal to Max[openshift.io/image-tags] uniqueimagetags returns unique references to images of given spec tags. Max[openshift.io/images] length( uniqueimages( imagestream.status.tags ) ) less than or equal to Max[openshift.io/images] uniqueimages returns unique image names found in status tags. The name is equal to the digest for the image. 7.3.5. Counting of image references The openshift.io/image-tags resource represents unique stream limits. Possible references are an ImageStreamTag , an ImageStreamImage , or a DockerImage . Tags can be created by using the oc tag and oc import-image commands or by using image streams. No distinction is made between internal and external references. However, each unique reference that is tagged in an image stream specification is counted just once. It does not restrict pushes to an internal container image registry in any way, but is useful for tag restriction. The openshift.io/images resource represents unique image names that are recorded in image stream status. 
It helps to restrict the number of images that can be pushed to the internal registry. Internal and external references are not distinguished. 7.3.6. PersistentVolumeClaim limits Supported Resources: Storage Supported Constraints: Across all persistent volume claims in a project, the following must hold true: Table 7.7. PersistentVolumeClaim Constraint Enforced Behavior Min Min[<resource>] <= claim.spec.resources.requests[<resource>] (required) Max claim.spec.resources.requests[<resource>] (required) <= Max[<resource>] Limit Range Object Definition { "apiVersion": "v1", "kind": "LimitRange", "metadata": { "name": "pvcs" 1 }, "spec": { "limits": [{ "type": "PersistentVolumeClaim", "min": { "storage": "2Gi" 2 }, "max": { "storage": "50Gi" 3 } } ] } } 1 The name of the limit range object. 2 The minimum amount of storage that can be requested in a persistent volume claim. 3 The maximum amount of storage that can be requested in a persistent volume claim. Additional resources For information on stream limits, see managing image streams . For more information on how CPU and memory are measured, see Recommended control plane practices . You can specify limits and requests for ephemeral storage. For more information on this feature, see Understanding ephemeral storage . 7.4. Limit range operations 7.4.1. Creating a limit range Shown here is an example procedure to follow for creating a limit range. Procedure Create the object: USD oc create -f <limit_range_file> -n <project> 7.4.2. View the limit You can view any limit ranges that are defined in a project by navigating in the web console to the Quota page for the project. You can also use the CLI to view limit range details by performing the following steps: Procedure Get the list of limit range objects that are defined in the project. For example, a project called demoproject : USD oc get limits -n demoproject Example Output NAME AGE resource-limits 6d Describe the limit range. For example, for a limit range called resource-limits : USD oc describe limits resource-limits -n demoproject Example Output Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - 7.4.3. Deleting a limit range To remove a limit range, run the following command: USD oc delete limits <limit_name> Additional resources For information about enforcing different limits on the number of projects that your users can create, managing limits, and quota on project resources, see Resource quotas per project .
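As a quick illustration of how the constraints described in this chapter are applied, the following is a minimal sketch of a persistent volume claim evaluated against the example pvcs limit range shown above. The claim name, the demoproject namespace, and the gold storage class are placeholder values for illustration, not part of the original example.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim        # hypothetical claim name
  namespace: demoproject     # hypothetical project
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gold     # hypothetical storage class
  resources:
    requests:
      storage: 10Gi          # falls between the 2Gi minimum and the 50Gi maximum in the pvcs limit range
Because the 10Gi request is between the 2Gi minimum and the 50Gi maximum, the claim passes the limit range check; a claim that requests less than 2Gi or more than 50Gi is rejected. If the project also requires explicit quota for the gold storage class, as described at the start of this chapter, a covering ResourceQuota must exist in the project or the claim is denied.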
[ "oc create quota <name> --hard=count/<resource>.<group>=<quota> 1", "oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'", "openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu: 0 0", "cat gpu-quota.yaml", "apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1", "oc create -f gpu-quota.yaml", "resourcequota/gpu-quota created", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1", "oc create pod gpu-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: gpu-pod-s46h7 namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1", "oc get pods", "NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1", "oc create -f gpu-pod.yaml", "Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1", "apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5", "apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 requests.ephemeral-storage: 2Gi 4 limits.cpu: \"2\" 5 limits.memory: 2Gi 6 limits.ephemeral-storage: 4Gi 7", "apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 limits.ephemeral-storage: \"4Gi\" 4 scopes: - NotTerminating 5", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 limits.ephemeral-storage: \"1Gi\" 4 scopes: - Terminating 5", "apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7", "oc create -f <resource_quota_definition> [-n <project_name>]", "oc create -f core-object-counts.yaml -n demoproject", "oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota>", "oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 resourcequota \"test\" created oc describe quota test 
Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4", "oc get quota -n demoproject NAME AGE besteffort 11m compute-resources 2m core-object-counts 29m", "oc describe quota core-object-counts -n demoproject Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10", "kubernetesMasterConfig: apiLevels: - v1beta3 - v1 apiServerArguments: null controllerArguments: resource-quota-sync-period: - \"10s\"", "master-restart api master-restart controllers", "admissionConfig: pluginConfig: ResourceQuota: configuration: apiVersion: resourcequota.admission.k8s.io/v1alpha1 kind: Configuration limitedResources: - resource: persistentvolumeclaims 1 matchContains: - gold.storageclass.storage.k8s.io/requests.storage 2", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"core-resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 - type: \"Container\" max: cpu: \"2\" 6 memory: \"1Gi\" 7 min: cpu: \"100m\" 8 memory: \"4Mi\" 9 default: cpu: \"300m\" 10 memory: \"200Mi\" 11 defaultRequest: cpu: \"200m\" 12 memory: \"100Mi\" 13 maxLimitRequestRatio: cpu: \"10\" 14", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"openshift-resource-limits\" spec: limits: - type: openshift.io/Image max: storage: 1Gi 1 - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 - type: \"Pod\" max: cpu: \"2\" 4 memory: \"1Gi\" 5 ephemeral-storage: \"1Gi\" 6 min: cpu: \"1\" 7 memory: \"1Gi\" 8", "{ \"apiVersion\": \"v1\", \"kind\": \"LimitRange\", \"metadata\": { \"name\": \"pvcs\" 1 }, \"spec\": { \"limits\": [{ \"type\": \"PersistentVolumeClaim\", \"min\": { \"storage\": \"2Gi\" 2 }, \"max\": { \"storage\": \"50Gi\" 3 } } ] } }", "oc create -f <limit_range_file> -n <project>", "oc get limits -n demoproject", "NAME AGE resource-limits 6d", "oc describe limits resource-limits -n demoproject", "Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - -", "oc delete limits <limit_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/scalability_and_performance/compute-resource-quotas
Chapter 353. Twitter Direct Message Component
Chapter 353. Twitter Direct Message Component Available as of Camel version 2.10 The Twitter Direct Message Component consumes/produces a user's direct messages. 353.1. Component Options The Twitter Direct Message component supports 9 options, which are listed below. Name Description Default Type accessToken (security) The access token String accessTokenSecret (security) The access token secret String consumerKey (security) The consumer key String consumerSecret (security) The consumer secret String httpProxyHost (proxy) The http proxy host which can be used for the camel-twitter. String httpProxyUser (proxy) The http proxy user which can be used for the camel-twitter. String httpProxyPassword (proxy) The http proxy password which can be used for the camel-twitter. String httpProxyPort (proxy) The http proxy port which can be used for the camel-twitter. int resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean 353.2. Endpoint Options The Twitter Direct Message endpoint is configured using URI syntax: with the following path and query parameters: 353.2.1. Path Parameters (1 parameters): Name Description Default Type user Required The user name to send a direct message. This will be ignored for consumer. String 353.2.2. Query Parameters (42 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean type (consumer) Endpoint type to use. Only streaming supports event type. polling EndpointType distanceMetric (consumer) Used by the non-stream geography search, to search by radius using the configured metrics. The unit can either be mi for miles, or km for kilometers. You need to configure all the following options: longitude, latitude, radius, and distanceMetric. km String exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern extendedMode (consumer) Used for enabling full text from twitter (eg receive tweets that contains more than 140 characters). true boolean latitude (consumer) Used by the non-stream geography search to search by latitude. You need to configure all the following options: longitude, latitude, radius, and distanceMetric. Double locations (consumer) Bounding boxes, created by pairs of lat/lons. Can be used for streaming/filter. A pair is defined as lat,lon. And multiple paris can be separated by semi colon. String longitude (consumer) Used by the non-stream geography search to search by longitude. You need to configure all the following options: longitude, latitude, radius, and distanceMetric. 
Double pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy radius (consumer) Used by the non-stream geography search to search by radius. You need to configure all the following options: longitude, latitude, radius, and distanceMetric. Double twitterStream (consumer) To use a custom instance of TwitterStream TwitterStream synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean count (filter) Limiting number of results per page. 5 Integer filterOld (filter) Filter out old tweets, that has previously been polled. This state is stored in memory only, and based on last tweet id. true boolean lang (filter) The lang string ISO_639-1 which will be used for searching String numberOfPages (filter) The number of pages result which you want camel-twitter to consume. 1 Integer sinceId (filter) The last tweet id which will be used for pulling the tweets. It is useful when the camel route is restarted after a long running. 1 long userIds (filter) To filter by user ids for streaming/filter. Multiple values can be separated by comma. String backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 30000 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean sortById (sort) Sorts by id, so the oldest are first, and newest last. 
true boolean httpProxyHost (proxy) The http proxy host which can be used for the camel-twitter. Can also be configured on the TwitterComponent level instead. String httpProxyPassword (proxy) The http proxy password which can be used for the camel-twitter. Can also be configured on the TwitterComponent level instead. String httpProxyPort (proxy) The http proxy port which can be used for the camel-twitter. Can also be configured on the TwitterComponent level instead. Integer httpProxyUser (proxy) The http proxy user which can be used for the camel-twitter. Can also be configured on the TwitterComponent level instead. String accessToken (security) The access token. Can also be configured on the TwitterComponent level instead. String accessTokenSecret (security) The access secret. Can also be configured on the TwitterComponent level instead. String consumerKey (security) The consumer key. Can also be configured on the TwitterComponent level instead. String consumerSecret (security) The consumer secret. Can also be configured on the TwitterComponent level instead. String 353.3. Spring Boot Auto-Configuration The component supports 10 options, which are listed below. Name Description Default Type camel.component.twitter-directmessage.access-token The access token String camel.component.twitter-directmessage.access-token-secret The access token secret String camel.component.twitter-directmessage.consumer-key The consumer key String camel.component.twitter-directmessage.consumer-secret The consumer secret String camel.component.twitter-directmessage.enabled Whether to enable auto configuration of the twitter-directmessage component. This is enabled by default. Boolean camel.component.twitter-directmessage.http-proxy-host The http proxy host which can be used for the camel-twitter. String camel.component.twitter-directmessage.http-proxy-password The http proxy password which can be used for the camel-twitter. String camel.component.twitter-directmessage.http-proxy-port The http proxy port which can be used for the camel-twitter. Integer camel.component.twitter-directmessage.http-proxy-user The http proxy user which can be used for the camel-twitter. String camel.component.twitter-directmessage.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean
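To show how these options are combined in practice, the following is a minimal sketch of a Java DSL route builder that polls the authenticated user's direct messages and sends one. The {{twitter.*}} property placeholders and the target user bob are assumptions for illustration, not values defined by the component.
import org.apache.camel.builder.RouteBuilder;

public class TwitterDirectMessageRoutes extends RouteBuilder {

    // Assumes the four Twitter credentials are resolved from Camel property placeholders.
    private static final String CREDENTIALS =
        "consumerKey={{twitter.consumerKey}}&consumerSecret={{twitter.consumerSecret}}"
        + "&accessToken={{twitter.accessToken}}&accessTokenSecret={{twitter.accessTokenSecret}}";

    @Override
    public void configure() {
        // Poll the authenticated user's direct messages every 30 seconds.
        // The user path parameter is ignored on the consumer side.
        from("twitter-directmessage:ignored?type=polling&delay=30000&" + CREDENTIALS)
            .log("Received direct message: ${body}");

        // Send a single direct message to the hypothetical user bob.
        from("timer:sendOnce?repeatCount=1")
            .setBody(constant("Hello from Camel"))
            .to("twitter-directmessage:bob?" + CREDENTIALS);
    }
}
The same credential options can instead be configured once on the TwitterComponent level, which keeps the endpoint URIs shorter.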
[ "twitter-directmessage:user" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/twitter-directmessage-component
Chapter 10. Interoperability
Chapter 10. Interoperability This chapter discusses how to use AMQ .NET in combination with other AMQ components. For an overview of the compatibility of AMQ components, see the product introduction . 10.1. Interoperating with other AMQP clients AMQP messages are composed using the AMQP type system . This common format is one of the reasons AMQP clients in different languages are able to interoperate with each other. When sending messages, AMQ .NET automatically converts language-native types to AMQP-encoded data. When receiving messages, the reverse conversion takes place. Note More information about AMQP types is available at the interactive type reference maintained by the Apache Qpid project. Table 10.1. AMQP types AMQP type Description null An empty value boolean A true or false value char A single Unicode character string A sequence of Unicode characters binary A sequence of bytes byte A signed 8-bit integer short A signed 16-bit integer int A signed 32-bit integer long A signed 64-bit integer ubyte An unsigned 8-bit integer ushort An unsigned 16-bit integer uint An unsigned 32-bit integer ulong An unsigned 64-bit integer float A 32-bit floating point number double A 64-bit floating point number array A sequence of values of a single type list A sequence of values of variable type map A mapping from distinct keys to values uuid A universally unique identifier symbol A 7-bit ASCII string from a constrained domain timestamp An absolute point in time Table 10.2. AMQ .NET types before encoding and after decoding AMQP type AMQ .NET type before encoding AMQ .NET type after decoding null null null boolean System.Boolean System.Boolean char System.Char System.Char string System.String System.String binary System.Byte[] System.Byte[] byte System.SByte System.SByte short System.Int16 System.Int16 int System.Int32 System.Int32 long System.Int64 System.Int64 ubyte System.Byte System.Byte ushort System.UInt16 System.UInt16 uint System.UInt32 System.UInt32 ulong System.UInt64 System.UInt64 float System.Single System.Single double System.Double System.Double list Amqp.List Amqp.List map Amqp.Map Amqp.Map uuid System.Guid System.Guid symbol Amqp.Symbol Amqp.Symbol timestamp System.DateTime System.DateTime Table 10.3. AMQ .NET and other AMQ client types (1 of 2) AMQ .NET type before encoding AMQ C++ type AMQ JavaScript type null nullptr null System.Boolean bool boolean System.Char wchar_t number System.String std::string string System.Byte[] proton::binary string System.SByte int8_t number System.Int16 int16_t number System.Int32 int32_t number System.Int64 int64_t number System.Byte uint8_t number System.UInt16 uint16_t number System.UInt32 uint32_t number System.UInt64 uint64_t number System.Single float number System.Double double number Amqp.List std::vector Array Amqp.Map std::map object System.Guid proton::uuid number Amqp.Symbol proton::symbol string System.DateTime proton::timestamp number Table 10.4. 
AMQ .NET and other AMQ client types (2 of 2) AMQ .NET type before encoding AMQ Python type AMQ Ruby type null None nil System.Boolean bool true, false System.Char unicode String System.String unicode String System.Byte[] bytes String System.SByte int Integer System.Int16 int Integer System.Int32 long Integer System.Int64 long Integer System.Byte long Integer System.UInt16 long Integer System.UInt32 long Integer System.UInt64 long Integer System.Single float Float System.Double float Float Amqp.List list Array Amqp.Map dict Hash System.Guid - - Amqp.Symbol str Symbol System.DateTime long Time 10.2. Interoperating with AMQ JMS AMQP defines a standard mapping to the JMS messaging model. This section discusses the various aspects of that mapping. For more information, see the AMQ JMS Interoperability chapter. JMS message types AMQ .NET provides a single message type whose body type can vary. By contrast, the JMS API uses different message types to represent different kinds of data. The table below indicates how particular body types map to JMS message types. For more explicit control of the resulting JMS message type, you can set the x-opt-jms-msg-type message annotation. See the AMQ JMS Interoperability chapter for more information. Table 10.5. AMQ .NET and JMS message types AMQ .NET body type JMS message type System.String TextMessage null TextMessage System.Byte[] BytesMessage Any other type ObjectMessage 10.3. Connecting to AMQ Broker AMQ Broker is designed to interoperate with AMQP 1.0 clients. Check the following to ensure the broker is configured for AMQP messaging: Port 5672 in the network firewall is open. The AMQ Broker AMQP acceptor is enabled. See Default acceptor settings . The necessary addresses are configured on the broker. See Addresses, Queues, and Topics . The broker is configured to permit access from your client, and the client is configured to send the required credentials. See Broker Security . 10.4. Connecting to AMQ Interconnect AMQ Interconnect works with any AMQP 1.0 client. Check the following to ensure the components are configured correctly: Port 5672 in the network firewall is open. The router is configured to permit access from your client, and the client is configured to send the required credentials. See Securing network connections .
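To make the type mappings concrete, the following is a minimal sketch of an AMQ .NET sender whose message body is an AMQP map. The broker address 127.0.0.1:5672 and the examples target address are assumptions about the environment, not values required by the client.
using System;
using Amqp;
using Amqp.Types;

class InteropSend
{
    static void Main()
    {
        // Assumed broker endpoint and target address; adjust to your environment.
        Connection connection = new Connection(new Address("amqp://127.0.0.1:5672"));
        Session session = new Session(connection);
        SenderLink sender = new SenderLink(session, "interop-sender", "examples");

        // An AMQP map body: per the tables above, a JMS consumer receives this as an
        // ObjectMessage, an AMQ Python consumer as a dict, and an AMQ Ruby consumer as a Hash.
        Map body = new Map
        {
            ["flag"] = true,                   // encoded as an AMQP boolean
            ["count"] = 42L,                   // encoded as an AMQP long
            ["label"] = new Symbol("example"), // encoded as an AMQP symbol
            ["created"] = DateTime.UtcNow      // encoded as an AMQP timestamp
        };

        sender.Send(new Message(body));

        sender.Close();
        session.Close();
        connection.Close();
    }
}
On the receiving side, any AMQP 1.0 client decodes the same map into the language-native types listed in the tables above.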
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_.net_client/interoperability
Chapter 1. APIs
Chapter 1. APIs You can access APIs to create and manage application resources, channels, subscriptions, and to query information. User required access: You can only perform actions that your role is assigned. Learn about access requirements from the Role-based access control documentation. You can also access all APIs from the integrated console. From the local-cluster view, navigate to Home > API Explorer to explore API groups. For more information, review the API documentation for each of the following resources: Clusters API ClusterInstance API ClusterSets API (v1beta2) ClusterSetBindings API (v1beta2) Channels API Subscriptions API PlacementRules API (deprecated) Applications API Helm API Policy API Observability API Search query API MultiClusterHub API Placements API (v1beta1) PlacementDecisions API (v1beta1) DiscoveryConfig API DiscoveredCluster API AddOnDeploymentConfig API (v1alpha1) ClusterManagementAddOn API (v1alpha1) ManagedClusterAddOn API (v1alpha1) ManagedClusterSet API KlusterletConfig API (v1alpha1) Policy compliance API (Technology Preview) 1.1. Clusters API 1.1.1. Overview This documentation is for the cluster resource for Red Hat Advanced Cluster Management for Kubernetes. Cluster resource has four possible requests: create, query, delete and update. ManagedCluster represents the desired state and current status of a managed cluster. ManagedCluster is a cluster-scoped resource. 1.1.1.1. Version information Version : 2.12.0 1.1.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.1.1.3. Tags cluster.open-cluster-management.io : Create and manage clusters 1.1.2. Paths 1.1.2.1. Query all clusters 1.1.2.1.1. Description Query your clusters for more details. 1.1.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.1.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.1.2.1.4. Consumes cluster/yaml 1.1.2.1.5. Tags cluster.open-cluster-management.io 1.1.2.2. Create a cluster 1.1.2.2.1. Description Create a cluster 1.1.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the cluster to be created. Cluster 1.1.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.1.2.2.4. Consumes cluster/yaml 1.1.2.2.5. Tags cluster.open-cluster-management.io 1.1.2.2.6. Example HTTP request 1.1.2.2.6.1. Request body { "apiVersion" : "cluster.open-cluster-management.io/v1", "kind" : "ManagedCluster", "metadata" : { "labels" : { "vendor" : "OpenShift" }, "name" : "cluster1" }, "spec": { "hubAcceptsClient": true, "managedClusterClientConfigs": [ { "caBundle": "test", "url": "https://test.com" } ] }, "status" : { } } 1.1.2.3. Query a single cluster 1.1.2.3.1. Description Query a single cluster for more details. 1.1.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path cluster_name required Name of the cluster that you want to query. string 1.1.2.3.3. 
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.1.2.3.4. Tags cluster.open-cluster-management.io 1.1.2.4. Delete a cluster 1.1.2.4.1. Description Delete a single cluster 1.1.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path cluster_name required Name of the cluster that you want to delete. string 1.1.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.1.2.4.4. Tags cluster.open-cluster-management.io 1.1.3. Definitions 1.1.3.1. Cluster Name Description Schema apiVersion required The versioned schema of the ManagedCluster . string kind required String value that represents the REST resource. string metadata required The metadata of the ManagedCluster . object spec required The specification of the ManagedCluster . spec spec Name Description Schema hubAcceptsClient required Specifies whether the hub can establish a connection with the klusterlet agent on the managed cluster. The default value is false , and can only be changed to true when you have an RBAC rule configured on the hub cluster that allows you to make updates to the virtual subresource of managedclusters/accept . bool managedClusterClientConfigs optional Lists the apiserver addresses of the managed cluster. managedClusterClientConfigs array leaseDurationSeconds optional Specifies the lease update time interval of the klusterlet agents on the managed cluster. By default, the klusterlet agent updates its lease every 60 seconds. integer (int32) taints optional Prevents a managed cluster from being assigned to one or more managed cluster sets during scheduling. taint array managedClusterClientConfigs Name Description Schema URL required string CABundle optional Pattern : "^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?USD" string (byte) taint Name Description Schema key required The taint key that is applied to a cluster. string value optional The taint value that corresponds to the taint key. string effect optional Effect of the taint on placements that do not tolerate the taint. Valid values are NoSelect , PreferNoSelect , and NoSelectIfNew . string 1.2. ClusterInstance API 1.2.1. Overview This documentation is for the ClusterInstance resource for Red Hat Advanced Cluster Management for Kubernetes. The ClusterInstance resource has four possible requests: create, query, delete and update. 1.2.1.1. Version information Version : 2.12.0 1.2.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.2.1.3. Tags siteconfig.open-cluster-management.io : Create and manage clusters 1.2.2. Paths 1.2.2.1. Query all clusters 1.2.2.1.1. Description Query your clusters for more details. 1.2.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterinstance_namespace required The namespace of the ClusterInstance that you want to query. string Path clusterinstance_name required The name of the ClusterInstance that you want to query. string 1.2.2.1.3. 
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.2.2.1.4. Consumes clusterinstance/json 1.2.2.1.5. Tags siteconfig.open-cluster-management.io 1.2.2.2. Create installation manifests 1.2.2.2.1. Description Create installation manifests with the SiteConfig operator for your choice of installation method. 1.2.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the installation manifests to be created. ClusterInstance Path clusterinstance_namespace required The namespace of the ClusterInstance that you want to use. string Path clusterinstance_name required The name of the ClusterInstance that you want to use. string 1.2.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.2.2.2.4. Consumes clusterinstance/json 1.2.2.2.5. Tags siteconfig.open-cluster-management.io 1.2.2.2.6. Example HTTP request 1.2.2.2.6.1. Request body { "apiVersion": "siteconfig.open-cluster-management.io/v1alpha1", "kind": "ClusterInstance", "metadata": { "name": "site-sno-du-1", "namespace": "site-sno-du-1" }, "spec": { "baseDomain": "example.com", "pullSecretRef": { "name": "pullSecretName" }, "sshPublicKey": "ssh-rsa ", "clusterName": "site-sno-du-1", "proxy": { "noProxys": "foobar" }, "caBundleRef": { "name": "my-bundle-ref" }, "extraManifestsRefs": [ { "name": "foobar1" }, { "name": "foobar2" } ], "networkType": "OVNKubernetes", "installConfigOverrides": "{\"capabilities\":{\"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"marketplace\", \"NodeTuning\" ] }}", "extraLabels": { "ManagedCluster": { "group-du-sno": "test", "common": "true", "sites": "site-sno-du-1" } }, "clusterNetwork": [ { "cidr": "203.0.113.0/24", "hostPrefix": 23 } ], "machineNetwork": [ { "cidr": "203.0.113.0/24" } ], "serviceNetwork": [ { "cidr": "203.0.113.0/24" } ], "additionalNTPSources": [ "NTP.server1", "198.51.100.100" ], "ignitionConfigOverride": "{\"ignition\": {\"version\": \"3.1.0\"}, \"storage\": {\"files\": [{\"path\": \"/etc/containers/registries.conf\", \"overwrite\": true, \"contents\": {\"source\": \"data:text/plain;base64,foobar==\"}}]}}", "diskEncryption": { "type": "nbde", "tang": [ { "url": "http://192.0.2.5:7500", "thumbprint": "1234567890" } ] }, "clusterType": "SNO", "templateRefs": [ { "name": "ai-cluster-templates-v1", "namespace": "rhacm" } ], "nodes": [ { "hostName": "node1", "role": "master", "templateRefs": [ { "name": "ai-node-templates-v1", "namespace": "rhacm" } ], "ironicInspect": "", "bmcAddress": "idrac-virtualmedia+https://203.0.113.100/redfish/v1/Systems/System.Embedded.1", "bmcCredentialsName": { "name": "<bmcCredentials_secre_name>" }, "bootMACAddress": "00:00:5E:00:53:00", "bootMode": "UEFI", "installerArgs": "[\"--append-karg\", \"nameserver=8.8.8.8\", \"-n\"]", "ignitionConfigOverride": "{\"ignition\": {\"version\": \"3.1.0\"}, \"storage\": {\"files\": [{\"path\": \"/etc/containers/registries.conf\", \"overwrite\": true, \"contents\": {\"source\": \"data:text/plain;base64,foobar==\"}}]}}", "nodeNetwork": { "interfaces": [ { "name": "eno1", "macAddress": "00:00:5E:00:53:01" } ], "config": { "interfaces": [ { "name": "eno1", "type": 
"ethernet", "ipv4": { "enabled": true, "dhcp": false, "address": [ { "ip": "192.0.2.1", "prefix-length": 24 } ] }, "ipv6": { "enabled": true, "dhcp": false, "address": [ { "ip": "2001:0DB8:0:0:0:0:0:1", "prefix-length": 32 } ] } } ], "dns-resolver": { "config": { "server": [ "198.51.100.1" ] } }, "routes": { "config": [ { "destination": "0.0.0.0/0", "-hop-address": "203.0.113.255", "-hop-interface": "eno1", "table-id": 254 } ] } } } } ] } } 1.2.2.3. Query a single cluster 1.2.2.3.1. Description Query a single cluster for more details. 1.2.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterinstance_namespace required The namespace of the ClusterInstance that you want to query. string Path clusterinstance_name required The name of the ClusterInstance that you want to query. string 1.2.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.2.2.3.4. Tags siteconfig.open-cluster-management.io 1.2.3. Definitions 1.2.3.1. ClusterInstance Important: Certain fields are only relevant to a specific installation flow. Verify which fields are applicable to your choice of installation method by checking the relevant documentation. For Assisted Installer and Image Based Install, see the following documentation: Installing an on-premise cluster using the Assisted Installer Image Based Install Operator Name Description Schema apiVersion required The versioned schema of the ClusterInstance . string kind required String value that represents the REST resource. string metadata required The metadata of the ClusterInstance . object spec required The specification of the ClusterInstance . spec status required The status of the ClusterInstance . status object spec Name Description Schema additionalNTPSources optional Specify the NTP sources that needs to be added to all cluster hosts. They are added to any NTP sources that were configured through other means. array baseDomain required Specify the base domain used for the deployed cluster. string caBundleRef optional Reference the ConfigMap object that contains the new bundle of trusted certificates for the host. The tls-ca-bundle.pem entry in the ConfigMap object is written to /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem . string clusterImageSetNameRef required Specify the name of the ClusterImageSet object that indicates the OpenShift Container Platform version to deploy. string clusterName required Specify the name of the cluster. string clusterNetwork optional Specify the list of the IP address pools for pods. array clusterType optional Specify the cluster type. The following values are supported: SNO single-node OpenShift HighlyAvailable Multi-node OpenShift string cpuPartitioningMode optional Determine clusters to be set up for CPU workload partitioning at install time. Configure workload partitioning by setting the value for cpuPartitioningMode to AllNodes . To complete the configuration, specify the isolated and reserved CPUs in the PerformanceProfile CR. The default value is None . string diskEncryption optional Enable or disable disk encryption for the cluster. 
object extraAnnotations optional Specify additional cluster-level annotations to be applied to the rendered templates by using the following format: extraAnnotations: ClusterDeployment: myClusterAnnotation: success object extraLabels optional Specify additional cluster-level labels to be applied to the rendered templates by using the following format: extraLabels: ManagedCluster: common: "true" label-a : "value-a" object extraManifestsRefs optional Specify the list of the ConfigMap object references that contain additional manifests to be applied to the cluster. array holdInstallation optional Set to true to prevent installation when using the Assisted Installer. You can complete the inspection and validation steps, but after the RequirementsMet condition becomes true , the installation does not begin until the holdInstallation field is set to false . bool ignitionConfigOverride optional Specify the user overrides for the initial Ignition configuration. string installConfigOverrides optional Define install configuration parameters. string machineNetwork optional Specify the list of the IP address pools for machines. array networkType optional Specify the Container Network Interface (CNI) plugin to install. The default value is OpenShiftSDN for IPv4, and OVNKubernetes for IPv6 or single-node OpenShift clusters. string platformType optional Define the name of the specific platform on which you want to install. The following values are supported: BareMetal VSphere Nutanix External "" None string proxy optional Define the proxy settings used for the install configuration of the cluster. object pruneManifests optional Define a list of cluster-level manifests to remove by specifying their apiVersion and Kind values. array pullSecretRef required Configure the pull-secret file for pulling images. When creating the pull-secret file, use the same namespace as the ClusterInstance CR that provisions the host. object serviceNetwork optional Specify the list of the IP address pools for services. array sshPublicKey optional Specify the public Secure Shell (SSH) key to provide access to instances. This key is added to the host to allow SSH access. string nodes required Specify the configuration parameters for each node. nodes array templateRefs required Specify the list of the references to cluster-level templates. A cluster-level template consists of a ConfigMap object, in which the keys of the data field represent the kind of the installation manifests. Cluster-level templates are instantiated once per cluster in the ClusterInstance CR. array nodes Name Description Schema automatedCleaningMode optional Set the value to metadata to enable the removal of the disk's partitioning table only, without fully wiping the disk. The default value is disabled . string bmcAddress required BMC address that you use to access the host. Applies to all cluster types. For more information about BMC addressing, see BMC addressing in Additional resources. Note: In far edge Telco use cases, only virtual media is supported for use with GitOps ZTP. string bmcCredentialsName required Configure the bmh-secret CR that you separately create with the host BMC credentials. When creating the bmh-secret CR, use the same namespace as the ClusterInstance CR that provisions the host. string bootMACAddress required Specify the MAC address that PXE boots. It is required for libvirt VMs driven by virtual BMC. string bootMode optional Set the boot mode for the host to UEFI . The default value is UEFI . 
Use UEFISecureBoot to enable secure boot on the host. The following values are supported: UEFI UEFISecureBoot legacy string extraAnnotations optional Specify additional node-level annotations to be applied to the rendered templates by using the following format: extraAnnotations: BareMetalHost: myNodeAnnotation: success object extraLabels optional Specify additional node-level labels to be applied to the rendered templates. extraLabels: ManagedCluster: common: "true" label-a : "value-a" object hostName required Define the host name. string installerArgs optional Specify the user overrides for the host's :op-system-first: installer arguments. string ignitionConfigOverride optional Specify the user overrides for the initial Ignition configuration. Use this field to assign partitions for persistent storage. Adjust disk ID and size to the specific hardware. string ironicInspect optional Specify if automatic introspection runs during registration of the bare metal host. string nodeLabels optional Specify custom node labels for your nodes in your managed clusters. These are additional labels that are not used by any Red Hat Advanced Cluster Management components, only by the user. When you add a custom node label, it can be associated with a custom machine config pool that references a specific configuration for that label. Adding custom node labels during installation makes the deployment process more effective and prevents the need for additional reboots after the installation is complete. Note: When used in the BareMetalHost template, the custom labels are appended to the BareMetalHost annotations with the bmac.agent-install.openshift.io prefix. object nodeNetwork optional Configure the network settings for nodes that have static networks. object pruneManifests optional Define a list of node-level manifests to remove by specifying their apiVersion and Kind values. array role optional Configure the role of the node, such as master or worker . string rootDeviceHints optional Specify the device for deployment. Identifiers that are stable across reboots are recommended. For example, wwn: <disk_wwn> or deviceName: /dev/disk/by-path/<device_path> . <by-path> values are preferred. For a detailed list of stable identifiers, see "About root device hints". You can also specify the name, model, size, or vendor of the device. object templateRefs required Specify the list of the references to node-level templates. A node-level template consists of a ConfigMap object, in which the keys of the data field represent the kind of the installation manifests. Node-level templates are instantiated once for each node in the ClusterInstance CR. array status Name Description Schema conditions optional Lists the conditions that pertain to actions performed on the ClusterInstance resource. conditions array deploymentConditions optional Lists the hive status conditions that are associated with the ClusterDeployment resource. deploymentConditions array manifestsRendered optional Lists the manifests that have been rendered and their statuses. array observedGeneration optional Tracks the observed generation of the ClusterInstance object. integer conditions Type Description ClusterInstanceValidated Indicates that the SiteConfig operator validated the ClusterInstance spec fields and verified that the required artifacts, such as secrets and extra manifest ConfigMaps objects are present. RenderedTemplates Indicates that SiteConfig operator successfully validated the referenced Golang cluster templates. 
RenderedTemplatesValidated Indicates that the SiteConfig operator rendered the installation manifests and the dry run was successful. RenderedTemplatesApplied Indicates that the SiteConfig operator created the installation manifests and the underlying Operators consumed them. Provisioned Indicates that the underlying Operators are provisioning the clusters. deploymentConditions Type Description ClusterInstallRequirementsMet Indicates that the installation can start. ClusterInstallCompleted Indicates that the cluster installation was successful. ClusterInstallFailed Indicates that the cluster installation failed. ClusterInstallStopped Indicates that the cluster installation stopped. Additional resources BMC addressing 1.3. Clustersets API (v1beta2) 1.3.1. Overview This documentation is for the ClusterSet resource for Red Hat Advanced Cluster Management for Kubernetes. The ClusterSet resource has four possible requests: create, query, delete, and update. The ManagedClusterSet defines a group of ManagedClusters. You can assign a ManagedCluster to a specific ManagedClusterSet by adding a label with the name cluster.open-cluster-management.io/clusterset on the ManagedCluster that refers to the ManagedClusterSet. You can only add or remove this label on a ManagedCluster when you have an RBAC rule that allows the create permissions on a virtual subresource of managedclustersets/join . You must have this permission on both the source and the target ManagedClusterSets to update this label. 1.3.1.1. Version information Version : 2.12.0 1.3.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.3.1.3. Tags cluster.open-cluster-management.io : Create and manage Clustersets 1.3.2. Paths 1.3.2.1. Query all clustersets 1.3.2.1.1. Description Query your Clustersets for more details. 1.3.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.3.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.3.2.1.4. Consumes clusterset/yaml 1.3.2.1.5. Tags cluster.open-cluster-management.io 1.3.2.2. Create a clusterset 1.3.2.2.1. Description Create a Clusterset. 1.3.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the clusterset to be created. Clusterset 1.3.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.3.2.2.4. Consumes clusterset/yaml 1.3.2.2.5. Tags cluster.open-cluster-management.io 1.3.2.2.6. Example HTTP request 1.3.2.2.6.1. Request body { "apiVersion": "cluster.open-cluster-management.io/v1beta2", "kind": "ManagedClusterSet", "metadata": { "name": "clusterset1" }, "spec": { "clusterSelector": { "selectorType": "ExclusiveClusterSetLabel" } }, "status": {} } 1.3.2.3. Query a single clusterset 1.3.2.3.1. Description Query a single clusterset for more details. 1.3.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterset_name required Name of the clusterset that you want to query. string 1.3.2.3.3. 
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.3.2.3.4. Tags cluster.open-cluster-management.io 1.3.2.4. Delete a clusterset 1.3.2.4.1. Description Delete a single clusterset. 1.3.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterset_name required Name of the clusterset that you want to delete. string 1.3.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.3.2.4.4. Tags cluster.open-cluster-management.io 1.3.3. Definitions 1.3.3.1. Clusterset Name Schema apiVersion required string kind required string metadata required object 1.4. Clustersetbindings API (v1beta2) 1.4.1. Overview This documentation is for the ClusterSetBinding resource for Red Hat Advanced Cluster Management for Kubernetes. The ClusterSetBinding resource has four possible requests: create, query, delete, and update. ManagedClusterSetBinding projects a ManagedClusterSet into a certain namespace. You can create a ManagedClusterSetBinding in a namespace and bind it to a ManagedClusterSet if you have an RBAC rule that allows you to create on the virtual subresource of managedclustersets/bind . 1.4.1.1. Version information Version : 2.12.0 1.4.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.4.1.3. Tags cluster.open-cluster-management.io : Create and manage clustersetbindings 1.4.2. Paths 1.4.2.1. Query all clustersetbindings 1.4.2.1.1. Description Query your clustersetbindings for more details. 1.4.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.4.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.4.2.1.4. Consumes clustersetbinding/yaml 1.4.2.1.5. Tags cluster.open-cluster-management.io 1.4.2.2. Create a clustersetbinding 1.4.2.2.1. Description Create a clustersetbinding. 1.4.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default . string Body body required Parameters describing the clustersetbinding to be created. Clustersetbinding 1.4.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.4.2.2.4. Consumes clustersetbinding/yaml 1.4.2.2.5. Tags cluster.open-cluster-management.io 1.4.2.2.6. Example HTTP request 1.4.2.2.6.1. Request body { "apiVersion" : "cluster.open-cluster-management.io/v1beta2", "kind" : "ManagedClusterSetBinding", "metadata" : { "name" : "clusterset1", "namespace" : "ns1" }, "spec": { "clusterSet": "clusterset1" }, "status" : { } } 1.4.2.3. Query a single clustersetbinding 1.4.2.3.1. Description Query a single clustersetbinding for more details. 1.4.2.3.2. 
Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default . string Path clustersetbinding_name required Name of the clustersetbinding that you want to query. string 1.4.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.4.2.3.4. Tags cluster.open-cluster-management.io 1.4.2.4. Delete a clustersetbinding 1.4.2.4.1. Description Delete a single clustersetbinding. 1.4.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default . string Path clustersetbinding_name required Name of the clustersetbinding that you want to delete. string 1.4.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.4.2.4.4. Tags cluster.open-cluster-management.io 1.4.3. Definitions 1.4.3.1. Clustersetbinding Name Description Schema apiVersion required Versioned schema of the ManagedClusterSetBinding . string kind required String value that represents the REST resource. string metadata required Metadata of the ManagedClusterSetBinding . object spec required Specification of the ManagedClusterSetBinding . spec spec Name Description Schema clusterSet required Name of the ManagedClusterSet to bind. It must match the instance name of the ManagedClusterSetBinding and cannot change after it is created. string 1.5. Clusterview API (v1alpha1) 1.5.1. Overview This documentation is for the clusterview resource for Red Hat Advanced Cluster Management for Kubernetes. The clusterview resource provides a CLI command that enables you to view a list of the managed clusters and managed cluster sets that that you can access. The three possible requests are: list, get, and watch. 1.5.1.1. Version information Version : 2.12.0 1.5.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.5.1.3. Tags clusterview.open-cluster-management.io : View a list of managed clusters that your ID can access. 1.5.2. Paths 1.5.2.1. Get managed clusters 1.5.2.1.1. Description View a list of the managed clusters that you can access. 1.5.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.5.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.5.2.1.4. Consumes managedcluster/yaml 1.5.2.1.5. Tags clusterview.open-cluster-management.io 1.5.2.2. List managed clusters 1.5.2.2.1. Description View a list of the managed clusters that you can access. 1.5.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body optional Name of the user ID for which you want to list the managed clusters. string 1.5.2.2.3. 
1.5.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.5.2.2.4. Consumes managedcluster/yaml 1.5.2.2.5. Tags clusterview.open-cluster-management.io 1.5.2.2.6. Example HTTP request 1.5.2.2.6.1. Request body { "apiVersion" : "clusterview.open-cluster-management.io/v1alpha1", "kind" : "ClusterView", "metadata" : { "name" : "<user_ID>" }, "spec": { }, "status" : { } } 1.5.2.3. Watch the managed cluster sets 1.5.2.3.1. Description Watch the managed cluster sets that you can access. 1.5.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterview_name optional Name of the user ID that you want to watch. string 1.5.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.5.2.4. List the managed cluster sets 1.5.2.4.1. Description List the managed cluster sets that you can access. 1.5.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clusterview_name optional Name of the user ID that you want to watch. string 1.5.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.6. Channels API 1.6.1. Overview This documentation is for the Channel resource for Red Hat Advanced Cluster Management for Kubernetes. The Channel resource has four possible requests: create, query, delete and update. 1.6.1.1. Version information Version : 2.12.0 1.6.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.6.1.3. Tags channels.apps.open-cluster-management.io : Create and manage channels 1.6.2. Paths 1.6.2.1. Create a channel 1.6.2.1.1. Description Create a channel.
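As an illustration, a channel can be created by sending a POST request to this path. The following curl sketch assumes the placeholder <cluster-host> for the hub cluster API endpoint and a local file channel.json that holds a Channel manifest such as the sample request body shown later in this section; it is a starting point, not a definitive invocation.
# channel.json is assumed to contain a Channel manifest like the example request body in this section.
curl -k -X POST \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  --data @channel.json \
  https://<cluster-host>/kubernetes/apis/apps.open-cluster-management.io/v1/namespaces/default/channels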
1.6.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the channel to be created. Channel 1.6.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.6.2.1.4. Consumes application/yaml 1.6.2.1.5. Tags channels.apps.open-cluster-management.io 1.6.2.1.6. Example HTTP request 1.6.2.1.6.1. Request body { "apiVersion": "apps.open-cluster-management.io/v1", "kind": "Channel", "metadata": { "name": "sample-channel", "namespace": "default" }, "spec": { "configMapRef": { "kind": "configmap", "name": "bookinfo-resource-filter-configmap" }, "pathname": "https://charts.helm.sh/stable", "type": "HelmRepo" } } 1.6.2.2. Query all channels for the target namespace 1.6.2.2.1. Description Query your channels for more details. 1.6.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.6.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.6.2.2.4. Consumes application/yaml 1.6.2.2.5. Tags channels.apps.open-cluster-management.io 1.6.2.3. Query a single channel of a namespace 1.6.2.3.1. Description Query a single channel for more details. 1.6.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path channel_name required Name of the channel that you want to query. string Path namespace required Namespace that you want to use, for example, default. string 1.6.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.6.2.3.4. Tags channels.apps.open-cluster-management.io 1.6.2.4. Delete a Channel 1.6.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path channel_name required Name of the Channel that you want to delete. string Path namespace required Namespace that you want to use, for example, default. string 1.6.2.4.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.6.2.4.3. Tags channels.apps.open-cluster-management.io 1.6.3. Definitions 1.6.3.1. Channel Name Schema apiVersion required string kind required string metadata required object spec required spec spec Name Description Schema configMapRef optional ObjectReference contains enough information to let you inspect or modify the referred object. configMapRef gates optional ChannelGate defines criteria for promote to channel gates pathname required string secretRef optional ObjectReference contains enough information to let you inspect or modify the referred object. secretRef sourceNamespaces optional enum (Namespace, HelmRepo, ObjectBucket, Git, namespace, helmrepo, objectbucket, github) array configMapRef Name Description Schema apiVersion optional API version of the referent.
string fieldPath optional If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. string kind optional Kind of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/ name optional Name of the referent. More info: Names string namespace optional Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ string resourceVersion optional Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency string uid optional gates Name Description Schema annotations optional typical annotations of k8s annotations labelSelector optional A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. labelSelector name optional string annotations Name Schema key optional string value optional string labelSelector Name Description Schema matchExpressions optional matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions array matchLabels optional matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. string, string map matchExpressions Name Description Schema key required key is the label key that the selector applies to. string operator required operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. string values optional values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. string array secretRef Name Description Schema apiVersion optional API version of the referent. string fieldPath optional If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. string kind optional Kind of the referent. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds string name optional Name of the referent. More info: Names string namespace optional Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ string resourceVersion optional Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency string uid optional UID of the referent. More info: UIDs string 1.7. Subscriptions API 1.7.1. Overview This documentation is for the Subscription resource for Red Hat Advanced Cluster Management for Kubernetes. The Subscription resource has four possible requests: create, query, delete and update. Deprecated: PlacementRule 1.7.1.1. Version information Version : 2.12.0 1.7.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.7.1.3. Tags subscriptions.apps.open-cluster-management.io : Create and manage subscriptions 1.7.2. Paths 1.7.2.1. Create a subscription 1.7.2.1.1. Description Create a subscription. 1.7.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the subscription to be created. Subscription 1.7.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.7.2.1.4. Consumes subscription/yaml 1.7.2.1.5. Tags subscriptions.apps.open-cluster-management.io 1.7.2.1.6. Example HTTP request 1.7.2.1.6.1. Request body { "apiVersion" : "apps.open-cluster-management.io/v1", "kind" : "Subscription", "metadata" : { "name" : "sample_subscription", "namespace" : "default", "labels" : { "app" : "sample_subscription-app" }, "annotations" : { "apps.open-cluster-management.io/git-path" : "apps/sample/", "apps.open-cluster-management.io/git-branch" : "sample_branch" } }, "spec" : { "channel" : "channel_namespace/sample_channel", "packageOverrides" : [ { "packageName" : "my-sample-application", "packageAlias" : "the-sample-app", "packageOverrides" : [ { "path" : "spec", "value" : { "persistence" : { "enabled" : false, "useDynamicProvisioning" : false }, "license" : "accept", "tls" : { "hostname" : "my-mcm-cluster.icp" }, "sso" : { "registrationImage" : { "pullSecret" : "hub-repo-docker-secret" } } } } ] } ], "placement" : { "placementRef" : { "kind" : "PlacementRule", "name" : "demo-clusters" } } } } 1.7.2.2. Query all subscriptions 1.7.2.2.1. Description Query your subscriptions for more details. 1.7.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.7.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.7.2.2.4. Consumes subscription/yaml 1.7.2.2.5. Tags subscriptions.apps.open-cluster-management.io 1.7.2.3. Query a single subscription 1.7.2.3.1. Description Query a single subscription for more details.
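For instance, an individual subscription can be read back with curl. This minimal sketch assumes the placeholder <cluster-host> for the hub cluster API endpoint and reuses the sample_subscription name and default namespace from the earlier request body; adjust both for your own resources.
curl -k \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Accept: application/json" \
  https://<cluster-host>/kubernetes/apis/apps.open-cluster-management.io/v1/namespaces/default/subscriptions/sample_subscription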
1.7.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Path subscription_name required Name of the subscription that you want to query. string 1.7.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.7.2.3.4. Tags subscriptions.apps.open-cluster-management.io 1.7.2.4. Delete a subscription 1.7.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Path subscription_name required Name of the subscription that you want to delete. string 1.7.2.4.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.7.2.4.3. Tags subscriptions.apps.open-cluster-management.io 1.7.3. Definitions 1.7.3.1. Subscription Name Schema apiVersion required string kind required string metadata required metadata spec required spec status optional status metadata Name Schema annotations optional object labels optional object name optional string namespace optional string spec Name Schema channel required string name optional string overrides optional overrides array packageFilter optional packageFilter packageOverrides optional packageOverrides array placement optional placement timewindow optional timewindow overrides Name Schema clusterName required string clusterOverrides required object array packageFilter Name Description Schema annotations optional string, string map filterRef optional filterRef labelSelector optional labelSelector version optional Pattern : "( )((\\.[0-9] )(\\. )|(\\.[0-9] )?(\\.[xX]))$" string filterRef Name Schema name optional string labelSelector Name Schema matchExpressions optional matchExpressions array matchLabels optional string, string map matchExpressions Name Schema key required string operator required string values optional string array packageOverrides Name Schema packageAlias optional string packageName required string packageOverrides optional object array placement Name Schema clusterSelector optional clusterSelector clusters optional clusters array local optional boolean placementRef optional placementRef clusterSelector Name Schema matchExpressions optional matchExpressions array matchLabels optional string, string map matchExpressions Name Schema key required string operator required string values optional string array clusters Name Schema name required string placementRef Name Schema apiVersion optional string fieldPath optional string kind optional string name optional string namespace optional string resourceVersion optional string uid optional string timewindow Name Schema daysofweek optional string array hours optional hours array location optional string windowtype optional enum (active, blocked, Active, Blocked) hours Name Schema end optional string start optional string status Name Schema lastUpdateTime optional string (date-time) message optional string phase optional string reason optional string statuses optional object 1.8. PlacementRules API (deprecated) 1.8.1.
Overview This documentation is for the PlacementRule resource for Red Hat Advanced Cluster Management for Kubernetes. The PlacementRule resource has four possible requests: create, query, delete and update. 1.8.1.1. Version information Version : 2.12.0 1.8.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.8.1.3. Tags placementrules.apps.open-cluster-management.io : Create and manage placement rules 1.8.2. Paths 1.8.2.1. Create a placement rule 1.8.2.1.1. Description Create a placement rule. 1.8.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the placement rule to be created. PlacementRule 1.8.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.8.2.1.4. Consumes application/yaml 1.8.2.1.5. Tags placementrules.apps.open-cluster-management.io 1.8.2.1.6. Example HTTP request 1.8.2.1.6.1. Request body { "apiVersion" : "apps.open-cluster-management.io/v1", "kind" : "PlacementRule", "metadata" : { "name" : "towhichcluster", "namespace" : "ns-sub-1" }, "spec" : { "clusterConditions" : [ { "type": "ManagedClusterConditionAvailable", "status": "True" } ], "clusterSelector" : { } } } 1.8.2.2. Query all placement rules 1.8.2.2.1. Description Query your placement rules for more details. 1.8.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.8.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.8.2.2.4. Consumes application/yaml 1.8.2.2.5. Tags placementrules.apps.open-cluster-management.io 1.8.2.3. Query a single placementrule 1.8.2.3.1. Description Query a single placement rule for more details. 1.8.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Path placementrule_name required Name of the placementrule that you want to query. string 1.8.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.8.2.3.4. Tags placementrules.apps.open-cluster-management.io 1.8.2.4. Delete a placementrule 1.8.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Path placementrule_name required Name of the placementrule that you want to delete. string 1.8.2.4.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.8.2.4.3. Tags placementrules.apps.open-cluster-management.io 1.8.3. Definitions 1.8.3.1. 
Placementrule Name Schema apiVersion required string kind required string metadata required object spec required spec spec Name Schema clusterConditions optional clusterConditions array clusterReplicas optional integer clusterSelector optional clusterSelector clusters optional clusters array policies optional policies array resourceHint optional resourceHint schedulerName optional string clusterConditions Name Schema status optional string type optional string clusterSelector Name Schema matchExpressions optional matchExpressions array matchLabels optional string, string map matchExpressions Name Schema key optional string operator optional string values optional string array clusters Name Schema name optional string policies Name Schema apiVersion optional string fieldPath optional string kind optional string name optional string namespace optional string resourceVersion optional string uid optional string resourceHint Name Schema order optional string type optional string 1.9. Applications API 1.9.1. Overview This documentation is for the Application resource for Red Hat Advanced Cluster Management for Kubernetes. The Application resource has four possible requests: create, query, delete and update. 1.9.1.1. Version information Version : 2.12.0 1.9.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.9.1.3. Tags applications.app.k8s.io : Create and manage applications 1.9.2. Paths 1.9.2.1. Create an application 1.9.2.1.1. Description Create an application. 1.9.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the application to be created. Application 1.9.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.9.2.1.4. Consumes application/yaml 1.9.2.1.5. Tags applications.app.k8s.io 1.9.2.1.6. Example HTTP request 1.9.2.1.6.1. Request body { "apiVersion" : "app.k8s.io/v1beta1", "kind" : "Application", "metadata" : { "labels" : { "app" : "nginx-app-details" }, "name" : "nginx-app-3", "namespace" : "ns-sub-1" }, "spec" : { "componentKinds" : [ { "group" : "apps.open-cluster-management.io", "kind" : "Subscription" } ] }, "selector" : { "matchLabels" : { "app" : "nginx-app-details" } }, "status" : { } } 1.9.2.2. Query all applications 1.9.2.2.1. Description Query your applications for more details. 1.9.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.9.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.9.2.2.4. Consumes application/yaml 1.9.2.2.5. Tags applications.app.k8s.io 1.9.2.3. Query a single application 1.9.2.3.1. Description Query a single application for more details.
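For example, the nginx-app-3 application from the earlier request body can be queried with curl. The sketch below assumes the placeholder <cluster-host> for the hub cluster API endpoint and the ns-sub-1 namespace used in that example.
curl -k \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Accept: application/json" \
  https://<cluster-host>/kubernetes/apis/app.k8s.io/v1beta1/namespaces/ns-sub-1/applications/nginx-app-3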
1.9.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path application_name required Name of the application that you want to query. string Path namespace required Namespace that you want to use, for example, default. string 1.9.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.9.2.3.4. Tags applications.app.k8s.io 1.9.2.4. Delete an application 1.9.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path application_name required Name of the application that you want to delete. string Path namespace required Namespace that you want to use, for example, default. string 1.9.2.4.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.9.2.4.3. Tags applications.app.k8s.io 1.9.3. Definitions 1.9.3.1. Application Name Schema apiVersion required string kind required string metadata required object spec required spec spec Name Schema assemblyPhase optional string componentKinds optional object array descriptor optional descriptor info optional info array selector optional object descriptor Name Schema description optional string icons optional icons array keywords optional string array links optional links array maintainers optional maintainers array notes optional string owners optional owners array type optional string version optional string icons Name Schema size optional string src required string type optional string links Name Schema description optional string url optional string maintainers Name Schema email optional string name optional string url optional string owners Name Schema email optional string name optional string url optional string info Name Schema name optional string type optional string value optional string valueFrom optional valueFrom valueFrom Name Schema configMapKeyRef optional configMapKeyRef ingressRef optional ingressRef secretKeyRef optional secretKeyRef serviceRef optional serviceRef type optional string configMapKeyRef Name Schema apiVersion optional string fieldPath optional string key optional string kind optional string name optional string namespace optional string resourceVersion optional string uid optional string ingressRef Name Schema apiVersion optional string fieldPath optional string host optional string kind optional string name optional string namespace optional string path optional string resourceVersion optional string uid optional string secretKeyRef Name Schema apiVersion optional string fieldPath optional string key optional string kind optional string name optional string namespace optional string resourceVersion optional string uid optional string serviceRef Name Schema apiVersion optional string fieldPath optional string kind optional string name optional string namespace optional string path optional string port optional integer (int32) resourceVersion optional string uid optional string 1.10. Helm API 1.10.1. Overview This documentation is for the HelmRelease resource for Red Hat Advanced Cluster Management for Kubernetes. The HelmRelease resource has four possible requests: create, query, delete and update. 1.10.1.1. Version information Version : 2.12.0 1.10.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.10.1.3. Tags helmreleases.apps.open-cluster-management.io : Create and manage helmreleases 1.10.2. Paths 1.10.2.1. Create a helmrelease 1.10.2.1.1.
Description Create a helmrelease. 1.10.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the helmrelease to be created. HelmRelease 1.10.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.2.1.4. Consumes application/yaml 1.10.2.1.5. Tags helmreleases.apps.open-cluster-management.io 1.10.2.1.6. Example HTTP request 1.10.2.1.6.1. Request body { "apiVersion" : "apps.open-cluster-management.io/v1", "kind" : "HelmRelease", "metadata" : { "name" : "nginx-ingress", "namespace" : "default" }, "repo" : { "chartName" : "nginx-ingress", "source" : { "helmRepo" : { "urls" : [ "https://kubernetes-charts.storage.googleapis.com/nginx-ingress-1.26.0.tgz" ] }, "type" : "helmrepo" }, "version" : "1.26.0" }, "spec" : { "defaultBackend" : { "replicaCount" : 3 } } } 1.10.2.2. Query all helmreleases 1.10.2.2.1. Description Query your helmreleases for more details. 1.10.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.10.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.2.2.4. Consumes application/yaml 1.10.2.2.5. Tags helmreleases.apps.open-cluster-management.io 1.10.2.3. Query a single helmrelease 1.10.2.3.1. Description Query a single helmrelease for more details. 1.10.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path helmrelease_name required Name of the helmrelease that you want to query. string Path namespace required Namespace that you want to use, for example, default. string 1.10.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.2.3.4. Tags helmreleases.apps.open-cluster-management.io 1.10.2.4. Delete a helmrelease 1.10.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path helmrelease_name required Name of the helmrelease that you want to delete. string Path namespace required Namespace that you want to use, for example, default. string 1.10.2.4.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.10.2.4.3. Tags helmreleases.apps.open-cluster-management.io
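Tying the preceding paths together, the following curl sketch first creates the nginx-ingress HelmRelease from the sample request body and then deletes it. The placeholder <cluster-host> and the file name helmrelease.json are assumptions for illustration only.
# Create the HelmRelease from a file that contains the sample request body shown earlier.
curl -k -X POST \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  --data @helmrelease.json \
  https://<cluster-host>/kubernetes/apis/apps.open-cluster-management.io/v1/namespaces/default/helmreleases
# Delete the HelmRelease when it is no longer needed.
curl -k -X DELETE \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  https://<cluster-host>/kubernetes/apis/apps.open-cluster-management.io/v1/namespaces/default/helmreleases/nginx-ingress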
1.10.3. Definitions 1.10.3.1. HelmRelease Name Schema apiVersion required string kind required string metadata required object repo required repo spec required object status required status repo Name Schema chartName optional string configMapRef optional configMapRef secretRef optional secretRef source optional source version optional string configMapRef Name Schema apiVersion optional string fieldPath optional string kind optional string name optional string namespace optional string resourceVersion optional string uid optional string secretRef Name Schema apiVersion optional string fieldPath optional string kind optional string name optional string namespace optional string resourceVersion optional string uid optional string source Name Schema github optional github helmRepo optional helmRepo type optional string github Name Schema branch optional string chartPath optional string urls optional string array helmRepo Name Schema urls optional string array status Name Schema conditions required conditions array deployedRelease optional deployedRelease conditions Name Schema lastTransitionTime optional string (date-time) message optional string reason optional string status required string type required string deployedRelease Name Schema manifest optional string name optional string 1.11. Policy API 1.11.1. Overview This documentation is for the Policy resource for Red Hat Advanced Cluster Management for Kubernetes. The Policy resource has four possible requests: create, query, delete and update. 1.11.1.1. Version information Version : 2.12.0 1.11.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.11.1.3. Tags policy.open-cluster-management.io/v1 : Create and manage policies 1.11.2. Paths 1.11.2.1. Create a policy 1.11.2.1.1. Description Create a policy. 1.11.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the policy to be created. Policy 1.11.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.11.2.1.4. Consumes application/json 1.11.2.1.5. Tags policy.open-cluster-management.io 1.11.2.1.6. Example HTTP request 1.11.2.1.6.1. Request body { "apiVersion": "policy.open-cluster-management.io/v1", "kind": "Policy", "metadata": { "name": "test-policy-swagger" }, "spec": { "remediationAction": "enforce", "namespaces": { "include": [ "default" ], "exclude": [ "kube*" ] }, "policy-templates": { "kind": "ConfigurationPolicy", "apiVersion": "policy.open-cluster-management.io/v1", "metadata": { "name": "test-role" }, "spec" : { "object-templates": { "complianceType": "musthave", "metadataComplianceType": "musthave", "objectDefinition": { "apiVersion": "rbac.authorization.k8s.io/v1", "kind": "Role", "metadata": { "name": "role-policy" }, "rules": [ { "apiGroups": [ "extensions", "apps" ], "resources": [ "deployments" ], "verbs": [ "get", "list", "watch", "delete" ] }, { "apiGroups": [ "core" ], "resources": [ "pods" ], "verbs": [ "create", "update", "patch" ] }, { "apiGroups": [ "core" ], "resources": [ "secrets" ], "verbs": [ "get", "watch", "list", "create", "delete", "update", "patch" ] } ] } } } } } } 1.11.2.2. Query all policies 1.11.2.2.1. Description Query your policies for more details.
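For example, all policies in a namespace can be listed with curl. The following sketch assumes the placeholder <cluster-host> for the hub cluster API endpoint and the default namespace.
curl -k \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Accept: application/json" \
  https://<cluster-host>/kubernetes/apis/policy.open-cluster-management.io/v1/namespaces/default/policies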
1.11.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to apply the policy to, for example, default. string 1.11.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.11.2.2.4. Consumes application/json 1.11.2.2.5. Tags policy.open-cluster-management.io 1.11.2.3. Query a single policy 1.11.2.3.1. Description Query a single policy for more details. 1.11.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path policy_name required Name of the policy that you want to query. string Path namespace required Namespace that you want to use, for example, default. string 1.11.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.11.2.3.4. Tags policy.open-cluster-management.io 1.11.2.4. Delete a policy 1.11.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path policy_name required Name of the policy that you want to delete. string Path namespace required Namespace that you want to use, for example, default. string 1.11.2.4.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.11.2.4.3. Tags policy.open-cluster-management.io 1.11.3. Definitions 1.11.3.1. Policy Name Description Schema apiVersion required The versioned schema of Policy. string kind required String value that represents the REST resource. string metadata required Metadata of the Policy resource. object spec Name Description Schema remediationAction optional Value that represents how violations are handled as defined in the resource. string policy-templates Name Description Schema apiVersion required The versioned schema of Policy. string kind optional String value that represents the REST resource. string metadata required Metadata of the policy template. object rules optional string rules Name Description Schema apiGroups required List of APIs that the rule applies to. string resources required A list of resource types. object verbs required A list of verbs. string 1.12. Observability API 1.12.1. Overview This documentation is for the MultiClusterObservability resource for Red Hat Advanced Cluster Management for Kubernetes. The MultiClusterObservability resource has four possible requests: create, query, delete and update. 1.12.1.1. Version information Version : 2.12.0 1.12.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.12.1.3. Tags observability.open-cluster-management.io : Create and manage multiclusterobservabilities 1.12.2. Paths 1.12.2.1. Create a multiclusterobservability resource 1.12.2.1.1. Description Create a MultiClusterObservability resource.
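For example, the resource can be created by posting a MultiClusterObservability manifest. The following curl sketch assumes the placeholder <cluster-host> for the hub cluster API endpoint and a local file mco.json that holds a body like the Example HTTP request later in this section; consistent with the parameter table for this path, which has no namespace parameter, the request is sent to a cluster-scoped URL.
# mco.json is assumed to contain a MultiClusterObservability manifest like the example request body in this section.
curl -k -X POST \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  --data @mco.json \
  https://<cluster-host>/kubernetes/apis/observability.open-cluster-management.io/v1beta2/multiclusterobservabilities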
1.12.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the MultiClusterObservability resource to be created. MultiClusterObservability 1.12.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.12.2.1.4. Consumes application/yaml 1.12.2.1.5. Tags observability.apps.open-cluster-management.io 1.12.2.1.6. Example HTTP request 1.12.2.1.6.1. Request body { "apiVersion": "observability.open-cluster-management.io/v1beta2", "kind": "MultiClusterObservability", "metadata": { "name": "example" }, "spec": { "observabilityAddonSpec": {}, "storageConfig": { "metricObjectStorage": { "name": "thanos-object-storage", "key": "thanos.yaml" }, "writeStorage": [ { "key": " ", "name" : " " }, { "key": " ", "name" : " " } ] } } } 1.12.2.2. Query all multiclusterobservabilities 1.12.2.2.1. Description Query your MultiClusterObservability resources for more details. 1.12.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.12.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.12.2.2.4. Consumes application/yaml 1.12.2.2.5. Tags observability.apps.open-cluster-management.io 1.12.2.3. Query a single multiclusterobservability 1.12.2.3.1. Description Query a single MultiClusterObservability resource for more details. 1.12.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path multiclusterobservability_name required Name of the multiclusterobservability that you want to query. string 1.12.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.12.2.3.4. Tags observability.apps.open-cluster-management.io 1.12.2.4. Delete a multiclusterobservability resource 1.12.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path multiclusterobservability_name required Name of the multiclusterobservability that you want to delete. string 1.12.2.4.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.12.2.4.3. Tags observability.apps.open-cluster-management.io 1.12.3. Definitions 1.12.3.1. MultiClusterObservability Name Description Schema apiVersion required The versioned schema of the MultiClusterObservability. string kind required String value that represents the REST resource, MultiClusterObservability. string metadata required Metadata of the MultiClusterObservability resource. object spec Name Description Schema enableDownsampling optional Enable or disable the downsample. Default value is true . If there is no downsample data, the query is unavailable. boolean imagePullPolicy optional Pull policy for the MultiClusterObservability images. The default value is Always . corev1.PullPolicy imagePullSecret optional Pull secret for the MultiClusterObservability images. The default value is multiclusterhub-operator-pull-secret string nodeSelector optional Specification of the node selector.
map[string]string observabilityAddonSpec required The global settings for all managed clusters, which have the observability add-on installed. observabilityAddonSpec storageConfig required Specifies the storage configuration to be used by observability. StorageConfig tolerations optional Provided the ability for all components to tolerate any taints. []corev1.Toleration advanced optional The advanced configuration settings for observability. advanced resources optional Compute resources required by MultiClusterObservability. corev1.ResourceRequirements replicas optional Replicas for MultiClusterObservability. integer storageConfig Name Description Schema alertmanagerStorageSize optional The amount of storage applied to the alertmanager stateful sets. Default value is 1Gi . string compactStorageSize optional The amount of storage applied to the thanos compact stateful sets. Default value is 100Gi . string metricObjectStorage required Object store to configure secrets for metrics. metricObjectStorage receiveStorageSize optional The amount of storage applied to thanos receive stateful sets. Default value is 100Gi . string ruleStorageSize optional The amount of storage applied to thanos rule stateful sets. Default value is 1Gi . string storageClass optional Specify the storageClass stateful sets. This storage is used for the object storage if metricObjectStorage is configured for your operating system to create storage. Default value is gp2 . string storeStorageSize optional The amount of storage applied to thanos store stateful sets. Default value is 10Gi . string writeStorage optional A list of endpoint access information. [ ] WriteStorage writeStorage Name Description Schema name required The name of the secret with endpoint access information. string key required The key of the secret to select from. string metricObjectStorage Name Description Schema key required The key of the secret to select from. Must be a valid secret key. See Thanos documentation . string name required Name of the metricObjectStorage . See Kubernetes Names for more information. string observabilityAddonSpec Name Description Schema enableMetrics optional Indicates if the observability add-on sends metrics to the hub cluster. Default value is true . boolean interval optional Interval for when the observability add-on sends metrics to the hub cluster. Default value is 300 seconds ( 300s ). integer resources optional Resource for the metrics collector resource requirement. The default CPU request is 100m , memory request is 100Mi . corev1.ResourceRequirements advanced Name Description Schema retentionConfig optional Specifies the data retention configuration to be used by observability. RetentionConfig rbacQueryProxy optional Specifies the replicas and resources for the rbac-query-proxy deployment. CommonSpec grafana optional Specifies the replicas and resources for the grafana deployment CommonSpec alertmanager optional Specifies the replicas and resources for alertmanager statefulset. CommonSpec observatoriumAPI optional Specifies the replicas and resources for the observatorium-api deployment. CommonSpec queryFrontend optional Specifies the replicas and resources for the query-frontend deployment. CommonSpec query optional Specifies the replicas and resources for the query deployment. CommonSpec receive optional Specifies the replicas and resources for the receive statefulset. CommonSpec rule optional Specifies the replicas and resources for rule statefulset. 
CommonSpec store optional Specifies the replicas and resources for the store statefulset. CommonSpec CompactSpec optional Specifies the resources for compact statefulset. compact storeMemcached optional Specifies the replicas, resources, etc. for store-memcached. storeMemcached queryFrontendMemcached optional Specifies the replicas, resources, etc. for query-frontend-memcached. CacheConfig retentionConfig Name Description Schema blockDuration optional The amount of time to block the duration for Time Series Database (TSDB) block. Default value is 2h . string deleteDelay optional The amount of time until a block marked for deletion is deleted from a bucket. Default value is 48h . string retentionInLocal optional The amount of time to retain raw samples from the local storage. Default value is 24h . string retentionResolutionRaw optional The amount of time to retain raw samples of resolution in a bucket. Default value is 365 days ( 365d ). string retentionResolution5m optional The amount of time to retain samples of resolution 1 (5 minutes) in a bucket. Default value is 365 days ( 365d ). string retentionResolution1h optional The amount of time to retain samples of resolution 2 (1 hour) in a bucket. Default value is 365 days ( 365d ). string CompactSpec Name Description Schema resources optional Compute resources required by thanos compact. corev1.ResourceRequirements serviceAccountAnnotations optional Annotations is an unstructured key value map stored with the compact service account. map[string]string storeMemcached Name Description Schema resources optional Compute resources required by MultiClusterObservability. corev1.ResourceRequirements replicas optional Replicas for MultiClusterObservability. integer memoryLimitMb optional Memory limit of Memcached in megabytes. integer maxItemSize optional Max item size of Memcached. The default value is 1m, min:1k, max:1024m . string connectionLimit optional Max simultaneous connections of Memcached. The default value is integer status Name Description Schema status optional Status contains the different condition statuses for MultiClusterObservability. metav1.Condition CommonSpec Name Description Schema resources optional Compute resources required by the component. corev1.ResourceRequirements replicas optional Replicas for the component. integer QuerySpec Name Description Schema CommonSpec optional Specifies the replicas and resources for the query deployment. CommonSpec serviceAccountAnnotations optional Annotations is an unstructured key value map stored with the query service account. map[string]string ReceiveSpec Name Description Schema CommonSpec optional Specifies the replicas and resources for the receive statefulset. CommonSpec serviceAccountAnnotations optional Annotations is an unstructured key value map stored with the receive service account. map[string]string StoreSpec Name Description Schema CommonSpec optional Specifies the replicas and resources for the store statefulset. CommonSpec serviceAccountAnnotations optional Annotations is an unstructured key value map stored with the store service account. map[string]string RuleSpec Name Description Schema CommonSpec optional Specifies the replicas and resources for the rule statefulset. CommonSpec evalInterval optional Specifies the evaluation interval for the rules. string serviceAccountAnnotations optional Annotations is an unstructured key value map stored with the rule service account. map[string]string
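As a closing illustration of these definitions, an existing MultiClusterObservability resource can be tuned after creation, for example to change the raw retention that is described in retentionConfig. The following curl sketch applies a JSON merge patch and assumes the placeholder <cluster-host> for the hub cluster API endpoint, the resource name example from the earlier request body, and 30d as an arbitrary sample value.
curl -k -X PATCH \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  --data '{"spec":{"advanced":{"retentionConfig":{"retentionResolutionRaw":"30d"}}}}' \
  https://<cluster-host>/kubernetes/apis/observability.open-cluster-management.io/v1beta2/multiclusterobservabilities/example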
1.13. Search query API The search query API is not a Kubernetes API, and therefore it is not displayed through the Red Hat OpenShift Container Platform API Explorer. Continue reading to understand the search query API capabilities. 1.13.1. Overview You can expose the search query API with a route and use the API to resolve search queries. The API is a GraphQL endpoint. You can use any client such as curl or Postman. 1.13.1.1. Version information Version : 2.10.0 1.13.1.2. URI scheme BasePath : /searchapi/graphql Schemes : HTTPS 1.13.1.3. Configure API access Create a route to access the Search API externally from your cluster with the following command: oc create route passthrough search-api --service=search-search-api -n open-cluster-management Important: You must configure your route to secure your environment. See Route configuration in the OpenShift Container Platform documentation for more details. 1.13.2. Schema design input SearchFilter { property: String! values: [String]! } input SearchInput { keywords: [String] filters: [SearchFilter] limit: Int relatedKinds: [String] } type SearchResult { count: Int items: [Map] related: [SearchRelatedResult] } type SearchRelatedResult { kind: String! count: Int items: [Map] } Parameters with ! indicate that the field is required. 1.13.2.1. Description table of query inputs Type Description Property SearchFilter Defines a key and value to filter results. When you provide many values for a property, the API interprets the values as an "OR" operation. When you provide many filters, results must match all filters, which the API interprets as an "AND" operation. string SearchInput Enter keywords to receive a list of resources. When you provide many keywords, the API interprets them as an "AND" operation. String limit Determine the maximum number of results returned after you enter the query. The default value is 10,000 . A value of -1 means that the limit is removed. Integer 1.13.2.2. Schema example { "query": "type SearchResult { count: Int items: [Map] related: [SearchRelatedResult] } type SearchRelatedResult { kind: String! count: Int items: [Map] }", "variables": { "input": [ { "keywords": [], "filters": [ { "property": "kind", "values": [ "Deployment" ] } ], "limit": 10 } ] } } 1.13.3. Generic schema type Query { search(input: [SearchInput]): [SearchResult] searchComplete(property: String!, query: SearchInput, limit: Int): [String] searchSchema: Map messages: [Message] } 1.13.4. Supported queries Continue reading to see the query types that are supported in JSON format. 1.13.4.1. Search for deployments Query: query mySearch($input: [SearchInput]) { search(input: $input) { items } } Variables: {"input":[ { "keywords":[], "filters":[ {"property":"kind","values":["Deployment"]}], "limit":10 } ]} 1.13.4.2. Search for pods Query: query mySearch($input: [SearchInput]) { search(input: $input) { items } } Variables: {"input":[ { "keywords":[], "filters":[ {"property":"kind","values":["Pod"]}], "limit":10 } ]} 1.14. MultiClusterHub API 1.14.1. Overview This documentation is for the MultiClusterHub resource for Red Hat Advanced Cluster Management for Kubernetes. The MultiClusterHub resource has four possible requests: create, query, delete and update. 1.14.1.1. Version information Version : 2.12.0 1.14.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.14.1.3. Tags multiclusterhubs.operator.open-cluster-management.io : Create and manage multicluster hub operators 1.14.2. Paths 1.14.2.1. Create a MultiClusterHub resource 1.14.2.1.1.
Description Create a MultiClusterHub resource to define the configuration for an instance of the multicluster hub. 1.14.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the multicluster hub to be created. Definitions 1.14.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.14.2.1.4. Consumes multiclusterhubs/yaml 1.14.2.1.5. Tags multiclusterhubs.operator.open-cluster-management.io 1.14.2.1.6. Example HTTP request 1.14.2.1.6.1. Request body { "apiVersion": "apiextensions.k8s.io/v1", "kind": "CustomResourceDefinition", "metadata": { "name": "multiclusterhubs.operator.open-cluster-management.io" }, "spec": { "group": "operator.open-cluster-management.io", "names": { "kind": "MultiClusterHub", "listKind": "MultiClusterHubList", "plural": "multiclusterhubs", "shortNames": [ "mch" ], "singular": "multiclusterhub" }, "scope": "Namespaced", "versions": [ { "additionalPrinterColumns": [ { "description": "The overall status of the multicluster hub.", "jsonPath": ".status.phase", "name": "Status", "type": "string" }, { "jsonPath": ".metadata.creationTimestamp", "name": "Age", "type": "date" } ], "name": "v1", "schema": { "openAPIV3Schema": { "description": "MultiClusterHub defines the configuration for an instance of the multiCluster hub, a central point for managing multiple Kubernetes-based clusters. The deployment of multicluster hub components is determined based on the configuration that is defined in this resource.", "properties": { "apiVersion": { "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", "type": "string" }, "kind": { "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. The value is in CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", "type": "string" }, "metadata": { "type": "object" }, "spec": { "description": "MultiClusterHubSpec defines the desired state of MultiClusterHub.", "properties": { "availabilityConfig": { "description": "Specifies deployment replication for improved availability. 
Options are: Basic and High (default).", "type": "string" }, "customCAConfigmap": { "description": "Provide the customized OpenShift default ingress CA certificate to Red Hat Advanced Cluster Management.", } "type": "string" }, "disableHubSelfManagement": { "description": "Disable automatic import of the hub cluster as a managed cluster.", "type": "boolean" }, "disableUpdateClusterImageSets": { "description": "Disable automatic update of ClusterImageSets.", "type": "boolean" }, "hive": { "description": "(Deprecated) Overrides for the default HiveConfig specification.", "properties": { "additionalCertificateAuthorities": { "description": "(Deprecated) AdditionalCertificateAuthorities is a list of references to secrets in the 'hive' namespace that contain an additional Certificate Authority to use when communicating with target clusters. These certificate authorities are used in addition to any self-signed CA generated by each cluster on installation.", "items": { "description": "LocalObjectReference contains the information to let you locate the referenced object inside the same namespace.", "properties": { "name": { "description": "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names", "type": "string" } }, "type": "object" }, "type": "array" }, "backup": { "description": "(Deprecated) Backup specifies configuration for backup integration. If absent, backup integration is disabled.", "properties": { "minBackupPeriodSeconds": { "description": "(Deprecated) MinBackupPeriodSeconds specifies that a minimum of MinBackupPeriodSeconds occurs in between each backup. This is used to rate limit backups. This potentially batches together multiple changes into one backup. No backups are lost for changes that happen during the interval that is queued up, and results in a backup once the interval has been completed.", "type": "integer" }, "velero": { "description": "(Deprecated) Velero specifies configuration for the Velero backup integration.", "properties": { "enabled": { "description": "(Deprecated) Enabled dictates if the Velero backup integration is enabled. If not specified, the default is disabled.", "type": "boolean" } }, "type": "object" } }, "type": "object" }, "externalDNS": { "description": "(Deprecated) ExternalDNS specifies configuration for external-dns if it is to be deployed by Hive. If absent, external-dns is not deployed.", "properties": { "aws": { "description": "(Deprecated) AWS contains AWS-specific settings for external DNS.", "properties": { "credentials": { "description": "(Deprecated) Credentials reference a secret that is used to authenticate with AWS Route53. It needs permission to manage entries in each of the managed domains for this cluster. Secret should have AWS keys named 'aws_access_key_id' and 'aws_secret_access_key'.", "properties": { "name": { "description": "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names", "type": "string" } }, "type": "object" } }, "type": "object" }, "gcp": { "description": "(Deprecated) GCP contains Google Cloud Platform specific settings for external DNS.", "properties": { "credentials": { "description": "(Deprecated) Credentials reference a secret that is used to authenticate with GCP DNS. It needs permission to manage entries in each of the managed domains for this cluster. Secret should have a key names 'osServiceAccount.json'. 
The credentials must specify the project to use.", "properties": { "name": { "description": "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names", "type": "string" } }, "type": "object" } }, "type": "object" } }, "type": "object" }, "failedProvisionConfig": { "description": "(Deprecated) FailedProvisionConfig is used to configure settings related to handling provision failures.", "properties": { "skipGatherLogs": { "description": "(Deprecated) SkipGatherLogs disables functionality that attempts to gather full logs from the cluster if an installation fails for any reason. The logs are stored in a persistent volume for up to seven days.", "type": "boolean" } }, "type": "object" }, "globalPullSecret": { "description": "(Deprecated) GlobalPullSecret is used to specify a pull secret that is used globally by all of the cluster deployments. For each cluster deployment, the contents of GlobalPullSecret are merged with the specific pull secret for a cluster deployment(if specified), with precedence given to the contents of the pull secret for the cluster deployment.", "properties": { "name": { "description": "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names", "type": "string" } }, "type": "object" }, "maintenanceMode": { "description": "(Deprecated) MaintenanceMode can be set to true to disable the Hive controllers in situations where you need to ensure nothing is running that adds or act upon finalizers on Hive types. This should rarely be needed. Sets replicas to zero for the 'hive-controllers' deployment to accomplish this.", "type": "boolean" } }, "required": [ "failedProvisionConfig" ], "type": "object" }, "imagePullSecret": { "description": "Override pull secret for accessing MultiClusterHub operand and endpoint images.", "type": "string" }, "ingress": { "description": "Configuration options for ingress management.", "properties": { "sslCiphers": { "description": "List of SSL ciphers enabled for management ingress. Defaults to full list of supported ciphers.", "items": { "type": "string" }, "type": "array" } }, "type": "object" }, "nodeSelector": { "additionalProperties": { "type": "string" }, "description": "Set the node selectors..", "type": "object" }, "overrides": { "description": "Developer overrides.", "properties": { "imagePullPolicy": { "description": "Pull policy of the multicluster hub images.", "type": "string" } }, "type": "object" }, "separateCertificateManagement": { "description": "(Deprecated) Install cert-manager into its own namespace.", "type": "boolean" } }, "type": "object" }, "status": { "description": "MulticlusterHubStatus defines the observed state of MultiClusterHub.", "properties": { "components": { "additionalProperties": { "description": "StatusCondition contains condition information.", "properties": { "lastTransitionTime": { "description": "LastTransitionTime is the last time the condition changed from one status to another.", "format": "date-time", "type": "string" }, "message": { "description": "Message is a human-readable message indicating\ndetails about the last status change.", "type": "string" }, "reason": { "description": "Reason is a (brief) reason for the last status change of the condition.", "type": "string" }, "status": { "description": "Status is the status of the condition. 
One of True, False, Unknown.", "type": "string" }, "type": { "description": "Type is the type of the cluster condition.", "type": "string" } }, "type": "object" }, "description": "Components []ComponentCondition `json:\"manifests,omitempty\"`", "type": "object" }, "conditions": { "description": "Conditions contain the different condition statuses for the MultiClusterHub.", "items": { "description": "StatusCondition contains condition information.", "properties": { "lastTransitionTime": { "description": "LastTransitionTime is the last time the condition changed from one status to another.", "format": "date-time", "type": "string" }, "lastUpdateTime": { "description": "The last time this condition was updated.", "format": "date-time", "type": "string" }, "message": { "description": "Message is a human-readable message indicating details about the last status change.", "type": "string" }, "reason": { "description": "Reason is a (brief) reason for the last status change of the condition.", "type": "string" }, "status": { "description": "Status is the status of the condition. One of True, False, Unknown.", "type": "string" }, "type": { "description": "Type is the type of the cluster condition.", "type": "string" } }, "type": "object" }, "type": "array" }, "currentVersion": { "description": "CurrentVersion indicates the current version..", "type": "string" }, "desiredVersion": { "description": "DesiredVersion indicates the desired version.", "type": "string" }, "phase": { "description": "Represents the running phase of the MultiClusterHub", "type": "string" } }, "type": "object" } }, "type": "object" } }, "served": true, "storage": true, "subresources": { "status": {} } } ] } } 1.14.2.2. Query all MultiClusterHubs 1.14.2.2.1. Description Query your multicluster hub operator for more details. 1.14.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.14.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.14.2.2.4. Consumes operator/yaml 1.14.2.2.5. Tags multiclusterhubs.operator.open-cluster-management.io 1.14.2.3. Query a MultiClusterHub operator 1.14.2.3.1. Description Query a single multicluster hub operator for more details. 1.14.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path application_name required Name of the application that you want to query. string Path namespace required Namespace that you want to use, for example, default. string 1.14.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.14.2.3.4. Tags multiclusterhubs.operator.open-cluster-management.io 1.14.2.4. Delete a MultiClusterHub operator 1.14.2.4.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path application_name required Name of the multicluster hub operator that you want to delete. string Path namespace required Namespace that you want to use, for example, default. string 1.14.2.4.2. 
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.14.2.4.3. Tags multiclusterhubs.operator.open-cluster-management.io 1.14.3. Definitions 1.14.3.1. Multicluster hub operator Name Description Schema apiVersion required The versioned schema of the MultiClusterHub. string kind required String value that represents the REST resource. string metadata required Describes rules that define the resource. object spec required The resource specification. spec spec availabilityConfig optional Specifies deployment replication for improved availability. The default value is High . string customCAConfigmap optional Provide the customized OpenShift default ingress CA certificate to Red Hat Advanced Cluster Management. string disableHubSelfManagement optional Disable automatic import of the hub cluster as a managed cluster. boolean disableUpdateClusterImageSets optional Disable automatic update of ClusterImageSets. boolean hive optional (Deprecated) An object that overrides the default HiveConfig specification. hive imagePullSecret optional Overrides the pull secret for accessing MultiClusterHub operand and endpoint images. string ingress optional Configuration options for ingress management. ingress nodeSelector optional Set the node selectors. string separateCertificateManagement optional (Deprecated) Install cert-manager into its own namespace. boolean hive additionalCertificateAuthorities optional (Deprecated) A list of references to secrets in the hive namespace that contain an additional Certificate Authority to use when communicating with target clusters. These certificate authorities are used in addition to any self-signed CA generated by each cluster on installation. object backup optional (Deprecated) Specifies the configuration for backup integration. If absent, backup integration is disabled. backup externalDNS optional (Deprecated) Specifies configuration for external-dns if it is to be deployed by Hive. If absent, external-dns is not deployed. object failedProvisionConfig required (Deprecated) Used to configure settings related to handling provision failures. failedProvisionConfig globalPullSecret optional (Deprecated) Used to specify a pull secret that is used globally by all of the cluster deployments. For each cluster deployment, the contents of globalPullSecret are merged with the specific pull secret for a cluster deployment (if specified), with precedence given to the contents of the pull secret for the cluster deployment. object maintenanceMode optional (Deprecated) Can be set to true to disable the hive controllers in situations where you need to ensure nothing is running that adds or acts upon finalizers on Hive types. This should rarely be needed. Sets replicas to 0 for the hive-controllers deployment to accomplish this. boolean ingress sslCiphers optional List of SSL ciphers enabled for management ingress. Defaults to the full list of supported ciphers. string backup minBackupPeriodSeconds optional (Deprecated) Specifies that a minimum of MinBackupPeriodSeconds occurs in between each backup. This is used to rate limit backups. This potentially batches together multiple changes into one backup. No backups are lost because changes that happen during this interval are queued up and result in a backup once the interval has been completed.
integer velero optional (Deprecated) Velero specifies configuration for the Velero backup integration. object failedProvisionConfig skipGatherLogs optional (Deprecated) Disables functionality that attempts to gather full logs from the cluster if an installation fails for any reason. The logs are stored in a persistent volume for up to seven days. boolean status components optional The components of the status configuration. object conditions optional Contains the different conditions for the multicluster hub. conditions desiredVersion optional Indicates the desired version. string phase optional Represents the active phase of the MultiClusterHub resource. The values that are used for this parameter are: Pending , Running , Installing , Updating , Uninstalling string conditions lastTransitionTime optional The last time the condition changed from one status to another. string lastUpdateTime optional The last time this condition was updated. string message required Message is a human-readable message indicating details about the last status change. string reason required A brief reason for why the condition status changed. string status required The status of the condition. string type required The type of the cluster condition. string StatusConditions kind required The resource kind that represents this status. string available required Indicates whether this component is properly running. boolean lastTransitionTime optional The last time the condition changed from one status to another. metav1.time lastUpdateTime optional The last time this condition was updated. metav1.time message required Message is a human-readable message indicating details about the last status change. string reason optional A brief reason for why the condition status changed. string status optional The status of the condition. string type optional The type of the cluster condition. string 1.15. Placement API (v1beta1) 1.15.1. Overview This documentation is for the Placement resource for Red Hat Advanced Cluster Management for Kubernetes. The Placement resource has four possible requests: create, query, delete, and update. Placement defines a rule to select a set of ManagedClusters from the ManagedClusterSets that are bound to the placement namespace. A slice of PlacementDecisions with the label cluster.open-cluster-management.io/placement={placement name} is created to represent the ManagedClusters that are selected by this placement. 1.15.1.1. Version information Version : 2.12.0 1.15.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.15.1.3. Tags cluster.open-cluster-management.io : Create and manage Placements 1.15.2. Paths 1.15.2.1. Query all Placements 1.15.2.1.1. Description Query your Placements for more details. 1.15.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.15.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.15.2.1.4. Consumes placement/yaml 1.15.2.1.5. Tags cluster.open-cluster-management.io 1.15.2.2. Create a Placement 1.15.2.2.1. Description Create a Placement. 1.15.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the placement binding to be created. Placement 1.15.2.2.3. 
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.15.2.2.4. Consumes placement/yaml 1.15.2.2.5. Tags cluster.open-cluster-management.io 1.15.2.2.6. Example HTTP request 1.15.2.2.6.1. Request body { "apiVersion" : "cluster.open-cluster-management.io/v1beta1", "kind" : "Placement", "metadata" : { "name" : "placement1", "namespace": "ns1" }, "spec": { "predicates": [ { "requiredClusterSelector": { "labelSelector": { "matchLabels": { "vendor": "OpenShift" } } } } ] }, "status" : { } } 1.15.2.3. Query a single Placement 1.15.2.3.1. Description Query a single Placement for more details. 1.15.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path placement_name required Name of the Placement that you want to query. string 1.15.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.15.2.3.4. Tags cluster.open-cluster-management.io 1.15.2.4. Delete a Placement 1.15.2.4.1. Description Delete a single Placement. 1.15.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path placement_name required Name of the Placement that you want to delete. string 1.15.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.15.2.4.4. Tags cluster.open-cluster-management.io 1.15.3. Definitions 1.15.3.1. Placement Name Description Schema apiVersion required Versioned schema of the Placement. string kind required String value that represents the REST resource. string metadata required Metadata of the Placement. object spec required Specification of the Placement. spec spec Name Description Schema clusterSets optional A subset of ManagedClusterSets from which the ManagedClusters are selected. If the ManagedClusterSet is empty, ManagedClusters are selected from the ManagedClusterSets that are bound to the Placement namespace. If the ManagedClusterSet contains ManagedClusters , ManagedClusters are selected from the intersection of this subset. The selected ManagedClusterSets are bound to the placement namespace. string array numberOfClusters optional Number of ManagedClusters that you want to be selected. integer (int32) predicates optional Subset of cluster predicates that select ManagedClusters . The conditional logic is OR . clusterPredicate array prioritizerPolicy optional Policy of the prioritizers. prioritizerPolicy tolerations optional Value that allows, but does not require, the managed clusters with certain taints to be selected by placements with matching tolerations. toleration array clusterPredicate Name Description Schema requiredClusterSelector optional A cluster selector to select ManagedClusters with a label and cluster claim. clusterSelector clusterSelector Name Description Schema labelSelector optional Selector of ManagedClusters by label. object claimSelector optional Selector of ManagedClusters by claim. clusterClaimSelector clusterClaimSelector Name Description Schema matchExpressions optional Subset of the cluster claim selector requirements. 
The conditional logic is AND . < object > array prioritizerPolicy Name Description Schema mode optional Either Exact , Additive , or "". The default value of "" is Additive . string configurations optional Configuration of the prioritizer. prioritizerConfig array prioritizerConfig Name Description Schema scoreCoordinate required Configuration of the prioritizer and score source. scoreCoordinate weight optional Weight of the prioritizer score. The value must be within the range: [-10,10]. int32 scoreCoordinate Name Description Schema type required Type of the prioritizer score. Valid values are "BuiltIn" or "AddOn". string builtIn optional Name of a BuiltIn prioritizer from the following options: 1) Balance: Balance the decisions among the clusters. 2) Steady: Ensure the existing decision is stabilized. 3) ResourceAllocatableCPU & ResourceAllocatableMemory: Sort clusters based on the allocatable resources. 4) Spread: Spread the workload evenly to topologies. string addOn optional When type is AddOn , AddOn defines the resource name and score name. object toleration Name Description Schema key optional Taint key that the toleration applies to. Empty means match all of the taint keys. string operator optional Relationship of a key to the value. Valid operators are Exists and Equal . The default value is Equal . string value optional Taint value that matches the toleration. string effect optional Taint effect to match. Empty means match all of the taint effects. When specified, allowed values are NoSelect , PreferNoSelect , and NoSelectIfNew . string tolerationSeconds optional Length of time that a taint is tolerated, after which the taint is not tolerated. The default value is nil, which indicates that there is no time limit on how long the taint is tolerated. int64 1.16. PlacementDecisions API (v1beta1) 1.16.1. Overview This documentation is for the PlacementDecision resource for Red Hat Advanced Cluster Management for Kubernetes. The PlacementDecision resource has four possible requests: create, query, delete, and update. A PlacementDecision indicates a decision from a placement. A PlacementDecision uses the label cluster.open-cluster-management.io/placement={placement name} to reference a certain placement. 1.16.1.1. Version information Version : 2.12.0 1.16.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.16.1.3. Tags cluster.open-cluster-management.io : Create and manage PlacementDecisions. 1.16.2. Paths 1.16.2.1. Query all PlacementDecisions 1.16.2.1.1. Description Query your PlacementDecisions for more details. 1.16.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.16.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.16.2.1.4. Consumes placementdecision/yaml 1.16.2.1.5. Tags cluster.open-cluster-management.io 1.16.2.2. Create a PlacementDecision 1.16.2.2.1. Description Create a PlacementDecision. 1.16.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the PlacementDecision to be created. PlacementDecision 1.16.2.2.3. 
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.16.2.2.4. Consumes placementdecision/yaml 1.16.2.2.5. Tags cluster.open-cluster-management.io 1.16.2.2.6. Example HTTP request 1.16.2.2.6.1. Request body { "apiVersion" : "cluster.open-cluster-management.io/v1beta1", "kind" : "PlacementDecision", "metadata" : { "labels" : { "cluster.open-cluster-management.io/placement" : "placement1" }, "name" : "placement1-decision1", "namespace": "ns1" }, "status" : { } } 1.16.2.3. Query a single PlacementDecision 1.16.2.3.1. Description Query a single PlacementDecision for more details. 1.16.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path placementdecision_name required Name of the PlacementDecision that you want to query. string 1.16.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.16.2.3.4. Tags cluster.open-cluster-management.io 1.16.2.4. Delete a PlacementDecision 1.16.2.4.1. Description Delete a single PlacementDecision. 1.16.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path placementdecision_name required Name of the PlacementDecision that you want to delete. string 1.16.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.16.2.4.4. Tags cluster.open-cluster-management.io 1.16.3. Definitions 1.16.3.1. PlacementDecision Name Description Schema apiVersion required Versioned schema of PlacementDecision . string kind required String value that represents the REST resource. string metadata required Metadata of PlacementDecision . object status optional Current status of the PlacementDecision . PlacementStatus PlacementStatus Name Description Schema Decisions required Slice of decisions according to a placement. ClusterDecision array ClusterDecision Name Description Schema clusterName required Name of the ManagedCluster . string reason required Reason why the ManagedCluster is selected. string 1.17. DiscoveryConfig API 1.17.1. Overview This documentation is for the DiscoveryConfig resource for Red Hat Advanced Cluster Management for Kubernetes. The DiscoveryConfig resource has four possible requests: create, query, delete, and update. 1.17.1.1. Version information Version : 2.12.0 1.17.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.17.1.3. Tags discoveryconfigs.discovery.open-cluster-management.io : Create and manage DiscoveryConfigs 1.17.2. Paths 1.17.2.1. Create a DiscoveryConfig 1.17.2.1.1. Description Create a DiscoveryConfig. 1.17.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the DiscoveryConfig to be created. DiscoveryConfig 1.17.2.1.3. 
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.17.2.1.4. Consumes discoveryconfigs/yaml 1.17.2.1.5. Tags discoveryconfigs.discovery.open-cluster-management.io 1.17.2.1.5.1. Request body { "apiVersion": "apiextensions.k8s.io/v1", "kind": "CustomResourceDefinition", "metadata": { "annotations": { "controller-gen.kubebuilder.io/version": "v0.4.1" }, "creationTimestamp": null, "name": "discoveryconfigs.discovery.open-cluster-management.io" }, "spec": { "group": "discovery.open-cluster-management.io", "names": { "kind": "DiscoveryConfig", "listKind": "DiscoveryConfigList", "plural": "discoveryconfigs", "singular": "discoveryconfig" }, "scope": "Namespaced", "versions": [ { "name": "v1", "schema": { "openAPIV3Schema": { "description": "DiscoveryConfig is the Schema for the discoveryconfigs API", "properties": { "apiVersion": { "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", "type": "string" }, "kind": { "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", "type": "string" }, "metadata": { "type": "object" }, "spec": { "description": "DiscoveryConfigSpec defines the desired state of DiscoveryConfig", "properties": { "credential": { "description": "Credential is the secret containing credentials to connect to the OCM api on behalf of a user", "type": "string" }, "filters": { "description": "Sets restrictions on what kind of clusters to discover", "properties": { "lastActive": { "description": "LastActive is the last active in days of clusters to discover, determined by activity timestamp", "type": "integer" }, "openShiftVersions": { "description": "OpenShiftVersions is the list of release versions of OpenShift of the form \"<Major>.<Minor>\"", "items": { "description": "Semver represents a partial semver string with the major and minor version in the form \"<Major>.<Minor>\". For example: \"4.15\"", "pattern": "^(?:0|[1-9]\\d*)\\.(?:0|[1-9]\\d*)$", "type": "string" }, "type": "array" } }, "type": "object" } }, "required": [ "credential" ], "type": "object" }, "status": { "description": "DiscoveryConfigStatus defines the observed state of DiscoveryConfig", "type": "object" } }, "type": "object" } }, "served": true, "storage": true, "subresources": { "status": {} } } ] }, "status": { "acceptedNames": { "kind": "", "plural": "" }, "conditions": [], "storedVersions": [] } } 1.17.2.2. Query all DiscoveryConfigs 1.17.2.2.1. Description Query your discovery config operator for more details. 1.17.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.17.2.2.3.
Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.17.2.2.4. Consumes operator/yaml 1.17.2.2.5. Tags discoveryconfigs.discovery.open-cluster-management.io 1.17.2.3. Delete a DiscoveryConfig operator 1.17.2.3.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path application_name required Name of the Discovery Config operator that you want to delete. string Path namespace required Namespace that you want to use, for example, default. string 1.17.2.3.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.17.2.3.3. Tags discoveryconfigs.operator.open-cluster-management.io 1.17.3. Definitions 1.17.3.1. DiscoveryConfig Name Description Schema apiVersion required The versioned schema of the discoveryconfigs. string kind required String value that represents the REST resource. string metadata required Describes rules that define the resource. object spec required Defines the desired state of DiscoveryConfig. See List of specs 1.17.3.2. List of specs Name Description Schema credential required Credential is the secret containing credentials to connect to the OCM API on behalf of a user. string filters optional Sets restrictions on what kind of clusters to discover. See List of filters 1.17.3.3. List of filters Name Description Schema lastActive required LastActive is the last active in days of clusters to discover, determined by activity timestamp. integer openShiftVersions optional OpenShiftVersions is the list of release versions of OpenShift of the form "<Major>.<Minor>" object 1.18. DiscoveredCluster API 1.18.1. Overview This documentation is for the DiscoveredCluster resource for Red Hat Advanced Cluster Management for Kubernetes. The DiscoveredCluster resource has four possible requests: create, query, delete, and update. 1.18.1.1. Version information Version : 2.12.0 1.18.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.18.1.3. Tags discoveredclusters.discovery.open-cluster-management.io : Create and manage DiscoveredClusters 1.18.2. Paths 1.18.2.1. Create a DiscoveredCluster 1.18.2.1.1. Description Create a DiscoveredCluster. 1.18.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string Body body required Parameters describing the DiscoveredCluster to be created. DiscoveredCluster 1.18.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.18.2.1.4. Consumes discoveredclusters/yaml 1.18.2.1.5. Tags discoveredclusters.discovery.open-cluster-management.io 1.18.2.1.5.1. 
Request body { "apiVersion": "apiextensions.k8s.io/v1", "kind": "CustomResourceDefinition", "metadata": { "annotations": { "controller-gen.kubebuilder.io/version": "v0.4.1",\ }, "creationTimestamp": null, "name": "discoveredclusters.discovery.open-cluster-management.io", }, "spec": { "group": "discovery.open-cluster-management.io", "names": { "kind": "DiscoveredCluster", "listKind": "DiscoveredClusterList", "plural": "discoveredclusters", "singular": "discoveredcluster" }, "scope": "Namespaced", "versions": [ { "name": "v1", "schema": { "openAPIV3Schema": { "description": "DiscoveredCluster is the Schema for the discoveredclusters API", "properties": { "apiVersion": { "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", "type": "string" }, "kind": { "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", "type": "string" }, "metadata": { "type": "object" }, "spec": { "description": "DiscoveredClusterSpec defines the desired state of DiscoveredCluster", "properties": { "activityTimestamp": { "format": "date-time", "type": "string" }, "apiUrl": { "type": "string" }, "cloudProvider": { "type": "string" }, "console": { "type": "string" }, "creationTimestamp": { "format": "date-time", "type": "string" }, "credential": { "description": "ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, \"must refer only to types A and B\" or \"UID not honored\" or \"name must be restricted\". Those cannot be well described when embedded. 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. 
For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .", "properties": { "apiVersion": { "description": "API version of the referent.", "type": "string" }, "fieldPath": { "description": "If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: \"spec.containers{name}\" (where \"name\" refers to the name of the container that triggered the event) or if no container name is specified \"spec.containers[2]\" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.", "type": "string" }, "kind": { "description": "Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", "type": "string" }, "name": { "description": "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names", "type": "string" }, "namespace": { "description": "Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "type": "string" }, "resourceVersion": { "description": "Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency", "type": "string" }, "uid": { "description": "UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids", "type": "string" } }, "type": "object" }, "displayName": { "type": "string" }, "isManagedCluster": { "type": "boolean" }, "name": { "type": "string" }, "openshiftVersion": { "type": "string" }, "status": { "type": "string" }, "type": { "type": "string" } }, "required": [ "apiUrl", "displayName", "isManagedCluster", "name", "type" ], "type": "object" }, "status": { "description": "DiscoveredClusterStatus defines the observed state of DiscoveredCluster", "type": "object" } }, "type": "object" } }, "served": true, "storage": true, "subresources": { "status": {} } } ] }, "status": { "acceptedNames": { "kind": "", "plural": "" }, "conditions": [], "storedVersions": [] } } 1.18.2.2. Query all DiscoveredClusters 1.18.2.2.1. Description Query your discovered clusters operator for more details. 1.18.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.18.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.18.2.2.4. Consumes operator/yaml 1.18.2.2.5. Tags discoveredclusters.discovery.open-cluster-management.io 1.18.2.3. Delete a DiscoveredCluster operator 1.18.2.3.1. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path application_name required Name of the Discovered Cluster operator that you want to delete. 
string Path namespace required Namespace that you want to use, for example, default. string 1.18.2.3.2. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.18.2.3.3. Tags discoveredclusters.operator.open-cluster-management.io 1.18.3. Definitions 1.18.3.1. DiscoveredCluster Name Description Schema apiVersion required The versioned schema of the discoveredclusters. string kind required String value that represents the REST resource. string metadata required Describes rules that define the resource. object spec required DiscoveredClusterSpec defines the desired state of DiscoveredCluster. See List of specs 1.18.3.2. List of specs Name Description Schema activityTimestamp optional Discoveredclusters last available activity timestamp. metav1.time apiUrl required Discoveredclusters API URL endpoint. string cloudProvider optional Cloud provider of discoveredcluster. string console optional Discoveredclusters console URL endpoint. string creationTimestamp optional Discoveredclusters creation timestamp. metav1.time credential optional The reference to the credential from which the cluster was discovered. corev1.ObjectReference displayName required The display name of the discovered cluster. string isManagedCluster required If true, cluster is managed by ACM. boolean name required The name of the discoveredcluster. string openshiftVersion optional The OpenShift version of the discovered cluster. string status optional The status of the discovered cluster. string type required The OpenShift flavor (ex. OCP, ROSA, etc.). string 1.19. AddOnDeploymentConfig API (v1alpha1) 1.19.1. Overview This documentation is for the AddOnDeploymentConfig resource for Red Hat Advanced Cluster Management for Kubernetes. The AddOnDeploymentConfig resource has four possible requests: create, query, delete, and update. AddOnDeploymentConfig represents a deployment configuration for an add-on. 1.19.1.1. Version information Version : 2.12.0 1.19.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.19.1.3. Tags addon.open-cluster-management.io : Create and manage AddOnDeploymentConfigs 1.19.2. Paths 1.19.2.1. Query all AddOnDeploymentConfigs 1.19.2.1.1. Description Query your AddOnDeploymentConfigs for more details. 1.19.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.19.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.19.2.1.4. Consumes addondeploymentconfig/yaml 1.19.2.1.5. Tags addon.open-cluster-management.io 1.19.2.2. Create a AddOnDeploymentConfig 1.19.2.2.1. Description Create a AddOnDeploymentConfig. 1.19.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the AddOnDeploymentConfig binding to be created. AddOnDeploymentConfig 1.19.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.19.2.2.4. Consumes addondeploymentconfig/yaml 1.19.2.2.5. Tags addon.open-cluster-management.io 1.19.2.2.6. 
Example HTTP request 1.19.2.2.6.1. Request body { "apiVersion": "addon.open-cluster-management.io/v1alpha1", "kind": "AddOnDeploymentConfig", "metadata": { "name": "deploy-config", "namespace": "open-cluster-management-hub" }, "spec": { "nodePlacement": { "nodeSelector": { "node-dedicated": "acm-addon" }, "tolerations": [ { "effect": "NoSchedule", "key": "node-dedicated", "operator": "Equal", "value": "acm-addon" } ] } } } 1.19.2.3. Query a single AddOnDeploymentConfig 1.19.2.3.1. Description Query a single AddOnDeploymentConfig for more details. 1.19.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path addondeploymentconfig_name required Name of the AddOnDeploymentConfig that you want to query. string 1.19.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.19.2.3.4. Tags addon.open-cluster-management.io 1.19.2.4. Delete a AddOnDeploymentConfig 1.19.2.4.1. Description Delete a single AddOnDeploymentConfig. 1.19.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path addondeploymentconfig_name required Name of the AddOnDeploymentConfig that you want to delete. string 1.19.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.19.2.4.4. Tags addon.open-cluster-management.io 1.19.3. Definitions 1.19.3.1. AddOnDeploymentConfig Name Description Schema apiVersion required Versioned schema of the AddOnDeploymentConfig. string kind required String value that represents the REST resource. string metadata required Metadata of the AddOnDeploymentConfig. object spec required Specification of the AddOnDeploymentConfig. spec spec Name Description Schema customizedVariables optional A list of name-value variables for the current add-on deployment. The add-on implementation can use these variables to render its add-on deployment. customizedVariable array nodePlacement required Enables explicit control over the scheduling of the add-on agents on the managed cluster. nodePlacement customizedVariable Name Description Schema name required Name of this variable. string value optional Value of this variable. string nodePlacement Name Description Schema nodeSelector optional Define which nodes the pods are scheduled to run on. When the nodeSelector is empty, the nodeSelector selects all nodes. map[string]string tolerations optional Applied to pods and used to schedule pods to any taint that matches the <key,value,effect> toleration using the matching operator ( <operator> ). []corev1.Toleration 1.20. ClusterManagementAddOn API (v1alpha1) 1.20.1. Overview This documentation is for the ClusterManagementAddOn resource for Red Hat Advanced Cluster Management for Kubernetes. The ClusterManagementAddOn resource has four possible requests: create, query, delete, and update. ClusterManagementAddOn represents the registration of an add-on to the cluster manager. This resource allows the user to discover which add-on is available for the cluster manager and also provides metadata information about the add-on. 
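That add-on metadata is carried in the addOnMeta field of the specification. The following minimal sketch shows a registration that sets it; the display name and description values are illustrative only, and the full set of spec fields is listed in the definitions later in this section.
{
  "apiVersion": "addon.open-cluster-management.io/v1alpha1",
  "kind": "ClusterManagementAddOn",
  "metadata": {
    "name": "helloworld"
  },
  "spec": {
    "addOnMeta": {
      "displayName": "Hello World",
      "description": "Illustrative description of the helloworld add-on."
    }
  }
}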
This resource also provides a reference to ManagedClusterAddOn, the name of the ClusterManagementAddOn resource that is used for the namespace-scoped ManagedClusterAddOn resource. ClusterManagementAddOn is a cluster-scoped resource. 1.20.1.1. Version information Version : 2.12.0 1.20.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.20.1.3. Tags addon.open-cluster-management.io : Create and manage ClusterManagementAddOns 1.20.2. Paths 1.20.2.1. Query all ClusterManagementAddOns 1.20.2.1.1. Description Query your ClusterManagementAddOns for more details. 1.20.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.20.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.20.2.1.4. Consumes clustermanagementaddon/yaml 1.20.2.1.5. Tags addon.open-cluster-management.io 1.20.2.2. Create a ClusterManagementAddOn 1.20.2.2.1. Description Create a ClusterManagementAddOn. 1.20.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the ClusterManagementAddon binding to be created. ClusterManagementAddOn 1.20.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.20.2.2.4. Consumes clustermanagementaddon/yaml 1.20.2.2.5. Tags addon.open-cluster-management.io 1.20.2.2.6. Example HTTP request 1.20.2.2.6.1. Request body { "apiVersion": "addon.open-cluster-management.io/v1alpha1", "kind": "ClusterManagementAddOn", "metadata": { "name": "helloworld" }, "spec": { "supportedConfigs": [ { "defaultConfig": { "name": "deploy-config", "namespace": "open-cluster-management-hub" }, "group": "addon.open-cluster-management.io", "resource": "addondeploymentconfigs" } ] }, "status" : { } } 1.20.2.3. Query a single ClusterManagementAddOn 1.20.2.3.1. Description Query a single ClusterManagementAddOn for more details. 1.20.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clustermanagementaddon_name required Name of the ClusterManagementAddOn that you want to query. string 1.20.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.20.2.3.4. Tags addon.open-cluster-management.io 1.20.2.4. Delete a ClusterManagementAddOn 1.20.2.4.1. Description Delete a single ClusterManagementAddOn. 1.20.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path clustermanagementaddon_name required Name of the ClusterManagementAddOn that you want to delete. string 1.20.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.20.2.4.4. Tags addon.open-cluster-management.io 1.20.3. Definitions 1.20.3.1. 
ClusterManagementAddOn Name Description Schema apiVersion required Versioned schema of the ClusterManagementAddOn. string kind required String value that represents the REST resource. string metadata required Metadata of the ClusterManagementAddOn. object spec required Specification of the ClusterManagementAddOn. spec spec Name Description Schema addOnMeta optional AddOnMeta is a reference to the metadata information for the add-on. addOnMeta supportedConfigs optional SupportedConfigs is a list of configuration types supported by add-on. configMeta array addOnMeta Name Description Schema displayName optional Represents the name of add-on that is displayed. string description optional Represents the detailed description of the add-on. string configMeta Name Description Schema group optional Group of the add-on configuration. string resource required Resource of the add-on configuration. string defaultConfig required Represents the namespace and name of the default add-on configuration. This is where all add-ons have a same configuration. configReferent configReferent Name Description Schema namespace optional Namespace of the add-on configuration. If this field is not set, the configuration is cluster-scope. string name required Name of the add-on configuration. string 1.21. ManagedClusterAddOn API (v1alpha1) 1.21.1. Overview This documentation is for the ManagedClusterAddOn resource for Red Hat Advanced Cluster Management for Kubernetes. The ManagedClusterAddOn resource has four possible requests: create, query, delete, and update. ManagedClusterAddOn is the custom resource object which holds the current state of an add-on. This resource should be created in the ManagedCluster namespace. 1.21.1.1. Version information Version : 2.12.0 1.21.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.21.1.3. Tags addon.open-cluster-management.io : Create and manage ManagedClusterAddOns 1.21.2. Paths 1.21.2.1. Query all ManagedClusterAddOns 1.21.2.1.1. Description Query your ManagedClusterAddOns for more details. 1.21.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.21.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.21.2.1.4. Consumes managedclusteraddon/yaml 1.21.2.1.5. Tags addon.open-cluster-management.io 1.21.2.2. Create a ManagedClusterAddOn 1.21.2.2.1. Description Create a ManagedClusterAddOn. 1.21.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters that describe the ManagedClusterAddOn binding to be created. ManagedClusterAddOn 1.21.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.21.2.2.4. Consumes managedclusteraddon/yaml 1.21.2.2.5. Tags addon.open-cluster-management.io 1.21.2.2.6. Example HTTP request 1.21.2.2.6.1. 
Request body { "apiVersion": "addon.open-cluster-management.io/v1alpha1", "kind": "ManagedClusterAddOn", "metadata": { "name": "helloworld", "namespace": "cluster1" }, "spec": { "configs": [ { "group": "addon.open-cluster-management.io", "name": "cluster-deploy-config", "namespace": "open-cluster-management-hub", "resource": "addondeploymentconfigs" } ], "installNamespace": "default" }, "status" : { } } 1.21.2.3. Query a single ManagedClusterAddOn 1.21.2.3.1. Description Query a single ManagedClusterAddOn for more details. 1.21.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path managedclusteraddon_name required Name of the ManagedClusterAddOn that you want to query. string 1.21.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.21.2.3.4. Tags addon.open-cluster-management.io 1.21.2.4. Delete a ManagedClusterAddOn 1.21.2.4.1. Description Delete a single ManagedClusterAddOn. 1.21.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path managedclusteraddon_name required Name of the ManagedClusterAddOn that you want to delete. string 1.21.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.21.2.4.4. Tags addon.open-cluster-management.io 1.21.3. Definitions 1.21.3.1. ManagedClusterAddOn Name Description Schema apiVersion required Versioned schema of the ManagedClusterAddOn. string kind required String value that represents the REST resource. string metadata required Metadata of the ManagedClusterAddOn. object spec required Specification of the ManagedClusterAddOn. spec spec Name Description Schema installNamespace optional The namespace on the managed cluster to install the add-on agent. If it is not set, the open-cluster-management-agent-addon namespace is used to install the add-on agent. string configs optional A list of add-on configurations where the current add-on has its own configurations. addOnConfig array addOnConfig Name Description Schema group optional Group of the add-on configuration. string resource required Resource of the add-on configuration. string namespace optional Namespace of the add-on configuration. If this field is not set, the configuration is cluster-scope. string name required Name of the add-on configuration. string 1.22. ManagedClusterSet API (v1beta2) 1.22.1. Overview This documentation is for the ManagedClusterSet resource for Red Hat Advanced Cluster Management for Kubernetes. The ManagedClusterSet resource has four possible requests: create, query, delete, and update. ManagedClusterSet groups two or more managed clusters into a set that you can operate together. Managed clusters that belong to a set can have similar attributes, such as shared use purposes or the same deployment region. 1.22.1.1. Version information Version : 2.12.0 1.22.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.22.1.3. Tags cluster.open-cluster-management.io : Create and manage ManagedClusterSets 1.22.2. Paths 1.22.2.1. Query all managedclustersets 1.22.2.1.1. Description Query your managedclustersets for more details. 1.22.2.1.2. 
Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default. string 1.22.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.22.2.1.4. Consumes managedclusterset/yaml 1.22.2.1.5. Tags cluster.open-cluster-management.io 1.22.2.2. Create a managedclusterset 1.22.2.2.1. Description Create a managedclusterset. 1.22.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default . string Body body required Parameters describing the managedclusterset to be created. Managedclusterset 1.22.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.22.2.2.4. Consumes managedclusterset/yaml 1.22.2.2.5. Tags cluster.open-cluster-management.io 1.22.2.2.6. Example HTTP request 1.22.2.2.6.1. Request body { "apiVersion" : "cluster.open-cluster-management.io/v1beta2", "kind" : "ManagedClusterSet", "metadata" : { "name" : "example-clusterset", }, "spec": { }, "status" : { } } 1.22.2.3. Query a single managedclusterset 1.22.2.3.1. Description Query a single managedclusterset for more details. 1.22.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default . string Path managedclusterset_name required Name of the managedclusterset that you want to query. string 1.22.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.22.2.3.4. Tags cluster.open-cluster-management.io 1.22.2.4. Delete a managedclusterset 1.22.2.4.1. Description Delete a single managedclusterset. 1.22.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path namespace required Namespace that you want to use, for example, default . string Path managedclusterset_name required Name of the managedclusterset that you want to delete. string 1.22.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.22.2.4.4. Tags cluster.open-cluster-management.io 1.22.3. Definitions 1.22.3.1. ManagedClusterSet Name Description Schema apiVersion required Versioned schema of the ManagedClusterSet . string kind required String value that represents the REST resource. string metadata required Metadata of the ManagedClusterSet . object spec required Specification of the ManagedClusterSet . spec 1.23. KlusterletConfig API (v1alpha1) 1.23.1. Overview This documentation is for the KlusterletConfig resource for Red Hat Advanced Cluster Management for Kubernetes. The KlusterletConfig resource has four possible requests: create, query, delete, and update. 
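The example request body later in this section shows the CustomResourceDefinition itself; an individual KlusterletConfig resource that conforms to that schema might look like the following minimal sketch. The node selector, toleration, and proxy values are placeholders, not defaults of the API.
{
  "apiVersion": "config.open-cluster-management.io/v1alpha1",
  "kind": "KlusterletConfig",
  "metadata": {
    "name": "example-klusterletconfig"
  },
  "spec": {
    "nodePlacement": {
      "nodeSelector": {
        "node-role.kubernetes.io/infra": ""
      },
      "tolerations": [
        {
          "key": "node-role.kubernetes.io/infra",
          "operator": "Exists",
          "effect": "NoSchedule"
        }
      ]
    },
    "hubKubeAPIServerProxyConfig": {
      "httpsProxy": "https://proxy.example.com:3128"
    }
  }
}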
KlusterletConfig contains configuration information about a klusterlet, such as nodeSelector , tolerations , and pullSecret . KlusterletConfig is a cluster-scoped resource and only works on klusterlet pods in the open-cluster-management-agent namespace. KlusterletConfig does not affect add-on deployment configurations. 1.23.1.1. Version information Version : 2.12.0 1.23.1.2. URI scheme BasePath : /kubernetes/apis Schemes : HTTPS 1.23.1.3. Tags config.open-cluster-management.io : Create and manage KlusterletConfig 1.23.2. Paths 1.23.2.1. Query all KlusterletConfig 1.23.2.1.1. Description Query your KlusterletConfigs for more details. 1.23.2.1.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string 1.23.2.1.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.23.2.1.4. Consumes klusterletconfig/yaml 1.23.2.1.5. Tags config.open-cluster-management.io 1.23.2.2. Create a KlusterletConfig 1.23.2.2.1. Description Create a KlusterletConfig. 1.23.2.2.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Body body required Parameters describing the KlusterletConfig to be created. KlusterletConfig 1.23.2.2.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.23.2.2.4. Consumes klusterletconfig/yaml 1.23.2.2.5. Tags config.open-cluster-management.io 1.23.2.2.6. Example HTTP request 1.23.2.2.6.1. Request body { "apiVersion": "apiextensions.k8s.io/v1", "kind": "CustomResourceDefinition", "metadata": { "annotations": { "controller-gen.kubebuilder.io/version": "v0.7.0" }, "creationTimestamp": null, "name": "klusterletconfigs.config.open-cluster-management.io" }, "spec": { "group": "config.open-cluster-management.io", "names": { "kind": "KlusterletConfig", "listKind": "KlusterletConfigList", "plural": "klusterletconfigs", "singular": "klusterletconfig" }, "preserveUnknownFields": false, "scope": "Cluster", "versions": [ { "name": "v1alpha1", "schema": { "openAPIV3Schema": { "description": "KlusterletConfig contains the configuration of a klusterlet including the upgrade strategy, config overrides, proxy configurations etc.", "properties": { "apiVersion": { "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources", "type": "string" }, "kind": { "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", "type": "string" }, "metadata": { "type": "object" }, "spec": { "description": "Spec defines the desired state of KlusterletConfig", "properties": { "hubKubeAPIServerProxyConfig": { "description": "HubKubeAPIServerProxyConfig holds proxy settings for connections between klusterlet/add-on agents on the managed cluster and the kube-apiserver on the hub cluster. Empty means no proxy settings is available.", "properties": { "caBundle": { "description": "CABundle is a CA certificate bundle to verify the proxy server. It will be ignored if only HTTPProxy is set; And it is required when HTTPSProxy is set and self signed CA certificate is used by the proxy server.", "format": "byte", "type": "string" }, "httpProxy": { "description": "HTTPProxy is the URL of the proxy for HTTP requests", "type": "string" }, "httpsProxy": { "description": "HTTPSProxy is the URL of the proxy for HTTPS requests HTTPSProxy will be chosen if both HTTPProxy and HTTPSProxy are set.", "type": "string" } }, "type": "object" }, "nodePlacement": { "description": "NodePlacement enables explicit control over the scheduling of the agent components. If the placement is nil, the placement is not specified, it will be omitted. If the placement is an empty object, the placement will match all nodes and tolerate nothing.", "properties": { "nodeSelector": { "additionalProperties": { "type": "string" }, "description": "NodeSelector defines which Nodes the Pods are scheduled on. The default is an empty list.", "type": "object" }, "tolerations": { "description": "Tolerations is attached by pods to tolerate any taint that matches the triple <key,value,effect> using the matching operator <operator>. The default is an empty list.", "items": { "description": "The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.", "properties": { "effect": { "description": "Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.", "type": "string" }, "key": { "description": "Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.", "type": "string" }, "operator": { "description": "Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.", "type": "string" }, "tolerationSeconds": { "description": "TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.", "format": "int64", "type": "integer" }, "value": { "description": "Value is the taint value the toleration matches to. 
If the operator is Exists, the value should be empty, otherwise just a regular string.", "type": "string" } }, "type": "object" }, "type": "array" } }, "type": "object" }, "pullSecret": { "description": "PullSecret is the name of image pull secret.", "properties": { "apiVersion": { "description": "API version of the referent.", "type": "string" }, "fieldPath": { "description": "If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: \"spec.containers{name}\" (where \"name\" refers to the name of the container that triggered the event) or if no container name is specified \"spec.containers[2]\" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.", "type": "string" }, "kind": { "description": "Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds", "type": "string" }, "name": { "description": "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names", "type": "string" }, "namespace": { "description": "Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/", "type": "string" }, "resourceVersion": { "description": "Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency", "type": "string" }, "uid": { "description": "UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids", "type": "string" } }, "type": "object" }, "registries": { "description": "Registries includes the mirror and source registries. The source registry will be replaced by the Mirror.", "items": { "properties": { "mirror": { "description": "Mirror is the mirrored registry of the Source. Will be ignored if Mirror is empty.", "type": "string" }, "source": { "description": "Source is the source registry. All image registries will be replaced by Mirror if Source is empty.", "type": "string" } }, "required": [ "mirror" ], "type": "object" }, "type": "array" } }, "type": "object" }, "status": { "description": "Status defines the observed state of KlusterletConfig", "type": "object" } }, "type": "object" } }, "served": true, "storage": true, "subresources": { "status": {} } } ] }, "status": { "acceptedNames": { "kind": "", "plural": "" }, "conditions": [], "storedVersions": [] } } 1.23.2.3. Query a single KlusterletConfig 1.23.2.3.1. Description Query a single KlusterletConfig for more details. 1.23.2.3.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path klusterletconfig_name required Name of the KlusterletConfig that you want to query. string 1.23.2.3.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.23.2.3.4. Tags config.open-cluster-management.io 1.23.2.4. Delete a KlusterletConfig 1.23.2.4.1. 
Description Delete a single KlusterletConfig. 1.23.2.4.2. Parameters Type Name Description Schema Header COOKIE required Authorization: Bearer {ACCESS_TOKEN} ; ACCESS_TOKEN is the user access token. string Path klusterletconfig_name required Name of the KlusterletConfig that you want to delete. string 1.23.2.4.3. Responses HTTP Code Description Schema 200 Success No Content 403 Access forbidden No Content 404 Resource not found No Content 500 Internal service error No Content 503 Service unavailable No Content 1.23.2.4.4. Tags config.open-cluster-management.io 1.23.3. Definitions 1.23.3.1. KlusterletConfig Name Description Schema apiVersion required Versioned schema of the KlusterletConfig. string kind required String value that represents the REST resource. string metadata required Metadata of the KlusterletConfig. object spec required Specification of the KlusterletConfig. spec spec Name Description Schema registries optional Includes the mirror and source registries. The source registry is replaced by the mirror. registry pullSecret optional The name of the image pull secret. object nodePlacement required Enables explicit control over the scheduling of the klusterlet agent components on the managed cluster. nodePlacement hubKubeAPIServerProxyConfig required Contains proxy settings for the connections between the klusterlet or add-on agents on the managed cluster and the kube-apiserver on the hub cluster. Empty means no proxy setting is available. kubeAPIServerProxyConfig nodePlacement Name Description Schema nodeSelector optional Defines which nodes the pods are scheduled to run on. When the nodeSelector is empty, the nodeSelector selects all nodes. map[string]string tolerations optional Attached to pods to tolerate any taint that matches the <key,value,effect> triple using the matching operator ( <operator> ). []corev1.Toleration kubeAPIServerProxyConfig Name Description Schema caBundle optional A CA certificate bundle to verify the proxy server. The bundle is ignored if only HTTPProxy is set. The bundle is required when HTTPSProxy is set and a self-signed CA certificate is used by the proxy server. []byte httpProxy optional The URL of the proxy for HTTP requests. string httpsProxy optional The URL of the proxy for HTTPS requests. HTTPSProxy is chosen if both HTTPProxy and HTTPSProxy are set. string 1.24. Policy compliance history (Technology Preview) (Deprecated) 1.24.1. Overview The policy compliance history API is an optional Technology Preview feature that provides long-term storage of Red Hat Advanced Cluster Management for Kubernetes policy compliance events in a queryable format. You can use the API to get additional details, such as the spec field, to audit and troubleshoot your policy, and to get compliance events when a policy is disabled or removed from a cluster. The policy compliance history API can also generate a comma-separated values (CSV) spreadsheet of policy compliance events to help you with auditing and troubleshooting. 1.24.1.1. Version information Version : 2.12.0 1.24.2. API Endpoints 1.24.2.1. Listing policy compliance events /api/v1/compliance-events This lists all policy compliance events that you have access to by default.
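The following is a minimal example request against this endpoint, shown only as an illustration. It assumes that the compliance history API is reachable through a route whose hostname is stored in a COMPLIANCE_HISTORY_API_HOST variable and that TOKEN holds the OpenShift token described in the Authentication and Authorization section; both variable names are placeholders, and the filters are among the optional query parameters listed later in this section:
TOKEN=$(oc whoami --show-token)
curl -H "Authorization: Bearer ${TOKEN}" "https://${COMPLIANCE_HISTORY_API_HOST}/api/v1/compliance-events?cluster.name=cluster1&per_page=20"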
The response format is as follows and is sorted by event.timestamp in descending order by default: { "data": [ { "id": 2, "cluster": { "name": "cluster1", "cluster_id": "215ce184-8dee-4cab-b99b-1f8f29dff611" }, "parent_policy": { "id": 3, "name": "configure-custom-app", "namespace": "policies", "categories": ["CM Configuration Management"], "controls": ["CM-2 Baseline Configuration"], "standards": ["NIST SP 800-53"] }, "policy": { "apiGroup": "policy.open-cluster-management.io", "id": 2, "kind": "ConfigurationPolicy", "name": "configure-custom-app", "namespace": "", // Only shown with `?include_spec` "spec": {} }, "event": { "compliance": "NonCompliant", "message": "configmaps [app-data] not found in namespace default", "timestamp": "2023-07-19T18:25:43.511Z", "metadata": {} } }, { "id": 1, "cluster": { "name": "cluster2", "cluster_id": "415ce234-8dee-4cab-b99b-1f8f29dff461" }, "parent_policy": { "id": 3, "name": "configure-custom-app", "namespace": "policies", "categories": ["CM Configuration Management"], "controls": ["CM-2 Baseline Configuration"], "standards": ["NIST SP 800-53"] }, "policy": { "apiGroup": "policy.open-cluster-management.io", "id": 4, "kind": "ConfigurationPolicy", "name": "configure-custom-app", "namespace": "", // Only shown with `?include_spec` "spec": {} }, "event": { "compliance": "Compliant", "message": "configmaps [app-data] found as specified in namespace default", "timestamp": "2023-07-19T18:25:41.523Z", "metadata": {} } } ], "metadata": { "page": 1, "pages": 7, "per_page": 20, "total": 123 } } The following optional query parameters are accepted. Notice that parameters without descriptions just filter on the field they reference. The parameter value null represents no value. Additionally, multiple values can be specified with commas. For example, ?cluster.name=cluster1,cluster2 for "or" filtering. Commas can be escaped with \ , if necessary. Table 1.1. Table of query parameters Query argument Description cluster.cluster_id cluster.name direction The direction to sort by. This defaults to desc , which represents descending order. The supported values are asc and desc . event.compliance event.message_includes A filter for compliance messages that include the input string. Only a single value is supported. event.message_like A SQL LIKE filter for compliance messages. The percent sign ( % ) represents a wildcard of zero or more characters. The underscore sign ( _ ) represents a wildcard of a single character. For example %configmaps [%my-configmap%]% matches any configuration policy compliance message that refers to the config map my-configmap . event.reported_by event.timestamp event.timestamp_after An RFC 3339 timestamp to indicate that only compliance events after this time should be shown. For example, 2024-02-28T16:32:57Z . event.timestamp_before An RFC 3339 timestamp to indicate that only compliance events before this time should be shown. For example, 2024-02-28T16:32:57Z . id include_spec A flag to include the spec field of the policy in the return value. This is not set by default. page The page number in the query. This defaults to 1 . parent_policy.categories parent_policy.controls parent_policy.id parent_policy.name parent_policy.namespace parent_policy.standards per_page The number of compliance events returned per page. This defaults to 20 and cannot be larger than 100 . policy.apiGroup policy.id policy.kind policy.name policy.namespace policy.severity sort The field to sort by. This defaults to event.timestamp .
All fields except policy.spec and event.metadata are sortable by using dot notation. To specify multiple sort options, use commas such as ?sort=policy.name,policy.namespace . 1.24.2.2. Selecting a single policy compliance event /api/v1/compliance-events/<id> You can select a single policy compliance event by specifying its database ID. For example, /api/v1/compliance-events/1 selects the compliance event with the ID of 1. The format of the return value is the following JSON: { "id": 1, "cluster": { "name": "cluster2", "cluster_id": "415ce234-8dee-4cab-b99b-1f8f29dff461" }, "parent_policy": { "id": 2, "name": "etcd-encryption", "namespace": "policies", "categories": ["CM Configuration Management"], "controls": ["CM-2 Baseline Configuration"], "standards": ["NIST SP 800-53"] }, "policy": { "apiGroup": "policy.open-cluster-management.io", "id": 4, "kind": "ConfigurationPolicy", "name": "etcd-encryption", "namespace": "", "spec": {} }, "event": { "compliance": "Compliant", "message": "configmaps [app-data] found as specified in namespace default", "timestamp": "2023-07-19T18:25:41.523Z", "metadata": {} } } 1.24.2.3. Generating a spreadsheet /api/v1/reports/compliance-events You can generate a comma-separated values (CSV) spreadsheet of compliance events for auditing and troubleshooting. It outputs the same data and accepts the same query arguments as the /api/v1/compliance-events API endpoint. By default there is no per_page limitation set and there is no maximum for the per_page query argument. All the CSV headers are the same as the fields returned by the /api/v1/compliance-events API endpoint, with underscores separating nested JSON objects. For example, the event timestamp has a header of event_timestamp . 1.24.3. Authentication and Authorization The policy compliance history API utilizes the OpenShift instance used by the Red Hat Advanced Cluster Management hub cluster for authentication and authorization. You must provide your OpenShift token in the Authorization header of the HTTPS request. To find your token, run the following command: oc whoami --show-token 1.24.3.1. Viewing compliance events To view the compliance events for a managed cluster, you need permission to perform the get verb on the ManagedCluster object on the Red Hat Advanced Cluster Management hub cluster. For example, to view the compliance events of the local-cluster managed cluster, you might use the open-cluster-management:view:local-cluster ClusterRole or create your own ClusterRole as shown in the following example: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: local-cluster-view rules: - apiGroups: - cluster.open-cluster-management.io resources: - managedclusters resourceNames: - local-cluster verbs: - get To verify your access to a particular managed cluster, use the oc auth can-i command. For example, to check if you have access to the local-cluster managed cluster, run the following command: 1.24.3.2. Recording a compliance event Users or service accounts with patch verb access on the policies.policy.open-cluster-management.io/status resource in the corresponding managed cluster namespace have access to record policy compliance events. The governance-policy-framework pod on managed clusters utilizes the open-cluster-management-compliance-history-api-recorder service account in the corresponding managed cluster namespace on the Red Hat Advanced Cluster Management hub cluster to record compliance events. Each service account has the open-cluster-management:compliance-history-api-recorder ClusterRole bound to the managed cluster namespace.
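As an illustration only, and assuming that the oc auth can-i command in your environment supports the --subresource and --as options, you can check whether the recorder service account for a managed cluster namespace, for example cluster1, has the patch access on the policy status that is required to record compliance events:
oc auth can-i patch policies.policy.open-cluster-management.io --subresource=status -n cluster1 --as=system:serviceaccount:cluster1:open-cluster-management-compliance-history-api-recorder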
Restrict user and service account patch verb access to the policy status to ensure the trustworthiness of the data stored in the policy compliance history API.
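The following is a minimal end-to-end sketch of downloading the CSV spreadsheet described in the Generating a spreadsheet section. The COMPLIANCE_HISTORY_API_HOST variable and the output file name are placeholder assumptions; the filters are the same optional query parameters that the /api/v1/compliance-events endpoint accepts:
TOKEN=$(oc whoami --show-token)
curl -H "Authorization: Bearer ${TOKEN}" -o compliance-events.csv "https://${COMPLIANCE_HISTORY_API_HOST}/api/v1/reports/compliance-events?cluster.name=cluster1&event.compliance=NonCompliant"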
[ "GET /cluster.open-cluster-management.io/v1/managedclusters", "POST /cluster.open-cluster-management.io/v1/managedclusters", "{ \"apiVersion\" : \"cluster.open-cluster-management.io/v1\", \"kind\" : \"ManagedCluster\", \"metadata\" : { \"labels\" : { \"vendor\" : \"OpenShift\" }, \"name\" : \"cluster1\" }, \"spec\": { \"hubAcceptsClient\": true, \"managedClusterClientConfigs\": [ { \"caBundle\": \"test\", \"url\": \"https://test.com\" } ] }, \"status\" : { } }", "GET /cluster.open-cluster-management.io/v1/managedclusters/{cluster_name}", "DELETE /cluster.open-cluster-management.io/v1/managedclusters/{cluster_name}", "DELETE /hive.openshift.io/v1/{cluster_name}/clusterdeployments/{cluster_name}", "\"^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?USD\"", "GET /siteconfig.open-cluster-management.io/v1alpha1/{clusterinstance_namespace}/{clusterinstance_name}", "POST /siteconfig.open-cluster-management.io/v1alpha1/<clusterinstance_namespace>/<clusterinstance_name>", "{ \"apiVersion\": \"siteconfig.open-cluster-management.io/v1alpha1\", \"kind\": \"ClusterInstance\", \"metadata\": { \"name\": \"site-sno-du-1\", \"namespace\": \"site-sno-du-1\" }, \"spec\": { \"baseDomain\": \"example.com\", \"pullSecretRef\": { \"name\": \"pullSecretName\" }, \"sshPublicKey\": \"ssh-rsa \", \"clusterName\": \"site-sno-du-1\", \"proxy\": { \"noProxys\": \"foobar\" }, \"caBundleRef\": { \"name\": \"my-bundle-ref\" }, \"extraManifestsRefs\": [ { \"name\": \"foobar1\" }, { \"name\": \"foobar2\" } ], \"networkType\": \"OVNKubernetes\", \"installConfigOverrides\": \"{\\\"capabilities\\\":{\\\"baselineCapabilitySet\\\": \\\"None\\\", \\\"additionalEnabledCapabilities\\\": [ \\\"marketplace\\\", \\\"NodeTuning\\\" ] }}\", \"extraLabels\": { \"ManagedCluster\": { \"group-du-sno\": \"test\", \"common\": \"true\", \"sites\": \"site-sno-du-1\" } }, \"clusterNetwork\": [ { \"cidr\": \"203.0.113.0/24\", \"hostPrefix\": 23 } ], \"machineNetwork\": [ { \"cidr\": \"203.0.113.0/24\" } ], \"serviceNetwork\": [ { \"cidr\": \"203.0.113.0/24\" } ], \"additionalNTPSources\": [ \"NTP.server1\", \"198.51.100.100\" ], \"ignitionConfigOverride\": \"{\\\"ignition\\\": {\\\"version\\\": \\\"3.1.0\\\"}, \\\"storage\\\": {\\\"files\\\": [{\\\"path\\\": \\\"/etc/containers/registries.conf\\\", \\\"overwrite\\\": true, \\\"contents\\\": {\\\"source\\\": \\\"data:text/plain;base64,foobar==\\\"}}]}}\", \"diskEncryption\": { \"type\": \"nbde\", \"tang\": [ { \"url\": \"http://192.0.2.5:7500\", \"thumbprint\": \"1234567890\" } ] }, \"clusterType\": \"SNO\", \"templateRefs\": [ { \"name\": \"ai-cluster-templates-v1\", \"namespace\": \"rhacm\" } ], \"nodes\": [ { \"hostName\": \"node1\", \"role\": \"master\", \"templateRefs\": [ { \"name\": \"ai-node-templates-v1\", \"namespace\": \"rhacm\" } ], \"ironicInspect\": \"\", \"bmcAddress\": \"idrac-virtualmedia+https://203.0.113.100/redfish/v1/Systems/System.Embedded.1\", \"bmcCredentialsName\": { \"name\": \"<bmcCredentials_secre_name>\" }, \"bootMACAddress\": \"00:00:5E:00:53:00\", \"bootMode\": \"UEFI\", \"installerArgs\": \"[\\\"--append-karg\\\", \\\"nameserver=8.8.8.8\\\", \\\"-n\\\"]\", \"ignitionConfigOverride\": \"{\\\"ignition\\\": {\\\"version\\\": \\\"3.1.0\\\"}, \\\"storage\\\": {\\\"files\\\": [{\\\"path\\\": \\\"/etc/containers/registries.conf\\\", \\\"overwrite\\\": true, \\\"contents\\\": {\\\"source\\\": \\\"data:text/plain;base64,foobar==\\\"}}]}}\", \"nodeNetwork\": { \"interfaces\": [ { \"name\": \"eno1\", \"macAddress\": \"00:00:5E:00:53:01\" } ], \"config\": { 
\"interfaces\": [ { \"name\": \"eno1\", \"type\": \"ethernet\", \"ipv4\": { \"enabled\": true, \"dhcp\": false, \"address\": [ { \"ip\": \"192.0.2.1\", \"prefix-length\": 24 } ] }, \"ipv6\": { \"enabled\": true, \"dhcp\": false, \"address\": [ { \"ip\": \"2001:0DB8:0:0:0:0:0:1\", \"prefix-length\": 32 } ] } } ], \"dns-resolver\": { \"config\": { \"server\": [ \"198.51.100.1\" ] } }, \"routes\": { \"config\": [ { \"destination\": \"0.0.0.0/0\", \"next-hop-address\": \"203.0.113.255\", \"next-hop-interface\": \"eno1\", \"table-id\": 254 } ] } } } } ] } }", "GET /siteconfig.open-cluster-management.io/v1alpha1/<clusterinstance_namespace>/<clusterinstance_name>", "extraAnnotations: ClusterDeployment: myClusterAnnotation: success", "extraLabels: ManagedCluster: common: \"true\" label-a : \"value-a\"", "extraAnnotations: BareMetalHost: myNodeAnnotation: success", "extraLabels: ManagedCluster: common: \"true\" label-a : \"value-a\"", "GET /cluster.open-cluster-management.io/v1beta2/managedclustersets", "POST /cluster.open-cluster-management.io/v1beta2/managedclustersets", "{ \"apiVersion\": \"cluster.open-cluster-management.io/v1beta2\", \"kind\": \"ManagedClusterSet\", \"metadata\": { \"name\": \"clusterset1\" }, \"spec\": { \"clusterSelector\": { \"selectorType\": \"ExclusiveClusterSetLabel\" } }, \"status\": {} }", "GET /cluster.open-cluster-management.io/v1beta2/managedclustersets/{clusterset_name}", "DELETE /cluster.open-cluster-management.io/v1beta2/managedclustersets/{clusterset_name}", "GET /cluster.open-cluster-management.io/v1beta2/namespaces/{namespace}/managedclustersetbindings", "POST /cluster.open-cluster-management.io/v1beta2/namespaces/{namespace}/managedclustersetbindings", "{ \"apiVersion\" : \"cluster.open-cluster-management.io/v1beta2\", \"kind\" : \"ManagedClusterSetBinding\", \"metadata\" : { \"name\" : \"clusterset1\", \"namespace\" : \"ns1\" }, \"spec\": { \"clusterSet\": \"clusterset1\" }, \"status\" : { } }", "GET /cluster.open-cluster-management.io/v1beta2/namespaces/{namespace}/managedclustersetbindings/{clustersetbinding_name}", "DELETE /cluster.open-cluster-management.io/v1beta2/managedclustersetbindings/{clustersetbinding_name}", "GET /managedclusters.clusterview.open-cluster-management.io", "LIST /managedclusters.clusterview.open-cluster-management.io", "{ \"apiVersion\" : \"clusterview.open-cluster-management.io/v1alpha1\", \"kind\" : \"ClusterView\", \"metadata\" : { \"name\" : \"<user_ID>\" }, \"spec\": { }, \"status\" : { } }", "WATCH /managedclusters.clusterview.open-cluster-management.io", "GET /managedclustersets.clusterview.open-cluster-management.io", "LIST /managedclustersets.clusterview.open-cluster-management.io", "WATCH /managedclustersets.clusterview.open-cluster-management.io", "POST /apps.open-cluster-management.io/v1/namespaces/{namespace}/channels", "{ \"apiVersion\": \"apps.open-cluster-management.io/v1\", \"kind\": \"Channel\", \"metadata\": { \"name\": \"sample-channel\", \"namespace\": \"default\" }, \"spec\": { \"configMapRef\": { \"kind\": \"configmap\", \"name\": \"bookinfo-resource-filter-configmap\" }, \"pathname\": \"https://charts.helm.sh/stable\", \"type\": \"HelmRepo\" } }", "GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/channels", "GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/channels/{channel_name}", "DELETE /apps.open-cluster-management.io/v1/namespaces/{namespace}/channels/{channel_name}", "POST /apps.open-cluster-management.io/v1/namespaces/{namespace}/subscriptions", "{ \"apiVersion\" : 
\"apps.open-cluster-management.io/v1\", \"kind\" : \"Subscription\", \"metadata\" : { \"name\" : \"sample_subscription\", \"namespace\" : \"default\", \"labels\" : { \"app\" : \"sample_subscription-app\" }, \"annotations\" : { \"apps.open-cluster-management.io/git-path\" : \"apps/sample/\", \"apps.open-cluster-management.io/git-branch\" : \"sample_branch\" } }, \"spec\" : { \"channel\" : \"channel_namespace/sample_channel\", \"packageOverrides\" : [ { \"packageName\" : \"my-sample-application\", \"packageAlias\" : \"the-sample-app\", \"packageOverrides\" : [ { \"path\" : \"spec\", \"value\" : { \"persistence\" : { \"enabled\" : false, \"useDynamicProvisioning\" : false }, \"license\" : \"accept\", \"tls\" : { \"hostname\" : \"my-mcm-cluster.icp\" }, \"sso\" : { \"registrationImage\" : { \"pullSecret\" : \"hub-repo-docker-secret\" } } } } ] } ], \"placement\" : { \"placementRef\" : { \"kind\" : \"PlacementRule\", \"name\" : \"demo-clusters\" } } } }", "GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/subscriptions", "GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/subscriptions/{subscription_name}", "DELETE /apps.open-cluster-management.io/v1/namespaces/{namespace}/subscriptions/{subscription_name}", "POST /apps.open-cluster-management.io/v1/namespaces/{namespace}/placementrules", "{ \"apiVersion\" : \"apps.open-cluster-management.io/v1\", \"kind\" : \"PlacementRule\", \"metadata\" : { \"name\" : \"towhichcluster\", \"namespace\" : \"ns-sub-1\" }, \"spec\" : { \"clusterConditions\" : [ { \"type\": \"ManagedClusterConditionAvailable\", \"status\": \"True\" } ], \"clusterSelector\" : { } } }", "GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/placementrules", "GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/placementrules/{placementrule_name}", "DELETE /apps.open-cluster-management.io/v1/namespaces/{namespace}/placementrules/{placementrule_name}", "POST /app.k8s.io/v1beta1/namespaces/{namespace}/applications", "{ \"apiVersion\" : \"app.k8s.io/v1beta1\", \"kind\" : \"Application\", \"metadata\" : { \"labels\" : { \"app\" : \"nginx-app-details\" }, \"name\" : \"nginx-app-3\", \"namespace\" : \"ns-sub-1\" }, \"spec\" : { \"componentKinds\" : [ { \"group\" : \"apps.open-cluster-management.io\", \"kind\" : \"Subscription\" } ] }, \"selector\" : { \"matchLabels\" : { \"app\" : \"nginx-app-details\" } }, \"status\" : { } }", "GET /app.k8s.io/v1beta1/namespaces/{namespace}/applications", "GET /app.k8s.io/v1beta1/namespaces/{namespace}/applications/{application_name}", "DELETE /app.k8s.io/v1beta1/namespaces/{namespace}/applications/{application_name}", "POST /apps.open-cluster-management.io/v1/namespaces/{namespace}/helmreleases", "{ \"apiVersion\" : \"apps.open-cluster-management.io/v1\", \"kind\" : \"HelmRelease\", \"metadata\" : { \"name\" : \"nginx-ingress\", \"namespace\" : \"default\" }, \"repo\" : { \"chartName\" : \"nginx-ingress\", \"source\" : { \"helmRepo\" : { \"urls\" : [ \"https://kubernetes-charts.storage.googleapis.com/nginx-ingress-1.26.0.tgz\" ] }, \"type\" : \"helmrepo\" }, \"version\" : \"1.26.0\" }, \"spec\" : { \"defaultBackend\" : { \"replicaCount\" : 3 } } }", "GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/helmreleases", "GET /apps.open-cluster-management.io/v1/namespaces/{namespace}/helmreleases/{helmrelease_name}", "DELETE /apps.open-cluster-management.io/v1/namespaces/{namespace}/helmreleases/{helmrelease_name}", "POST 
/policy.open-cluster-management.io/v1/v1alpha1/namespaces/{namespace}/policies/{policy_name}", "{ \"apiVersion\": \"policy.open-cluster-management.io/v1\", \"kind\": \"Policy\", \"metadata\": { \"name\": \"test-policy-swagger\", }, \"spec\": { \"remediationAction\": \"enforce\", \"namespaces\": { \"include\": [ \"default\" ], \"exclude\": [ \"kube*\" ] }, \"policy-templates\": { \"kind\": \"ConfigurationPolicy\", \"apiVersion\": \"policy.open-cluster-management.io/v1\", \"metadata\": { \"name\": \"test-role\" }, \"spec\" : { \"object-templates\": { \"complianceType\": \"musthave\", \"metadataComplianceType\": \"musthave\", \"objectDefinition\": { \"apiVersion\": \"rbac.authorization.k8s.io/v1\", \"kind\": \"Role\", \"metadata\": { \"name\": \"role-policy\", }, \"rules\": [ { \"apiGroups\": [ \"extensions\", \"apps\" ], \"resources\": [ \"deployments\" ], \"verbs\": [ \"get\", \"list\", \"watch\", \"delete\" ] }, { \"apiGroups\": [ \"core\" ], \"resources\": [ \"pods\" ], \"verbs\": [ \"create\", \"update\", \"patch\" ] }, { \"apiGroups\": [ \"core\" ], \"resources\": [ \"secrets\" ], \"verbs\": [ \"get\", \"watch\", \"list\", \"create\", \"delete\", \"update\", \"patch\" ], }, ], }, }, }, },", "GET /policy.open-cluster-management.io/v1/namespaces/{namespace}/policies/{policy_name}", "GET /policy.open-cluster-management.io/v1/namespaces/{namespace}/policies/{policy_name}", "DELETE /policy.open-cluster-management.io/v1/namespaces/{namespace}/policies/{policy_name}", "POST /apis/observability.open-cluster-management.io/v1beta2/multiclusterobservabilities", "{ \"apiVersion\": \"observability.open-cluster-management.io/v1beta2\", \"kind\": \"MultiClusterObservability\", \"metadata\": { \"name\": \"example\" }, \"spec\": { \"observabilityAddonSpec\": {} \"storageConfig\": { \"metricObjectStorage\": { \"name\": \"thanos-object-storage\", \"key\": \"thanos.yaml\" \"writeStorage\": { - \"key\": \" \", \"name\" : \" \" - \"key\": \" \", \"name\" : \" \" } } } }", "GET /apis/observability.open-cluster-management.io/v1beta2/multiclusterobservabilities", "GET /apis/observability.open-cluster-management.io/v1beta2/multiclusterobservabilities/{multiclusterobservability_name}", "DELETE /apis/observability.open-cluster-management.io/v1beta2/multiclusterobservabilities/{multiclusterobservability_name}", "create route passthrough search-api --service=search-search-api -n open-cluster-management", "input SearchFilter { property: String! values: [String]! } input SearchInput { keywords: [String] filters: [SearchFilter] limit: Int relatedKinds: [String] } type SearchResult { count: Int items: [Map] related: [SearchRelatedResult] } type SearchRelatedResult { kind: String! 
count: Int items: [Map] }", "{ \"query\": \"type SearchResult {count: Intitems: [Map]related: [SearchRelatedResult]} type SearchRelatedResult {kind: String!count: Intitems: [Map]}\", \"variables\": { \"input\": [ { \"keywords\": [], \"filters\": [ { \"property\": \"kind\", \"values\": [ \"Deployment\" ] } ], \"limit\": 10 } ] } }", "type Query { search(input: [SearchInput]): [SearchResult] searchComplete(property: String!, query: SearchInput, limit: Int): [String] searchSchema: Map messages: [Message] }", "query mySearch(USDinput: [SearchInput]) { search(input: USDinput) { items } }", "{\"input\":[ { \"keywords\":[], \"filters\":[ {\"property\":\"kind\",\"values\":[\"Deployment\"]}], \"limit\":10 } ]}", "query mySearch(USDinput: [SearchInput]) { search(input: USDinput) { items } }", "{\"input\":[ { \"keywords\":[], \"filters\":[ {\"property\":\"kind\",\"values\":[\"Pod\"]}], \"limit\":10 } ]}", "POST /operator.open-cluster-management.io/v1beta1/namespaces/{namespace}/mch", "{ \"apiVersion\": \"apiextensions.k8s.io/v1\", \"kind\": \"CustomResourceDefinition\", \"metadata\": { \"name\": \"multiclusterhubs.operator.open-cluster-management.io\" }, \"spec\": { \"group\": \"operator.open-cluster-management.io\", \"names\": { \"kind\": \"MultiClusterHub\", \"listKind\": \"MultiClusterHubList\", \"plural\": \"multiclusterhubs\", \"shortNames\": [ \"mch\" ], \"singular\": \"multiclusterhub\" }, \"scope\": \"Namespaced\", \"versions\": [ { \"additionalPrinterColumns\": [ { \"description\": \"The overall status of the multicluster hub.\", \"jsonPath\": \".status.phase\", \"name\": \"Status\", \"type\": \"string\" }, { \"jsonPath\": \".metadata.creationTimestamp\", \"name\": \"Age\", \"type\": \"date\" } ], \"name\": \"v1\", \"schema\": { \"openAPIV3Schema\": { \"description\": \"MultiClusterHub defines the configuration for an instance of the multiCluster hub, a central point for managing multiple Kubernetes-based clusters. The deployment of multicluster hub components is determined based on the configuration that is defined in this resource.\", \"properties\": { \"apiVersion\": { \"description\": \"APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\", \"type\": \"string\" }, \"kind\": { \"description\": \"Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. The value is in CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\", \"type\": \"string\" }, \"metadata\": { \"type\": \"object\" }, \"spec\": { \"description\": \"MultiClusterHubSpec defines the desired state of MultiClusterHub.\", \"properties\": { \"availabilityConfig\": { \"description\": \"Specifies deployment replication for improved availability. 
Options are: Basic and High (default).\", \"type\": \"string\" }, \"customCAConfigmap\": { \"description\": \"Provide the customized OpenShift default ingress CA certificate to Red Hat Advanced Cluster Management.\", } \"type\": \"string\" }, \"disableHubSelfManagement\": { \"description\": \"Disable automatic import of the hub cluster as a managed cluster.\", \"type\": \"boolean\" }, \"disableUpdateClusterImageSets\": { \"description\": \"Disable automatic update of ClusterImageSets.\", \"type\": \"boolean\" }, \"hive\": { \"description\": \"(Deprecated) Overrides for the default HiveConfig specification.\", \"properties\": { \"additionalCertificateAuthorities\": { \"description\": \"(Deprecated) AdditionalCertificateAuthorities is a list of references to secrets in the 'hive' namespace that contain an additional Certificate Authority to use when communicating with target clusters. These certificate authorities are used in addition to any self-signed CA generated by each cluster on installation.\", \"items\": { \"description\": \"LocalObjectReference contains the information to let you locate the referenced object inside the same namespace.\", \"properties\": { \"name\": { \"description\": \"Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\", \"type\": \"string\" } }, \"type\": \"object\" }, \"type\": \"array\" }, \"backup\": { \"description\": \"(Deprecated) Backup specifies configuration for backup integration. If absent, backup integration is disabled.\", \"properties\": { \"minBackupPeriodSeconds\": { \"description\": \"(Deprecated) MinBackupPeriodSeconds specifies that a minimum of MinBackupPeriodSeconds occurs in between each backup. This is used to rate limit backups. This potentially batches together multiple changes into one backup. No backups are lost for changes that happen during the interval that is queued up, and results in a backup once the interval has been completed.\", \"type\": \"integer\" }, \"velero\": { \"description\": \"(Deprecated) Velero specifies configuration for the Velero backup integration.\", \"properties\": { \"enabled\": { \"description\": \"(Deprecated) Enabled dictates if the Velero backup integration is enabled. If not specified, the default is disabled.\", \"type\": \"boolean\" } }, \"type\": \"object\" } }, \"type\": \"object\" }, \"externalDNS\": { \"description\": \"(Deprecated) ExternalDNS specifies configuration for external-dns if it is to be deployed by Hive. If absent, external-dns is not deployed.\", \"properties\": { \"aws\": { \"description\": \"(Deprecated) AWS contains AWS-specific settings for external DNS.\", \"properties\": { \"credentials\": { \"description\": \"(Deprecated) Credentials reference a secret that is used to authenticate with AWS Route53. It needs permission to manage entries in each of the managed domains for this cluster. Secret should have AWS keys named 'aws_access_key_id' and 'aws_secret_access_key'.\", \"properties\": { \"name\": { \"description\": \"Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\", \"type\": \"string\" } }, \"type\": \"object\" } }, \"type\": \"object\" }, \"gcp\": { \"description\": \"(Deprecated) GCP contains Google Cloud Platform specific settings for external DNS.\", \"properties\": { \"credentials\": { \"description\": \"(Deprecated) Credentials reference a secret that is used to authenticate with GCP DNS. 
It needs permission to manage entries in each of the managed domains for this cluster. Secret should have a key names 'osServiceAccount.json'. The credentials must specify the project to use.\", \"properties\": { \"name\": { \"description\": \"Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\", \"type\": \"string\" } }, \"type\": \"object\" } }, \"type\": \"object\" } }, \"type\": \"object\" }, \"failedProvisionConfig\": { \"description\": \"(Deprecated) FailedProvisionConfig is used to configure settings related to handling provision failures.\", \"properties\": { \"skipGatherLogs\": { \"description\": \"(Deprecated) SkipGatherLogs disables functionality that attempts to gather full logs from the cluster if an installation fails for any reason. The logs are stored in a persistent volume for up to seven days.\", \"type\": \"boolean\" } }, \"type\": \"object\" }, \"globalPullSecret\": { \"description\": \"(Deprecated) GlobalPullSecret is used to specify a pull secret that is used globally by all of the cluster deployments. For each cluster deployment, the contents of GlobalPullSecret are merged with the specific pull secret for a cluster deployment(if specified), with precedence given to the contents of the pull secret for the cluster deployment.\", \"properties\": { \"name\": { \"description\": \"Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\", \"type\": \"string\" } }, \"type\": \"object\" }, \"maintenanceMode\": { \"description\": \"(Deprecated) MaintenanceMode can be set to true to disable the Hive controllers in situations where you need to ensure nothing is running that adds or act upon finalizers on Hive types. This should rarely be needed. Sets replicas to zero for the 'hive-controllers' deployment to accomplish this.\", \"type\": \"boolean\" } }, \"required\": [ \"failedProvisionConfig\" ], \"type\": \"object\" }, \"imagePullSecret\": { \"description\": \"Override pull secret for accessing MultiClusterHub operand and endpoint images.\", \"type\": \"string\" }, \"ingress\": { \"description\": \"Configuration options for ingress management.\", \"properties\": { \"sslCiphers\": { \"description\": \"List of SSL ciphers enabled for management ingress. 
Defaults to full list of supported ciphers.\", \"items\": { \"type\": \"string\" }, \"type\": \"array\" } }, \"type\": \"object\" }, \"nodeSelector\": { \"additionalProperties\": { \"type\": \"string\" }, \"description\": \"Set the node selectors..\", \"type\": \"object\" }, \"overrides\": { \"description\": \"Developer overrides.\", \"properties\": { \"imagePullPolicy\": { \"description\": \"Pull policy of the multicluster hub images.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"separateCertificateManagement\": { \"description\": \"(Deprecated) Install cert-manager into its own namespace.\", \"type\": \"boolean\" } }, \"type\": \"object\" }, \"status\": { \"description\": \"MulticlusterHubStatus defines the observed state of MultiClusterHub.\", \"properties\": { \"components\": { \"additionalProperties\": { \"description\": \"StatusCondition contains condition information.\", \"properties\": { \"lastTransitionTime\": { \"description\": \"LastTransitionTime is the last time the condition changed from one status to another.\", \"format\": \"date-time\", \"type\": \"string\" }, \"message\": { \"description\": \"Message is a human-readable message indicating\\ndetails about the last status change.\", \"type\": \"string\" }, \"reason\": { \"description\": \"Reason is a (brief) reason for the last status change of the condition.\", \"type\": \"string\" }, \"status\": { \"description\": \"Status is the status of the condition. One of True, False, Unknown.\", \"type\": \"string\" }, \"type\": { \"description\": \"Type is the type of the cluster condition.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"description\": \"Components []ComponentCondition `json:\\\"manifests,omitempty\\\"`\", \"type\": \"object\" }, \"conditions\": { \"description\": \"Conditions contain the different condition statuses for the MultiClusterHub.\", \"items\": { \"description\": \"StatusCondition contains condition information.\", \"properties\": { \"lastTransitionTime\": { \"description\": \"LastTransitionTime is the last time the condition changed from one status to another.\", \"format\": \"date-time\", \"type\": \"string\" }, \"lastUpdateTime\": { \"description\": \"The last time this condition was updated.\", \"format\": \"date-time\", \"type\": \"string\" }, \"message\": { \"description\": \"Message is a human-readable message indicating details about the last status change.\", \"type\": \"string\" }, \"reason\": { \"description\": \"Reason is a (brief) reason for the last status change of the condition.\", \"type\": \"string\" }, \"status\": { \"description\": \"Status is the status of the condition. 
One of True, False, Unknown.\", \"type\": \"string\" }, \"type\": { \"description\": \"Type is the type of the cluster condition.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"type\": \"array\" }, \"currentVersion\": { \"description\": \"CurrentVersion indicates the current version..\", \"type\": \"string\" }, \"desiredVersion\": { \"description\": \"DesiredVersion indicates the desired version.\", \"type\": \"string\" }, \"phase\": { \"description\": \"Represents the running phase of the MultiClusterHub\", \"type\": \"string\" } }, \"type\": \"object\" } }, \"type\": \"object\" } }, \"served\": true, \"storage\": true, \"subresources\": { \"status\": {} } } ] } }", "GET /operator.open-cluster-management.io/v1beta1/namespaces/{namespace}/operator", "GET /operator.open-cluster-management.io/v1beta1/namespaces/{namespace}/operator/{multiclusterhub_name}", "DELETE /operator.open-cluster-management.io/v1beta1/namespaces/{namespace}/operator/{multiclusterhub_name}", "GET /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placement", "POST /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placements", "{ \"apiVersion\" : \"cluster.open-cluster-management.io/v1beta1\", \"kind\" : \"Placement\", \"metadata\" : { \"name\" : \"placement1\", \"namespace\": \"ns1\" }, \"spec\": { \"predicates\": [ { \"requiredClusterSelector\": { \"labelSelector\": { \"matchLabels\": { \"vendor\": \"OpenShift\" } } } } ] }, \"status\" : { } }", "GET /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placements/{placement_name}", "DELETE /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placements/{placement_name}", "GET /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placementdecisions", "POST /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placementdecisions", "{ \"apiVersion\" : \"cluster.open-cluster-management.io/v1beta1\", \"kind\" : \"PlacementDecision\", \"metadata\" : { \"labels\" : { \"cluster.open-cluster-management.io/placement\" : \"placement1\" }, \"name\" : \"placement1-decision1\", \"namespace\": \"ns1\" }, \"status\" : { } }", "GET /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placementdecisions/{placementdecision_name}", "DELETE /cluster.open-cluster-management.io/v1beta1/namespaces/{namespace}/placementdecisions/{placementdecision_name}", "POST /app.k8s.io/v1/namespaces/{namespace}/discoveryconfigs", "{ \"apiVersion\": \"apiextensions.k8s.io/v1\", \"kind\": \"CustomResourceDefinition\", \"metadata\": { \"annotations\": { \"controller-gen.kubebuilder.io/version\": \"v0.4.1\", }, \"creationTimestamp\": null, \"name\": \"discoveryconfigs.discovery.open-cluster-management.io\", }, \"spec\": { \"group\": \"discovery.open-cluster-management.io\", \"names\": { \"kind\": \"DiscoveryConfig\", \"listKind\": \"DiscoveryConfigList\", \"plural\": \"discoveryconfigs\", \"singular\": \"discoveryconfig\" }, \"scope\": \"Namespaced\", \"versions\": [ { \"name\": \"v1\", \"schema\": { \"openAPIV3Schema\": { \"description\": \"DiscoveryConfig is the Schema for the discoveryconfigs API\", \"properties\": { \"apiVersion\": { \"description\": \"APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\", \"type\": \"string\" }, \"kind\": { \"description\": \"Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\", \"type\": \"string\" }, \"metadata\": { \"type\": \"object\" }, \"spec\": { \"description\": \"DiscoveryConfigSpec defines the desired state of DiscoveryConfig\", \"properties\": { \"credential\": { \"description\": \"Credential is the secret containing credentials to connect to the OCM api on behalf of a user\", \"type\": \"string\" }, \"filters\": { \"description\": \"Sets restrictions on what kind of clusters to discover\", \"properties\": { \"lastActive\": { \"description\": \"LastActive is the last active in days of clusters to discover, determined by activity timestamp\", \"type\": \"integer\" }, \"openShiftVersions\": { \"description\": \"OpenShiftVersions is the list of release versions of OpenShift of the form \\\"<Major>.<Minor>\\\"\", \"items\": { \"description\": \"Semver represents a partial semver string with the major and minor version in the form \\\"<Major>.<Minor>\\\". For example: \\\"4.15\\\"\", \"pattern\": \"^(?:0|[1-9]\\\\d*)\\\\.(?:0|[1-9]\\\\d*)USD\", \"type\": \"string\" }, \"type\": \"array\" } }, \"type\": \"object\" } }, \"required\": [ \"credential\" ], \"type\": \"object\" }, \"status\": { \"description\": \"DiscoveryConfigStatus defines the observed state of DiscoveryConfig\", \"type\": \"object\" } }, \"type\": \"object\" } }, \"served\": true, \"storage\": true, \"subresources\": { \"status\": {} } } ] }, \"status\": { \"acceptedNames\": { \"kind\": \"\", \"plural\": \"\" }, \"conditions\": [], \"storedVersions\": [] } }", "GET /operator.open-cluster-management.io/v1/namespaces/{namespace}/operator", "DELETE /operator.open-cluster-management.io/v1/namespaces/{namespace}/operator/{discoveryconfigs_name}", "POST /app.k8s.io/v1/namespaces/{namespace}/discoveredclusters", "{ \"apiVersion\": \"apiextensions.k8s.io/v1\", \"kind\": \"CustomResourceDefinition\", \"metadata\": { \"annotations\": { \"controller-gen.kubebuilder.io/version\": \"v0.4.1\", }, \"creationTimestamp\": null, \"name\": \"discoveredclusters.discovery.open-cluster-management.io\", }, \"spec\": { \"group\": \"discovery.open-cluster-management.io\", \"names\": { \"kind\": \"DiscoveredCluster\", \"listKind\": \"DiscoveredClusterList\", \"plural\": \"discoveredclusters\", \"singular\": \"discoveredcluster\" }, \"scope\": \"Namespaced\", \"versions\": [ { \"name\": \"v1\", \"schema\": { \"openAPIV3Schema\": { \"description\": \"DiscoveredCluster is the Schema for the discoveredclusters API\", \"properties\": { \"apiVersion\": { \"description\": \"APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\", \"type\": \"string\" }, \"kind\": { \"description\": \"Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\", \"type\": \"string\" }, \"metadata\": { \"type\": \"object\" }, \"spec\": { \"description\": \"DiscoveredClusterSpec defines the desired state of DiscoveredCluster\", \"properties\": { \"activityTimestamp\": { \"format\": \"date-time\", \"type\": \"string\" }, \"apiUrl\": { \"type\": \"string\" }, \"cloudProvider\": { \"type\": \"string\" }, \"console\": { \"type\": \"string\" }, \"creationTimestamp\": { \"format\": \"date-time\", \"type\": \"string\" }, \"credential\": { \"description\": \"ObjectReference contains enough information to let you inspect or modify the referred object. --- New uses of this type are discouraged because of difficulty describing its usage when embedded in APIs. 1. Ignored fields. It includes many fields which are not generally honored. For instance, ResourceVersion and FieldPath are both very rarely valid in actual usage. 2. Invalid usage help. It is impossible to add specific help for individual usage. In most embedded usages, there are particular restrictions like, \\\"must refer only to types A and B\\\" or \\\"UID not honored\\\" or \\\"name must be restricted\\\". Those cannot be well described when embedded. 3. Inconsistent validation. Because the usages are different, the validation rules are different by usage, which makes it hard for users to predict what will happen. 4. The fields are both imprecise and overly precise. Kind is not a precise mapping to a URL. This can produce ambiguity during interpretation and require a REST mapping. In most cases, the dependency is on the group,resource tuple and the version of the actual struct is irrelevant. 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control. Instead of using this type, create a locally provided and used type that is well-focused on your reference. For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .\", \"properties\": { \"apiVersion\": { \"description\": \"API version of the referent.\", \"type\": \"string\" }, \"fieldPath\": { \"description\": \"If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: \\\"spec.containers{name}\\\" (where \\\"name\\\" refers to the name of the container that triggered the event) or if no container name is specified \\\"spec.containers[2]\\\" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.\", \"type\": \"string\" }, \"kind\": { \"description\": \"Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\", \"type\": \"string\" }, \"name\": { \"description\": \"Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\", \"type\": \"string\" }, \"namespace\": { \"description\": \"Namespace of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/\", \"type\": \"string\" }, \"resourceVersion\": { \"description\": \"Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\", \"type\": \"string\" }, \"uid\": { \"description\": \"UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids\", \"type\": \"string\" } }, \"type\": \"object\" }, \"displayName\": { \"type\": \"string\" }, \"isManagedCluster\": { \"type\": \"boolean\" }, \"name\": { \"type\": \"string\" }, \"openshiftVersion\": { \"type\": \"string\" }, \"status\": { \"type\": \"string\" }, \"type\": { \"type\": \"string\" } }, \"required\": [ \"apiUrl\", \"displayName\", \"isManagedCluster\", \"name\", \"type\" ], \"type\": \"object\" }, \"status\": { \"description\": \"DiscoveredClusterStatus defines the observed state of DiscoveredCluster\", \"type\": \"object\" } }, \"type\": \"object\" } }, \"served\": true, \"storage\": true, \"subresources\": { \"status\": {} } } ] }, \"status\": { \"acceptedNames\": { \"kind\": \"\", \"plural\": \"\" }, \"conditions\": [], \"storedVersions\": [] } }", "GET /operator.open-cluster-management.io/v1/namespaces/{namespace}/operator", "DELETE /operator.open-cluster-management.io/v1/namespaces/{namespace}/operator/{discoveredclusters_name}", "GET /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/addondeploymentconfigs", "POST /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/addondeploymentconfigs", "{ \"apiVersion\": \"addon.open-cluster-management.io/v1alpha1\", \"kind\": \"AddOnDeploymentConfig\", \"metadata\": { \"name\": \"deploy-config\", \"namespace\": \"open-cluster-management-hub\" }, \"spec\": { \"nodePlacement\": { \"nodeSelector\": { \"node-dedicated\": \"acm-addon\" }, \"tolerations\": [ { \"effect\": \"NoSchedule\", \"key\": \"node-dedicated\", \"operator\": \"Equal\", \"value\": \"acm-addon\" } ] } } }", "GET /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/addondeploymentconfigs/{addondeploymentconfig_name}", "DELETE /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/addondeploymentconfigs/{addondeploymentconfig_name}", "GET /addon.open-cluster-management.io/v1alpha1/clustermanagementaddons", "POST /addon.open-cluster-management.io/v1alpha1/clustermanagementaddons", "{ \"apiVersion\": \"addon.open-cluster-management.io/v1alpha1\", \"kind\": \"ClusterManagementAddOn\", \"metadata\": { \"name\": \"helloworld\" }, \"spec\": { \"supportedConfigs\": [ { \"defaultConfig\": { \"name\": \"deploy-config\", \"namespace\": \"open-cluster-management-hub\" }, \"group\": \"addon.open-cluster-management.io\", \"resource\": \"addondeploymentconfigs\" } ] }, \"status\" : { } }", "GET /addon.open-cluster-management.io/v1alpha1/clustermanagementaddons/{clustermanagementaddon_name}", "DELETE /addon.open-cluster-management.io/v1alpha1/clustermanagementaddons/{clustermanagementaddon_name}", "GET /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/managedclusteraddons", "POST /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/managedclusteraddons", "{ \"apiVersion\": \"addon.open-cluster-management.io/v1alpha1\", \"kind\": \"ManagedClusterAddOn\", \"metadata\": { \"name\": \"helloworld\", \"namespace\": \"cluster1\" }, \"spec\": { \"configs\": [ { \"group\": 
\"addon.open-cluster-management.io\", \"name\": \"cluster-deploy-config\", \"namespace\": \"open-cluster-management-hub\", \"resource\": \"addondeploymentconfigs\" } ], \"installNamespace\": \"default\" }, \"status\" : { } }", "GET /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/managedclusteraddons/{managedclusteraddon_name}", "DELETE /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/managedclusteraddons/{managedclusteraddon_name}", "GET /cluster.open-cluster-management.io/v1beta2/namespaces/{namespace}/managedclustersets", "POST /cluster.open-cluster-management.io/v1beta2/namespaces/{namespace}/managedclustersets", "{ \"apiVersion\" : \"cluster.open-cluster-management.io/v1beta2\", \"kind\" : \"ManagedClusterSet\", \"metadata\" : { \"name\" : \"example-clusterset\", }, \"spec\": { }, \"status\" : { } }", "GET /cluster.open-cluster-management.io/v1beta2/namespaces/{namespace}/managedclustersets/{managedclusterset_name}", "DELETE /cluster.open-cluster-management.io/v1beta2/managedclustersets/{managedclusterset_name}", "GET /config.open-cluster-management.io/v1alpha1/namespaces/{namespace}/klusterletconfigs", "POST /config.open-cluster-management.io/v1alpha1/namespaces/{namespace}/klusterletconfigs", "{ \"apiVersion\": \"apiextensions.k8s.io/v1\", \"kind\": \"CustomResourceDefinition\", \"metadata\": { \"annotations\": { \"controller-gen.kubebuilder.io/version\": \"v0.7.0\" }, \"creationTimestamp\": null, \"name\": \"klusterletconfigs.config.open-cluster-management.io\" }, \"spec\": { \"group\": \"config.open-cluster-management.io\", \"names\": { \"kind\": \"KlusterletConfig\", \"listKind\": \"KlusterletConfigList\", \"plural\": \"klusterletconfigs\", \"singular\": \"klusterletconfig\" }, \"preserveUnknownFields\": false, \"scope\": \"Cluster\", \"versions\": [ { \"name\": \"v1alpha1\", \"schema\": { \"openAPIV3Schema\": { \"description\": \"KlusterletConfig contains the configuration of a klusterlet including the upgrade strategy, config overrides, proxy configurations etc.\", \"properties\": { \"apiVersion\": { \"description\": \"APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\", \"type\": \"string\" }, \"kind\": { \"description\": \"Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\", \"type\": \"string\" }, \"metadata\": { \"type\": \"object\" }, \"spec\": { \"description\": \"Spec defines the desired state of KlusterletConfig\", \"properties\": { \"hubKubeAPIServerProxyConfig\": { \"description\": \"HubKubeAPIServerProxyConfig holds proxy settings for connections between klusterlet/add-on agents on the managed cluster and the kube-apiserver on the hub cluster. Empty means no proxy settings is available.\", \"properties\": { \"caBundle\": { \"description\": \"CABundle is a CA certificate bundle to verify the proxy server. 
It will be ignored if only HTTPProxy is set; And it is required when HTTPSProxy is set and self signed CA certificate is used by the proxy server.\", \"format\": \"byte\", \"type\": \"string\" }, \"httpProxy\": { \"description\": \"HTTPProxy is the URL of the proxy for HTTP requests\", \"type\": \"string\" }, \"httpsProxy\": { \"description\": \"HTTPSProxy is the URL of the proxy for HTTPS requests HTTPSProxy will be chosen if both HTTPProxy and HTTPSProxy are set.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"nodePlacement\": { \"description\": \"NodePlacement enables explicit control over the scheduling of the agent components. If the placement is nil, the placement is not specified, it will be omitted. If the placement is an empty object, the placement will match all nodes and tolerate nothing.\", \"properties\": { \"nodeSelector\": { \"additionalProperties\": { \"type\": \"string\" }, \"description\": \"NodeSelector defines which Nodes the Pods are scheduled on. The default is an empty list.\", \"type\": \"object\" }, \"tolerations\": { \"description\": \"Tolerations is attached by pods to tolerate any taint that matches the triple <key,value,effect> using the matching operator <operator>. The default is an empty list.\", \"items\": { \"description\": \"The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>.\", \"properties\": { \"effect\": { \"description\": \"Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute.\", \"type\": \"string\" }, \"key\": { \"description\": \"Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.\", \"type\": \"string\" }, \"operator\": { \"description\": \"Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.\", \"type\": \"string\" }, \"tolerationSeconds\": { \"description\": \"TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.\", \"format\": \"int64\", \"type\": \"integer\" }, \"value\": { \"description\": \"Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.\", \"type\": \"string\" } }, \"type\": \"object\" }, \"type\": \"array\" } }, \"type\": \"object\" }, \"pullSecret\": { \"description\": \"PullSecret is the name of image pull secret.\", \"properties\": { \"apiVersion\": { \"description\": \"API version of the referent.\", \"type\": \"string\" }, \"fieldPath\": { \"description\": \"If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. 
For example, if the object reference is to a container within a pod, this would take on a value like: \\\"spec.containers{name}\\\" (where \\\"name\\\" refers to the name of the container that triggered the event) or if no container name is specified \\\"spec.containers[2]\\\" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.\", \"type\": \"string\" }, \"kind\": { \"description\": \"Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\", \"type\": \"string\" }, \"name\": { \"description\": \"Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names\", \"type\": \"string\" }, \"namespace\": { \"description\": \"Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/\", \"type\": \"string\" }, \"resourceVersion\": { \"description\": \"Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\", \"type\": \"string\" }, \"uid\": { \"description\": \"UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids\", \"type\": \"string\" } }, \"type\": \"object\" }, \"registries\": { \"description\": \"Registries includes the mirror and source registries. The source registry will be replaced by the Mirror.\", \"items\": { \"properties\": { \"mirror\": { \"description\": \"Mirror is the mirrored registry of the Source. Will be ignored if Mirror is empty.\", \"type\": \"string\" }, \"source\": { \"description\": \"Source is the source registry. 
All image registries will be replaced by Mirror if Source is empty.\", \"type\": \"string\" } }, \"required\": [ \"mirror\" ], \"type\": \"object\" }, \"type\": \"array\" } }, \"type\": \"object\" }, \"status\": { \"description\": \"Status defines the observed state of KlusterletConfig\", \"type\": \"object\" } }, \"type\": \"object\" } }, \"served\": true, \"storage\": true, \"subresources\": { \"status\": {} } } ] }, \"status\": { \"acceptedNames\": { \"kind\": \"\", \"plural\": \"\" }, \"conditions\": [], \"storedVersions\": [] } }", "GET /config.open-cluster-management.io/v1alpha1/namespaces/{namespace}/klusterletconfigs/{klusterletconfig_name}", "DELETE /addon.open-cluster-management.io/v1alpha1/namespaces/{namespace}/klusterletconfigs/{klusterletconfig_name}", "{ \"data\": [ { \"id\": 2, \"cluster\": { \"name\": \"cluster1\", \"cluster_id\": \"215ce184-8dee-4cab-b99b-1f8f29dff611\" }, \"parent_policy\": { \"id\": 3, \"name\": \"configure-custom-app\", \"namespace\": \"policies\", \"catageories\": [\"CM Configuration Management\"], \"controls\": [\"CM-2 Baseline Configuration\"], \"standards\": [\"NIST SP 800-53\"] }, \"policy\": { \"apiGroup\": \"policy.open-cluster-management.io\", \"id\": 2, \"kind\": \"ConfigurationPolicy\", \"name\": \"configure-custom-app\", \"namespace\": \"\", // Only shown with `?include_spec` \"spec\": {} }, \"event\": { \"compliance\": \"NonCompliant\", \"message\": \"configmaps [app-data] not found in namespace default\", \"timestamp\": \"2023-07-19T18:25:43.511Z\", \"metadata\": {} } }, { \"id\": 1, \"cluster\": { \"name\": \"cluster2\", \"cluster_id\": \"415ce234-8dee-4cab-b99b-1f8f29dff461\" }, \"parent_policy\": { \"id\": 3, \"name\": \"configure-custom-app\", \"namespace\": \"policies\", \"catageories\": [\"CM Configuration Management\"], \"controls\": [\"CM-2 Baseline Configuration\"], \"standards\": [\"NIST SP 800-53\"] }, \"policy\": { \"apiGroup\": \"policy.open-cluster-management.io\", \"id\": 4, \"kind\": \"ConfigurationPolicy\", \"name\": \"configure-custom-app\", \"namespace\": \"\", // Only shown with `?include_spec` \"spec\": {} }, \"event\": { \"compliance\": \"Compliant\", \"message\": \"configmaps [app-data] found as specified in namespace default\", \"timestamp\": \"2023-07-19T18:25:41.523Z\", \"metadata\": {} } } ], \"metadata\": { \"page\": 1, \"pages\": 7, \"per_page\": 20, \"total\": 123 } }", "{ \"id\": 1, \"cluster\": { \"name\": \"cluster2\", \"cluster_id\": \"415ce234-8dee-4cab-b99b-1f8f29dff461\" }, \"parent_policy\": { \"id\": 2, \"name\": \"etcd-encryption\", \"namespace\": \"policies\", \"catageories\": [\"CM Configuration Management\"], \"controls\": [\"CM-2 Baseline Configuration\"], \"standards\": [\"NIST SP 800-53\"] }, \"policy\": { \"apiGroup\": \"policy.open-cluster-management.io\", \"id\": 4, \"kind\": \"ConfigurationPolicy\", \"name\": \"etcd-encryption\", \"namespace\": \"\", \"spec\": {} }, \"event\": { \"compliance\": \"Compliant\", \"message\": \"configmaps [app-data] found as specified in namespace default\", \"timestamp\": \"2023-07-19T18:25:41.523Z\", \"metadata\": {} } }", "whoami --show-token", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: local-cluster-view rules: - apiGroups: - cluster.open-cluster-management.io resources: - managedclusters resourceNames: - local-cluster verbs: - get", "auth can-i get managedclusters.cluster.open-cluster-management.io/local-cluster" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/apis/apis
Chapter 2. Understand MicroProfile
Chapter 2. Understand MicroProfile 2.1. MicroProfile Config 2.1.1. MicroProfile Config in JBoss EAP Configuration data can change dynamically and applications need to be able to access the latest configuration information without restarting the server. MicroProfile Config provides portable externalization of configuration data. This means, you can configure applications and microservices to run in multiple environments without modification or repackaging. MicroProfile Config functionality is implemented in JBoss EAP using the SmallRye Config component and is provided by the microprofile-config-smallrye subsystem. Note MicroProfile Config is only supported in JBoss EAP XP. It is not supported in JBoss EAP. Important If you are adding your own Config implementations, you need to use the methods in the latest version of the Config interface. Additional Resources MicroProfile Config SmallRye Config Config implementations 2.1.2. MicroProfile Config sources supported in MicroProfile Config MicroProfile Config configuration properties can come from different locations and can be in different formats. These properties are provided by ConfigSources. ConfigSources are implementations of the org.eclipse.microprofile.config.spi.ConfigSource interface. The MicroProfile Config specification provides the following default ConfigSource implementations for retrieving configuration values: System.getProperties() . System.getenv() . All META-INF/microprofile-config.properties files on the class path. The microprofile-config-smallrye subsystem supports additional types of ConfigSource resources for retrieving configuration values. You can also retrieve the configuration values from the following resources: Properties in a microprofile-config-smallrye/config-source management resource Files in a directory ConfigSource class ConfigSourceProvider class Additional Resources org.eclipse.microprofile.config.spi.ConfigSource 2.2. MicroProfile Fault Tolerance 2.2.1. About MicroProfile Fault Tolerance specification The MicroProfile Fault Tolerance specification defines strategies to deal with errors inherent in distributed microservices. The MicroProfile Fault Tolerance specification defines the following strategies to handle errors: Timeout Define the amount of time within which an execution must finish. Defining a timeout prevents waiting for an execution indefinitely. Retry Define the criteria for retrying a failed execution. Fallback Provide an alternative in the case of a failed execution. CircuitBreaker Define the number of failed execution attempts before temporarily stopping. You can define the length of the delay before resuming execution. Bulkhead Isolate failures in part of the system so that the rest of the system can still function. Asynchronous Execute client request in a separate thread. Additional Resources MicroProfile Fault Tolerance specification 2.2.2. MicroProfile Fault Tolerance in JBoss EAP The microprofile-fault-tolerance-smallrye subsystem provides support for MicroProfile Fault Tolerance in JBoss EAP. The subsystem is available only in the JBoss EAP XP stream. The microprofile-fault-tolerance-smallrye subsystem provides the following annotations for interceptor bindings: @Timeout @Retry @Fallback @CircuitBreaker @Bulkhead @Asynchronous You can bind these annotations at the class level or at the method level. An annotation bound to a class applies to all of the business methods of that class. 
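For illustration only (this sketch is not part of the specification or product documentation), the following shows how these interceptor bindings are typically declared on a method of a CDI bean; the service name, the timing values, and the fallback method are assumptions chosen for the example.

```java
import java.time.temporal.ChronoUnit;

import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

// Assumed to be a CDI bean (given an appropriate scope annotation when deployed).
public class WeatherService {

    // Method-level bindings: the same annotations placed on the class
    // would apply to every business method of the class.
    @Timeout(value = 500, unit = ChronoUnit.MILLIS) // give up if the call takes longer than 500 ms
    @Retry(maxRetries = 3)                          // retry a failed invocation up to three times
    @Fallback(fallbackMethod = "defaultForecast")   // used once the retries are exhausted
    public String forecast(String city) {
        return callRemoteForecastService(city);
    }

    // The fallback method must match the parameters and return type of the guarded method.
    String defaultForecast(String city) {
        return "Forecast temporarily unavailable for " + city;
    }

    private String callRemoteForecastService(String city) {
        // Placeholder for a remote invocation that can fail or time out.
        throw new IllegalStateException("remote service unreachable");
    }
}
```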
The following rules apply to binding interceptors: If a component class declares or inherits a class-level interceptor binding, the following restrictions apply: The class must not be declared final. The class must not contain any static, private, or final methods. If a non-static, non-private method of a component class declares a method level interceptor binding, neither the method nor the component class may be declared final. Fault tolerance operations have the following restrictions: Fault tolerance interceptor bindings must be applied to a bean class or bean class method. When invoked, the invocation must be the business method invocation as defined in the Jakarta Contexts and Dependency Injection specification. An operation is not considered fault tolerant if both of the following conditions are true: The method itself is not bound to any fault tolerance interceptor. The class containing the method is not bound to any fault tolerance interceptor. The microprofile-fault-tolerance-smallrye subsystem provides the following configuration options, in addition to the configuration options provided by MicroProfile Fault Tolerance: io.smallrye.faulttolerance.mainThreadPoolSize io.smallrye.faulttolerance.mainThreadPoolQueueSize Additional Resources MicroProfile Fault Tolerance Specification SmallRye Fault Tolerance project 2.3. MicroProfile Health 2.3.1. MicroProfile Health in JBoss EAP JBoss EAP includes the SmallRye Health component, which you can use to determine whether the JBoss EAP instance is responding as expected. This capability is enabled by default. MicroProfile Health is only available when running JBoss EAP as a standalone server. The MicroProfile Health specification defines the following health checks: Readiness Determines whether an application is ready to process requests. The annotation @Readiness provides this health check. Liveness Determines whether an application is running. The annotation @Liveness provides this health check. Startup Determines whether an application has already started. The annotation @Startup provides this health check. The @Health annotation was removed in MicroProfile Health 3.0. MicroProfile Health 3.1 includes a new Startup health check probe. For more information about the changes in MicroProfile Health 3.1, see Release Notes for MicroProfile Health 3.1 . Important The :empty-readiness-checks-status , :empty-liveness-checks-status , and :empty-startup-checks-status management attributes specify the global status when no readiness , liveness , or startup probes are defined. Additional Resources Global status when probes are not defined SmallRye Health MicroProfile Health Custom health check example 2.4. MicroProfile JWT 2.4.1. MicroProfile JWT integration in JBoss EAP The subsystem microprofile-jwt-smallrye provides MicroProfile JWT integration in JBoss EAP. The following functionalities are provided by the microprofile-jwt-smallrye subsystem: Detecting deployments that use MicroProfile JWT security. Activating support for MicroProfile JWT. The subsystem contains no configurable attributes or resources. In addition to the microprofile-jwt-smallrye subsystem, the org.eclipse.microprofile.jwt.auth.api module provides MicroProfile JWT integration in JBoss EAP. Additional Resources SmallRye JWT 2.4.2. Differences between a traditional deployment and an MicroProfile JWT deployment MicroProfile JWT deployments do not depend on managed SecurityDomain resources like traditional JBoss EAP deployments. 
Instead, a virtual SecurityDomain is created and used across the MicroProfile JWT deployment. As the MicroProfile JWT deployment is configured entirely within the MicroProfile Config properties and the microprofile-jwt-smallrye subsystem, the virtual SecurityDomain does not need any other managed configuration for the deployment. 2.4.3. MicroProfile JWT activation in JBoss EAP MicroProfile JWT is activated for applications based on the presence of an auth-method in the application. The MicroProfile JWT integration is activated for an application in the following way: As part of the deployment process, JBoss EAP scans the application archive for the presence of an auth-method . If an auth-method is present and defined as MP-JWT , the MicroProfile JWT integration is activated. The auth-method can be specified in either or both of the following files: the file containing the class that extends javax.ws.rs.core.Application , annotated with the @LoginConfig the web.xml configuration file If auth-method is defined both in a class, using annotation, and in the web.xml configuration file, the definition in web.xml configuration file is used. 2.4.4. Limitations of MicroProfile JWT in JBoss EAP The MicroProfile JWT implementation in JBoss EAP has certain limitations. The following limitations of MicroProfile JWT implementation exist in JBoss EAP: The MicroProfile JWT implementation parses only the first key from the JSON Web Key Set (JWKS) supplied in the mp.jwt.verify.publickey property. Therefore, if a token claims to be signed by the second key or any key after the second key, the token fails verification and the request containing the token is not authorized. Base64 encoding of JWKS is not supported. In both cases, a clear text JWKS can be referenced instead of using the mp.jwt.verify.publickey.location config property. 2.5. MicroProfile OpenAPI 2.5.1. MicroProfile OpenAPI in JBoss EAP MicroProfile OpenAPI is integrated in JBoss EAP using the microprofile-openapi-smallrye subsystem. The MicroProfile OpenAPI specification defines an HTTP endpoint that serves an OpenAPI 3.0 document. The OpenAPI 3.0 document describes the REST services for the host. The OpenAPI endpoint is registered using the configured path, for example http://localhost:8080/openapi , local to the root of the host associated with a deployment. Note Currently, the OpenAPI endpoint for a virtual host can only document a single deployment. To use OpenAPI with multiple deployments registered with different context paths on the same virtual host, each deployment must use a distinct endpoint path. The OpenAPI endpoint returns a YAML document by default. You can also request a JSON document using an Accept HTTP header, or a format query parameter. If the Undertow server or host of a given application defines an HTTPS listener then the OpenAPI document is also available using HTTPS. For example, an endpoint for HTTPS is https://localhost:8443/openapi . 2.6. MicroProfile Telemetry 2.6.1. MicroProfile Telemetry in JBoss EAP MicroProfile Telemetry provides tracing functionality for applications based on OpenTelemetry. The ability to trace requests across service boundaries is important, especially in a microservices environment where a request can flow through multiple services during its life cycle. MicroProfile Telemetry expands on the OpenTelemetry subsystem and adds support for MicroProfile Config. This allows users to configure OpenTelemetry using MicroProfile Config. 
Note There are no configurable resources or attributes in the MicroProfile Telemetry subsystem. Additional resources Observability in JBoss EAP MicroProfile Telemetry subsystem configuration in WildFly Admin guide OpenTelemetry documentation @WithSpan annotations OpenTelemetry documentation Baggage API OpenTelemetry documentation 2.7. MicroProfile REST Client 2.7.1. MicroProfile REST client JBoss EAP XP 5.0.0 supports the MicroProfile REST client 2.0 that builds on Jakarta RESTful Web Services 2.1.6 client APIs to provide a type-safe approach to invoke RESTful services over HTTP. The MicroProfile Type Safe REST clients are defined as Java interfaces. With the MicroProfile REST clients, you can write client applications with executable code. Use the MicroProfile REST client to avail the following capabilities: An intuitive syntax Programmatic registration of providers Declarative registration of providers Declarative specification of headers Propagation of headers on the server ResponseExceptionMapper Jakarta Contexts and Dependency Injection integration Access to server-sent events (SSE) 2.7.2. The resteasy.original.webapplicationexception.behavior MicroProfile Config property MicroProfile Config is the name of a specification that developers can use to configure applications and microservices to run in multiple environments without having to modify or repackage those apps. Previously, MicroProfile Config was available for JBoss EAP as a technology preview, but it has since been removed. MicroProfile Config is now available only on JBoss EAP XP. Defining the resteasy.original.webapplicationexception.behavior MicroProfile Config property You can set the resteasy.original.webapplicationexception.behavior parameter as either a web.xml servlet property or a system property. Here's an example of one such servlet property in web.xml : <context-param> <param-name>resteasy.original.webapplicationexception.behavior</param-name> <param-value>true</param-value> </context-param> You can also use MicroProfile Config to configure any other RESTEasy property. Additional resources For more information about MicroProfile Config on JBoss EAP XP, see Understand MicroProfile . For more information about the MicroProfile REST Client, see MicroProfile REST Client . For more information about RESTEasy, see Jakarta RESTful Web Services Request Processing . 2.8. MicroProfile Reactive Messaging 2.8.1. MicroProfile Reactive Messaging When you upgrade to JBoss EAP XP 5.0.0, you can enable the newest version of MicroProfile Reactive Messaging, which includes reactive messaging extensions and subsystems. A "reactive stream" is a succession of event data, along with processing protocols and standards, that is pushed across an asynchronous boundary (like a scheduler) without any buffering. An "event" might be a scheduled and repeating temperature check in a weather app, for example. The primary benefit of reactive streams is the seamless interoperability of your various applications and implementations. Reactive messaging provides a framework for building event-driven, data-streaming, and event-sourcing applications. Reactive messaging results in the constant and smooth exchange of event data, the reactive stream, from one app to another. You can use MicroProfile Reactive Messaging for asynchronous messaging through reactive streams so that your application can interact with others, like Apache Kafka, for example. 
After you upgrade your instance of MicroProfile Reactive Messaging to the latest version, you can do the following: Provision a server with MicroProfile Reactive Messaging for the Apache Kafka data-streaming platform. Interact with reactive messaging in-memory and backed by Apache Kafka topics through the latest reactive messaging APIs. Use any metric system available to determine the number of messages streamed on a given channel. Additional resources For more information about Apache Kafka, see What is Apache Kafka? 2.8.2. MicroProfile Reactive Messaging connectors You can use connectors to integrate MicroProfile Reactive Messaging with a number of external messaging systems. MicroProfile for JBoss EAP comes with an Apache Kafka connector, and an Advanced Message Queuing Protocol (AMQP) connector. Use the Eclipse MicroProfile Config specification to configure your connectors. MicroProfile Reactive Messaging connectors and incorporated layers MicroProfile Reactive Messaging includes the following connectors: Kafka connector The microprofile-reactive-messaging-kafka layer incorporates the Kafka connector. AMQP connector The microprofile-reactive-messaging-amqp layer incorporates the AMQP connector. Both the connector layers include the microprofile-reactive-messaging Galleon layer. The microprofile-reactive-messaging layer provides the core MicroProfile Reactive Messaging functionality. Table 2.1. Reactive messaging and connector Galleon layers Layer Definition microprofile-reactive-streams-operators Provides MicroProfile Reactive Streams Operators APIs and supporting implementing modules. Contains MicroProfile Reactive Streams Operators with SmallRye extension and subsystem. Depends on cdi layer. cdi stands for Jakarta Contexts and Dependency Injection; provides subsystems that add @Inject functionality. microprofile-reactive-messaging Provides MicroProfile Reactive Messaging APIs and supporting implementing modules. Contains MicroProfile with SmallRye extension and subsystem. Depends on microprofile-config and microprofile-reactive-streams-operators layers. microprofile-reactive-messaging-kafka Provides Kafka connector modules that enable MicroProfile Reactive Messaging to interact with Kafka. Depends on microprofile-reactive-messaging layer. microprofile-reactive-messaging-amqp Provides AMQP connector modules that enable MicroProfile Reactive Messaging to interact with AMQP clients. Depends on microprofile-reactive-messaging layer. 2.8.3. The Apache Kafka event streaming platform Apache Kafka is an open source distributed event (data) streaming platform that can publish, subscribe to, store, and process streams of records in real time. It handles event streams from multiple sources and delivers them to multiple consumers, moving large amounts of data from points A to Z and everywhere else, all at the same time. MicroProfile Reactive Messaging uses Apache Kafka to deliver these event records in as few as two microseconds, store them safely in distributed, fault-tolerant clusters, all while making them available across any team-defined zones or geographic regions. Additional resources What is Apache Kafka? Red Hat AMQ
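To make the programming model above concrete, here is a minimal sketch (not taken from the product documentation) of a bean that produces and consumes messages on a channel named ticks. The channel name and the one-second interval are assumptions, the class is assumed to be a CDI bean, and the Mutiny Multi type is assumed to be available from the SmallRye implementation. With no connector configured, the channel stays in-memory; mapping it to the Kafka connector would instead be done with MicroProfile Config properties such as mp.messaging.outgoing.ticks.connector=smallrye-kafka.

```java
import java.time.Duration;

import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

import io.smallrye.mutiny.Multi;

// Assumed to be a CDI bean. With no connector configured, the "ticks"
// channel is an in-memory channel connecting the two methods below.
public class TickStream {

    // Publishes an increasing counter on the "ticks" channel once per second.
    @Outgoing("ticks")
    public Multi<Long> produce() {
        return Multi.createFrom().ticks().every(Duration.ofSeconds(1));
    }

    // Consumes each value from the same channel.
    @Incoming("ticks")
    public void consume(Long tick) {
        System.out.println("received tick " + tick);
    }
}
```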
[ "<context-param> <param-name>resteasy.original.webapplicationexception.behavior</param-name> <param-value>true</param-value> </context-param>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_xp_5.0/understand_microprofile
About
About Red Hat Advanced Cluster Management for Kubernetes 2.12 About 2.12
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/about/index
5.2.22. /proc/mounts
5.2.22. /proc/mounts This file provides a list of all mounts in use by the system: The output found here is similar to the contents of /etc/mtab , except that /proc/mounts is more up-to-date. The first column specifies the device that is mounted, the second column reveals the mount point, the third column tells the file system type, and the fourth column tells you whether it is mounted read-only ( ro ) or read-write ( rw ). The fifth and sixth columns are dummy values designed to match the format used in /etc/mtab .
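As a small illustration of the column layout just described (not part of the original reference), the following sketch reads /proc/mounts and prints the first four fields of every entry; it uses only the Java standard library.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Prints device, mount point, file system type, and mount options
// for every entry in /proc/mounts, using only the standard library.
public class ProcMounts {
    public static void main(String[] args) throws IOException {
        for (String line : Files.readAllLines(Path.of("/proc/mounts"))) {
            String[] fields = line.trim().split("\\s+");
            if (fields.length < 4) {
                continue; // skip anything that does not match the expected layout
            }
            System.out.printf("device=%s mountpoint=%s type=%s options=%s%n",
                    fields[0], fields[1], fields[2], fields[3]);
        }
    }
}
```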
[ "rootfs / rootfs rw 0 0 /proc /proc proc rw,nodiratime 0 0 none /dev ramfs rw 0 0 /dev/mapper/VolGroup00-LogVol00 / ext3 rw 0 0 none /dev ramfs rw 0 0 /proc /proc proc rw,nodiratime 0 0 /sys /sys sysfs rw 0 0 none /dev/pts devpts rw 0 0 usbdevfs /proc/bus/usb usbdevfs rw 0 0 /dev/hda1 /boot ext3 rw 0 0 none /dev/shm tmpfs rw 0 0 none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0 sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-mounts
Preface
Preface Preface
null
https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/24.4/html/release_notes/pr01
Chapter 1. Web Console Overview
Chapter 1. Web Console Overview The Red Hat OpenShift Dedicated web console provides a graphical user interface to visualize your project data and perform administrative, management, and troubleshooting tasks. The web console runs as pods on the control plane nodes in the openshift-console project. It is managed by a console-operator pod. Both Administrator and Developer perspectives are supported. Both Administrator and Developer perspectives enable you to create quick start tutorials for OpenShift Dedicated. A quick start is a guided tutorial with user tasks and is useful for getting oriented with an application, Operator, or other product offering. 1.1. About the Administrator perspective in the web console The Administrator perspective enables you to view the cluster inventory, capacity, general and specific utilization information, and the stream of important events, all of which help you to simplify planning and troubleshooting tasks. Both project administrators and cluster administrators can view the Administrator perspective. Cluster administrators can also open an embedded command line terminal instance with the web terminal Operator in OpenShift Dedicated 4.7 and later. Note The default web console perspective that is shown depends on the role of the user. The Administrator perspective is displayed by default if the user is recognized as an administrator. The Administrator perspective provides workflows specific to administrator use cases, such as the ability to: Manage workload, storage, networking, and cluster settings. Install and manage Operators using the Operator Hub. Add identity providers that allow users to log in and manage user access through roles and role bindings. View and manage a variety of advanced settings such as cluster updates, partial cluster updates, cluster Operators, custom resource definitions (CRDs), role bindings, and resource quotas. Access and manage monitoring features such as metrics, alerts, and monitoring dashboards. View and manage logging, metrics, and high-status information about the cluster. Visually interact with applications, components, and services associated with the Administrator perspective in OpenShift Dedicated. 1.2. About the Developer perspective in the web console The Developer perspective offers several built-in ways to deploy applications, services, and databases. In the Developer perspective, you can: View real-time visualization of rolling and recreating rollouts on the component. View the application status, resource utilization, project event streaming, and quota consumption. Share your project with others. Troubleshoot problems with your applications by running Prometheus Query Language (PromQL) queries on your project and examining the metrics visualized on a plot. The metrics provide information about the state of a cluster and any user-defined workloads that you are monitoring. Cluster administrators can also open an embedded command line terminal instance in the web console in OpenShift Dedicated 4.7 and later. Note The default web console perspective that is shown depends on the role of the user. The Developer perspective is displayed by default if the user is recognised as a developer. The Developer perspective provides workflows specific to developer use cases, such as the ability to: Create and deploy applications on OpenShift Dedicated by importing existing codebases, images, and container files. 
Visually interact with applications, components, and services associated with them within a project and monitor their deployment and build status. Group components within an application and connect the components within and across applications. Integrate serverless capabilities (Technology Preview). Create workspaces to edit your application code using Eclipse Che. You can use the Topology view to display applications, components, and workloads of your project. If you have no workloads in the project, the Topology view will show some links to create or import them. You can also use the Quick Search to import components directly. Additional resources See Viewing application composition using the Topology view for more information on using the Topology view in the Developer perspective. 1.3. Accessing the Perspectives You can access the Administrator and Developer perspectives from the web console as follows: Prerequisites To access a perspective, ensure that you have logged in to the web console. Your default perspective is automatically determined by the permissions of the user. The Administrator perspective is selected for users with access to all projects, while the Developer perspective is selected for users with limited access to their own projects. Additional resources See Adding User Preferences for more information on changing perspectives. Procedure Use the perspective switcher to switch to the Administrator or Developer perspective. Select an existing project from the Project drop-down list. You can also create a new project from this dropdown. Note You can use the perspective switcher only as cluster-admin . Additional resources Viewing cluster information Using the web terminal Creating quick start tutorials
null
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/web_console/web-console-overview
Securing Applications and Services Guide
Securing Applications and Services Guide Red Hat build of Keycloak 24.0 Red Hat Customer Content Services
[ "/realms/{realm-name}/.well-known/openid-configuration", "/realms/{realm-name}/protocol/openid-connect/auth", "/realms/{realm-name}/protocol/openid-connect/token", "/realms/{realm-name}/protocol/openid-connect/userinfo", "/realms/{realm-name}/protocol/openid-connect/logout", "/realms/{realm-name}/protocol/openid-connect/certs", "/realms/{realm-name}/protocol/openid-connect/token/introspect", "/realms/{realm-name}/clients-registrations/openid-connect", "/realms/{realm-name}/protocol/openid-connect/revoke", "/realms/{realm-name}/protocol/openid-connect/auth/device", "/realms/{realm-name}/protocol/openid-connect/ext/ciba/auth", "curl -d \"client_id=myclient\" -d \"client_secret=40cc097b-2a57-4c17-b36a-8fdf3fc2d578\" -d \"username=user\" -d \"password=password\" -d \"grant_type=password\" \"http://localhost:8080/realms/master/protocol/openid-connect/token\"", "npm install keycloak-js", "import Keycloak from 'keycloak-js'; const keycloak = new Keycloak({ url: 'http://keycloak-serverUSD{kc_base_path}', realm: 'myrealm', clientId: 'myapp' }); try { const authenticated = await keycloak.init(); console.log(`User is USD{authenticated ? 'authenticated' : 'not authenticated'}`); } catch (error) { console.error('Failed to initialize adapter:', error); }", "keycloak.init({ onLoad: 'check-sso', silentCheckSsoRedirectUri: `USD{location.origin}/silent-check-sso.html` });", "<!doctype html> <html> <body> <script> parent.postMessage(location.href, location.origin); </script> </body> </html>", "keycloak.init({ onLoad: 'login-required' });", "async function fetchUsers() { const response = await fetch('/api/users', { headers: { accept: 'application/json', authorization: `Bearer USD{keycloak.token}` } }); return response.json(); }", "try { await keycloak.updateToken(30); } catch (error) { console.error('Failed to refresh token:', error); } const users = await fetchUsers();", "keycloak.init({ flow: 'implicit' })", "keycloak.init({ flow: 'hybrid' });", "keycloak.init({ adapter: 'cordova-native' });", "<preference name=\"AndroidLaunchMode\" value=\"singleTask\" />", "import Keycloak from 'keycloak-js'; import KeycloakCapacitorAdapter from 'keycloak-capacitor-adapter'; const keycloak = new Keycloak(); keycloak.init({ adapter: KeycloakCapacitorAdapter, });", "import Keycloak, { KeycloakAdapter } from 'keycloak-js'; // Implement the 'KeycloakAdapter' interface so that all required methods are guaranteed to be present. const MyCustomAdapter: KeycloakAdapter = { login(options) { // Write your own implementation here. } // The other methods go here }; const keycloak = new Keycloak(); keycloak.init({ adapter: MyCustomAdapter, });", "new Keycloak(); new Keycloak('http://localhost/keycloak.json'); new Keycloak({ url: 'http://localhost', realm: 'myrealm', clientId: 'myApp' });", "try { const profile = await keycloak.loadUserProfile(); console.log('Retrieved user profile:', profile); } catch (error) { console.error('Failed to load user profile:', error); }", "try { const refreshed = await keycloak.updateToken(5); console.log(refreshed ? 
'Token was refreshed' : 'Token is still valid'); } catch (error) { console.error('Failed to refresh the token:', error); }", "keycloak.onAuthSuccess = () => console.log('Authenticated!');", "mkdir myapp && cd myapp", "\"dependencies\": { \"keycloak-connect\": \"file:keycloak-connect-24.0.10.tgz\" }", "const session = require('express-session'); const Keycloak = require('keycloak-connect'); const memoryStore = new session.MemoryStore(); const keycloak = new Keycloak({ store: memoryStore });", "npm install express-session", "\"scripts\": { \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\", \"start\": \"node server.js\" },", "npm run start", "const kcConfig = { clientId: 'myclient', bearerOnly: true, serverUrl: 'http://localhost:8080', realm: 'myrealm', realmPublicKey: 'MIIBIjANB...' }; const keycloak = new Keycloak({ store: memoryStore }, kcConfig);", "const keycloak = new Keycloak({ store: memoryStore, idpHint: myIdP }, kcConfig);", "const session = require('express-session'); const memoryStore = new session.MemoryStore(); // Configure session app.use( session({ secret: 'mySecret', resave: false, saveUninitialized: true, store: memoryStore, }) ); const keycloak = new Keycloak({ store: memoryStore });", "const keycloak = new Keycloak({ scope: 'offline_access' });", "npm install express", "const express = require('express'); const app = express();", "app.use( keycloak.middleware() );", "app.listen(3000, function () { console.log('App listening on port 3000'); });", "const app = express(); app.set( 'trust proxy', true ); app.use( keycloak.middleware() );", "app.get( '/complain', keycloak.protect(), complaintHandler );", "app.get( '/special', keycloak.protect('special'), specialHandler );", "app.get( '/extra-special', keycloak.protect('other-app:special'), extraSpecialHandler );", "app.get( '/admin', keycloak.protect( 'realm:admin' ), adminHandler );", "app.get('/apis/me', keycloak.enforcer('user:profile'), userProfileHandler);", "app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), userProfileHandler);", "app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), function (req, res) { const token = req.kauth.grant.access_token.content; const permissions = token.authorization ? 
token.authorization.permissions : undefined; // show user profile });", "app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'permissions'}), function (req, res) { const permissions = req.permissions; // show user profile });", "keycloak.enforcer('user:profile', {resource_server_id: 'my-apiserver'})", "app.get('/protected/resource', keycloak.enforcer(['resource:view', 'resource:write'], { claims: function(request) { return { \"http.uri\": [\"/protected/resource\"], \"user.agent\": // get user agent from request } } }), function (req, res) { // access granted", "function protectBySection(token, request) { return token.hasRole( request.params.section ); } app.get( '/:section/:page', keycloak.protect( protectBySection ), sectionHandler );", "Keycloak.prototype.redirectToLogin = function(req) { const apiReqMatcher = /\\/api\\//i; return !apiReqMatcher.test(req.originalUrl || req.url); };", "app.use( keycloak.middleware( { logout: '/logoff' } ));", "https://example.com/logoff?redirect_url=https%3A%2F%2Fexample.com%3A3000%2Flogged%2Fout", "app.use( keycloak.middleware( { admin: '/callbacks' } );", "auth: token: realm: http://localhost:8080/realms/master/protocol/docker-v2/auth service: docker-test issuer: http://localhost:8080/realms/master", "REGISTRY_AUTH_TOKEN_REALM: http://localhost:8080/realms/master/protocol/docker-v2/auth REGISTRY_AUTH_TOKEN_SERVICE: docker-test REGISTRY_AUTH_TOKEN_ISSUER: http://localhost:8080/realms/master", "docker login localhost:5000 -u USDusername Password: ******* Login Succeeded", "Authorization: bearer eyJhbGciOiJSUz", "Authorization: basic BASE64(client-id + ':' + client-secret)", "curl -X POST -d '{ \"clientId\": \"myclient\" }' -H \"Content-Type:application/json\" -H \"Authorization: bearer eyJhbGciOiJSUz...\" http://localhost:8080/realms/master/clients-registrations/default", "String token = \"eyJhbGciOiJSUz...\"; ClientRepresentation client = new ClientRepresentation(); client.setClientId(CLIENT_ID); ClientRegistration reg = ClientRegistration.create() .url(\"http://localhost:8080\", \"myrealm\") .build(); reg.auth(Auth.token(token)); client = reg.create(client); String registrationAccessToken = client.getRegistrationAccessToken();", "export PATH=USDPATH:USDKEYCLOAK_HOME/bin kcreg.sh", "c:\\> set PATH=%PATH%;%KEYCLOAK_HOME%\\bin c:\\> kcreg", "kcreg.sh config credentials --server http://localhost:8080 --realm demo --user user --client reg-cli kcreg.sh create -s clientId=my_client -s 'redirectUris=[\"http://localhost:8980/myapp/*\"]' kcreg.sh get my_client", "c:\\> kcreg config credentials --server http://localhost:8080 --realm demo --user user --client reg-cli c:\\> kcreg create -s clientId=my_client -s \"redirectUris=[\\\"http://localhost:8980/myapp/*\\\"]\" c:\\> kcreg get my_client", "kcreg.sh config truststore --trustpass USDPASSWORD ~/.keycloak/truststore.jks", "c:\\> kcreg config truststore --trustpass %PASSWORD% %HOMEPATH%\\.keycloak\\truststore.jks", "kcreg.sh help", "c:\\> kcreg help", "kcreg.sh config initial-token USDTOKEN kcreg.sh create -s clientId=myclient", "kcreg.sh create -s clientId=myclient -t USDTOKEN", "c:\\> kcreg config initial-token %TOKEN% c:\\> kcreg create -s clientId=myclient", "c:\\> kcreg create -s clientId=myclient -t %TOKEN%", "kcreg.sh create -f client-template.json -s clientId=myclient -s baseUrl=/myclient -s 'redirectUris=[\"/myclient/*\"]' -o", "C:\\> kcreg create -f client-template.json -s clientId=myclient -s baseUrl=/myclient -s \"redirectUris=[\\\"/myclient/*\\\"]\" -o", "kcreg.sh get myclient", 
"C:\\> kcreg get myclient", "kcreg.sh get myclient -e install > keycloak.json", "C:\\> kcreg get myclient -e install > keycloak.json", "kcreg.sh get myclient > myclient.json vi myclient.json kcreg.sh update myclient -f myclient.json", "C:\\> kcreg get myclient > myclient.json C:\\> notepad myclient.json C:\\> kcreg update myclient -f myclient.json", "kcreg.sh update myclient -s enabled=false -d redirectUris", "C:\\> kcreg update myclient -s enabled=false -d redirectUris", "kcreg.sh update myclient --merge -d redirectUris -f mychanges.json", "C:\\> kcreg update myclient --merge -d redirectUris -f mychanges.json", "kcreg.sh delete myclient", "C:\\> kcreg delete myclient" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html-single/securing_applications_and_services_guide/index
20.4. Connecting to the Hypervisor with virsh Connect
20.4. Connecting to the Hypervisor with virsh Connect The virsh connect [ hostname-or-URI ] [--readonly] command begins a local hypervisor session using virsh. After the first time you run this command it will run automatically each time the virsh shell runs. The hypervisor connection URI specifies how to connect to the hypervisor. The most commonly used URIs are: qemu:///system - connects locally as the root user to the daemon supervising guest virtual machines on the KVM hypervisor. qemu:///session - connects locally as a user to the user's set of local guest machines using the KVM hypervisor. lxc:/// - connects to a local Linux container. The command can be run as follows, with the target guest being specified either by its machine name (hostname) or the URI of the hypervisor (the output of the virsh uri command), as shown: For example, to establish a session to connect to your set of guest virtual machines, with you as the local user: To initiate a read-only connection, append --readonly to the above command. For more information on URIs, see Remote URIs . If you are unsure of the URI, the virsh uri command will display it:
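(The corresponding example commands appear in the command listing below.) As an aside not found in the original section, the same connection URIs can also be used programmatically from Java. The following minimal sketch assumes the libvirt Java bindings ( org.libvirt ) are available on the class path, and the URI shown is only an example.

```java
import org.libvirt.Connect;
import org.libvirt.LibvirtException;

// Opens a connection with the same URI syntax used by "virsh connect",
// prints the URI reported by the hypervisor, and closes the connection.
public class ConnectExample {
    public static void main(String[] args) throws LibvirtException {
        // Second argument requests a read-only connection, analogous to --readonly.
        Connect conn = new Connect("qemu:///session", true);
        try {
            System.out.println("Connected to: " + conn.getURI());
        } finally {
            conn.close();
        }
    }
}
```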
[ "virsh uri qemu:///session", "virsh connect qemu:///session" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Generic_Commands-connect
4.3. Brocade Fabric Switch
4.3. Brocade Fabric Switch Table 4.4, "Brocade Fabric Switch" lists the fence device parameters used by fence_brocade , the fence agent for Brocade FC switches. Table 4.4. Brocade Fabric Switch (each entry gives the luci field, the cluster.conf attribute in parentheses, and a description)
Name ( name ) - A name for the Brocade device connected to the cluster.
IP Address or Hostname ( ipaddr ) - The IP address or host name assigned to the device.
Login ( login ) - The login name used to access the device.
Password ( passwd ) - The password used to authenticate the connection to the device.
Password Script (optional) ( passwd_script ) - The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Force IP Family ( inet4_only , inet6_only ) - Force the agent to use IPv4 or IPv6 addresses only.
Force Command Prompt ( cmd_prompt ) - The command prompt to use. The default value is '\$'.
Power Wait (seconds) ( power_wait ) - Number of seconds to wait after issuing a power off or power on command.
Power Timeout (seconds) ( power_timeout ) - Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20.
Shell Timeout (seconds) ( shell_timeout ) - Number of seconds to wait for a command prompt after issuing a command. The default value is 3.
Login Timeout (seconds) ( login_timeout ) - Number of seconds to wait for a command prompt after login. The default value is 5.
Times to Retry Power On Operation ( retry_on ) - Number of attempts to retry a power on operation. The default value is 1.
Port ( port ) - The switch outlet number.
Delay (optional) ( delay ) - The number of seconds to wait before fencing is started. The default value is 0.
Use SSH ( secure ) - Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file.
SSH Options ( ssh_options ) - SSH options to use. The default value is -1 -c blowfish .
Path to SSH Identity File ( identity_file ) - The identity file for SSH.
Unfencing ( unfence section of the cluster configuration file ) - When enabled, this ensures that a fenced node is not re-enabled until the node has been rebooted. This is necessary for non-power fence methods (that is, SAN/storage fencing). When you configure a device that requires unfencing, the cluster must first be stopped and the full configuration including devices and unfencing must be added before the cluster is started. For more information about unfencing a node, see the fence_node (8) man page.
Figure 4.3, "Brocade Fabric Switch" shows the configuration screen for adding a Brocade Fabric Switch fence device. Figure 4.3. Brocade Fabric Switch The following command creates a fence device instance for a Brocade device: The following is the cluster.conf entry for the fence_brocade device:
[ "ccs -f cluster.conf --addfencedev brocadetest agent=fence_brocade ipaddr=brocadetest.example.com login=root passwd=password123", "<fencedevices> <fencedevice agent=\"fence_brocade\" ipaddr=\"brocadetest.example.com\" login=\"brocadetest\" name=\"brocadetest\" passwd=\"brocadetest\"/> </fencedevices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-brocade-ca
Chapter 12. SelfSubjectAccessReview [authorization.k8s.io/v1]
Chapter 12. SelfSubjectAccessReview [authorization.k8s.io/v1] Description SelfSubjectAccessReview checks whether or the current user can perform an action. Not filling in a spec.namespace means "in all namespaces". Self is a special case, because users should always be able to check whether they can perform an action Type object Required spec 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SelfSubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set status object SubjectAccessReviewStatus 12.1.1. .spec Description SelfSubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set Type object Property Type Description nonResourceAttributes object NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface resourceAttributes object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface 12.1.2. .spec.nonResourceAttributes Description NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface Type object Property Type Description path string Path is the URL path of the request verb string Verb is the standard HTTP verb 12.1.3. .spec.resourceAttributes Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 12.1.4. .status Description SubjectAccessReviewStatus Type object Required allowed Property Type Description allowed boolean Allowed is required. True if the action would be allowed, false otherwise. denied boolean Denied is optional. 
True if the action would be denied, otherwise false. If both allowed is false and denied is false, then the authorizer has no opinion on whether to authorize the action. Denied may not be true if Allowed is true. evaluationError string EvaluationError is an indication that some error occurred during the authorization check. It is entirely possible to get an error and be able to continue determine authorization status in spite of it. For instance, RBAC can be missing a role, but enough roles are still present and bound to reason about the request. reason string Reason is optional. It indicates why a request was allowed or denied. 12.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/selfsubjectaccessreviews POST : create a SelfSubjectAccessReview 12.2.1. /apis/authorization.k8s.io/v1/selfsubjectaccessreviews Table 12.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a SelfSubjectAccessReview Table 12.2. Body parameters Parameter Type Description body SelfSubjectAccessReview schema Table 12.3. HTTP responses HTTP code Reponse body 200 - OK SelfSubjectAccessReview schema 201 - Created SelfSubjectAccessReview schema 202 - Accepted SelfSubjectAccessReview schema 401 - Unauthorized Empty
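As an illustration of the POST endpoint described above (this sketch is not part of the API reference), the following submits a SelfSubjectAccessReview with the JDK HTTP client and prints the response, which carries status.allowed. The API server address, the bearer token taken from an environment variable, and the resource attributes being checked are all assumptions; TLS trust configuration is omitted for brevity.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Asks the API server whether the current user may list pods in the
// "default" namespace by POSTing a SelfSubjectAccessReview.
public class SelfAccessCheck {
    public static void main(String[] args) throws Exception {
        String body = """
                {
                  "apiVersion": "authorization.k8s.io/v1",
                  "kind": "SelfSubjectAccessReview",
                  "spec": {
                    "resourceAttributes": {
                      "group": "",
                      "resource": "pods",
                      "verb": "list",
                      "namespace": "default"
                    }
                  }
                }
                """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com:6443/apis/authorization.k8s.io/v1/selfsubjectaccessreviews"))
                .header("Authorization", "Bearer " + System.getenv("TOKEN")) // token source is an assumption
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The returned SelfSubjectAccessReview carries status.allowed and, optionally, status.reason.
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```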
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authorization_apis/selfsubjectaccessreview-authorization-k8s-io-v1
5.2. Resolved Issues
5.2. Resolved Issues BZ-1378498 - ClientListener stops working after connection failure After a connection failure to a JDG server, normal operations like get and put recover and work as expected. But a ClientListener stops working. After registering the listener using cache.addClientListener(listener) again, it recovers and works as expected. This issue is resolved as of Red Hat JBoss Data Grid 6.6.2. BZ-1382273 - NPE in CacheNotifierImpl by LIRS eviction listener When inserting 20 key/values in a 10 sized cache (keys from key-1 to key-20) and then reinserting the first 10 keys (key-1 to key-10) while also using LIRS eviction strategy and listener, a NullPointerException is thrown in CacheNotifierImpl.notifyCacheEntriesEvicted . This issue existed in both Remote Client-Server mode and Library mode. This issue is resolved as of Red Hat JBoss Data Grid 6.6.2 BZ-1383945 - Expiration is not working under some circumstances with AtomicMap Previously, when using AtomicMaps and writeskew, lifespan as set in the configuration wasn't applied. This has been addressed in JBoss Data Grid 6.6.2 and now lifespan in the configuration is honored. BZ-1388562 - Expiration is not applied to a repeatable read entry that was read as null prior The expiration metadata is not applied to a newly created entry that was read in the same transaction as a null. This issue is resolved as of Red Hat JBoss Data Grid 6.6.2. BZ-1379414 - @CacheEntryExpired not getting invoked for non auto-commit cache @CacheEntryExpired listener method is not invoked for non auto-commit cache. When auto commit is true the listener method is invoked. @CacheEntryCreated is invoked in both configs. This issue is resolved as of Red Hat JBoss Data Grid 6.6.2. BZ-1412752 - MissingFormatArgumentException thrown by PreferConsistencyStrategy if debug mode is enabled on state-transfer or merge Methods PreferConsistencyStrategy and StateConsumerImpl contained non-typesafe printf errors. This issue is resolved as of Red Hat JBoss Data Grid 6.6.2. BZ-1428027 - DMR operation register-proto-schemas fails with NPE if the proto file has syntax errors If the proto file has syntax errors it isn't placed in the ___protobuf_metadata cache as it should be. Additionally, a myFileWithSyntaxErrors.proto.errors key fails to be created and an exception is thrown. This issue is resolved as of Red Hat JBoss Data Grid 6.6.2. BZ-1431965 - SimpleDateFormat used in REST server is not thread safe org.infinispan.rest.Server has a static field of DatePatternRfc1123LocaleUS , wihich is an instance of SimpleDateFormat . This causes a java.lang.ArrayIndexOutOfBoundsException under load. This issue is resolved as of Red Hat JBoss Data Grid 6.6.2. BZ-1425687 - JDG client unable to resolve the system property in external-host jdg6 If infinispan.external_addr is defined in a server configuration file like clustered.xml , the JBoss Data Grid client will be unable to resolve the address. This issue is resolved as of Red Hat JBoss Data Grid 6.6.2. BZ-1440102 - Cache clear doesn't work when passivation is enabled Clearing the cache map doesn't work under the following configuration: JBoss Data Grid is running in Library mode inside JBoss EAP 6.4.6 with passivation enabled. In this case the size will never be reduced to 0. This issue is resolved as of Red Hat JBoss Data Grid 6.6.2. 
BZ-1388888 - LIRS Eviction with local cache under high load fail with a NullPointerException at BoundedEquivalentConcurrentHashMapV8.java:1414 When using LIRSEvictionPolicy in some situations a NullPointerException can occur in BoundedEquivalentConcurrentHashMapV8 . The error is harmless but the issue has been fixed. This issue is resolved as of Red Hat JBoss Data Grid 6.6.2. BZ-1448366 - HotRod client write buffer is too large The Hot Rod client uses more memory than it should. The buffering implementation of TcpTransport.socketOutputStream does not use a fixed sized input buffer. Instead it grows as BufferedInputStream does. A growing buffer can lead to excessive memory consumption in the heap, as well as in the native memory of the operating system. This issue is resolved as of Red Hat JBoss Data Grid 6.6.2. BZ-1435617 - Rolling upgrade fails with java.lang.ClassCastException Previously, performing a rolling upgrade could fail with a java.lang.ClassCastException of either org.infinispan.container.entries.RepeatableReadEntry cannot be cast to org.infinispan.container.entries.InternalCacheEntry or SimpleClusteredVersion cannot be cast to NumericVersion . This issue is resolved as of Red Hat JBoss Data Grid 6.6.2. BZ-1435618 - Hot Rod Rolling Upgrade throws TimeOutException When doing a rolling upgrade that takes more than a few minutes to complete a TimeOutException could be thrown. This issue is resolved as of Red Hat JBoss Data Grid 6.6.2. BZ-1435620 - Rolling Upgrade: use of Remote Store in mode read-only causes data inconsistencies Previously, during Hot Rod rolling upgrades, write operations executed by the client on the target cluster were ignored. This could cause unexpected results in the application, such as entries being deleted that weren't intentionally deleted. This issue is resolved as of Red Hat JBoss Data Grid 6.6.2. Report a bug
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.2_release_notes/resolved_issues
Appendix A. Using Your Subscription
Appendix A. Using Your Subscription AMQ Streams is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ Streams entries in the JBOSS INTEGRATION AND AUTOMATION category. Select the desired AMQ Streams product. The Software Downloads page opens. Click the Download link for your component. Revised on 2022-02-01 16:35:06 UTC
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/deploying_and_upgrading_amq_streams_on_openshift/using_your_subscription
Chapter 20. Working with containers using Buildah
Chapter 20. Working with containers using Buildah With Buildah, you can do several operations on a container image or container from the command line. Examples of operations are: create a working container from scratch or from a container image as a starting point, create an image from a working container or using a Containerfile , configure a container's entrypoint, labels, port, shell, and working directory. You can mount working containers directories for filesystem manipulation, delete a working container or container image, and more. You can then create an image from a working container and push the image to the registry. 20.1. Running commands inside of the container Use the buildah run command to execute a command from the container. Prerequisites The container-tools module is installed. A pulled image is available on the local system. Procedure Display the operating system version: Additional resources buildah-run man page on your system 20.2. Inspecting containers and images with Buildah Use the buildah inspect command to display information about a container or image. Prerequisites The container-tools module is installed. An image was built using instructions from Containerfile. For details, see section Building an image from a Containerfile with Buildah . Procedure Inspect the image: To inspect the myecho image, enter: To inspect the working container from the myecho image: Create a working container based on the localhost/myecho image: Inspect the myecho-working-container container: Additional resources buildah-inspect man page on your system 20.3. Modifying a container using buildah mount Use the buildah mount command to display information about a container or image. Prerequisites The container-tools module is installed. An image built using instructions from Containerfile. For details, see section Building an image from a Containerfile with Buildah . Procedure Create a working container based on the registry.access.redhat.com/ubi8/ubi image and save the name of the container to the mycontainer variable: Mount the myecho-working-container container and save the mount point path to the mymount variable: Modify the myecho script and make it executable: Create the myecho2 image from the myecho-working-container container: Verification List all images in local storage: Run the myecho2 container based on the docker.io/library/myecho2 image: Additional resources buildah-mount and buildah-commit man pages on your system 20.4. Modifying a container using buildah copy and buildah config Use buildah copy command to copy files to a container without mounting it. You can then configure the container using the buildah config command to run the script you created by default. Prerequisites The container-tools module is installed. An image built using instructions from Containerfile. For details, see section Building an image from a Containerfile with Buildah . Procedure Create a script named newecho and make it executable: Create a new working container: Copy the newecho script to /usr/local/bin directory inside the container: Change the configuration to use the newecho script as the new entrypoint: Optional: Run the myecho-working-container-2 container whixh triggers the newecho script to be executed: Commit the myecho-working-container-2 container to a new image called mynewecho : Verification List all images in local storage: Additional resources buildah-copy , buildah-config , buildah-commit , buildah-run man pages on your system 20.5. 
Pushing containers to a private registry Use buildah push command to push an image from local storage to a public or private repository. Prerequisites The container-tools module is installed. An image was built using instructions from Containerfile. For details, see section Building an image from a Containerfile with Buildah . Procedure Create the local registry on your machine: Push the myecho:latest image to the localhost registry: Verification List all images in the localhost repository: Inspect the docker://localhost:5000/myecho:latest image: Pull the localhost:5000/myecho image: Additional resources buildah-push man page on your system 20.6. Pushing containers to the Docker Hub Use your Docker Hub credentials to push and pull images from the Docker Hub with the buildah command. Prerequisites The container-tools module is installed. An image built using instructions from Containerfile. For details, see section Building an image from a Containerfile with Buildah . Procedure Push the docker.io/library/myecho:latest to your Docker Hub. Replace username and password with your Docker Hub credentials: Verification Get and run the docker.io/testaccountXX/myecho:latest image: Using Podman tool: Using Buildah and Podman tools: Additional resources buildah-push man page on your system 20.7. Removing containers with Buildah Use the buildah rm command to remove containers. You can specify containers for removal with the container ID or name. Prerequisites The container-tools module is installed. At least one container has been stopped. Procedure List all containers: Remove the myecho-working-container container: Verification Ensure that containers were removed: Additional resources buildah-rm man page on your system
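The procedures in this chapter assume an image built from a Containerfile, as described in the referenced section. A minimal sketch of that prerequisite step is shown below; the myecho script, Containerfile contents, and image tag are illustrative assumptions rather than the exact text of that section.

cat > myecho <<'EOF'
#!/bin/sh
echo "This container works!"
EOF

cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi
ADD myecho /usr/local/bin
RUN chmod 755 /usr/local/bin/myecho
ENTRYPOINT "/usr/local/bin/myecho"
EOF

# Build the image from the Containerfile and tag it as localhost/myecho
buildah bud -t myecho .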
[ "buildah run ubi-working-container cat /etc/redhat-release Red Hat Enterprise Linux release 8.4 (Ootpa)", "buildah inspect localhost/myecho { \"Type\": \"buildah 0.0.1\", \"FromImage\": \"localhost/myecho:latest\", \"FromImageID\": \"b28cd00741b38c92382ee806e1653eae0a56402bcd2c8d31bdcd36521bc267a4\", \"FromImageDigest\": \"sha256:0f5b06cbd51b464fabe93ce4fe852a9038cdd7c7b7661cd7efef8f9ae8a59585\", \"Config\": \"Entrypoint\": [ \"/bin/sh\", \"-c\", \"\\\"/usr/local/bin/myecho\\\"\" ], }", "buildah from localhost/myecho", "buildah inspect ubi-working-container { \"Type\": \"buildah 0.0.1\", \"FromImage\": \"registry.access.redhat.com/ubi8/ubi:latest\", \"FromImageID\": \"272209ff0ae5fe54c119b9c32a25887e13625c9035a1599feba654aa7638262d\", \"FromImageDigest\": \"sha256:77623387101abefbf83161c7d5a0378379d0424b2244009282acb39d42f1fe13\", \"Config\": \"Container\": \"ubi-working-container\", \"ContainerID\": \"01eab9588ae1523746bb706479063ba103f6281ebaeeccb5dc42b70e450d5ad0\", \"ProcessLabel\": \"system_u:system_r:container_t:s0:c162,c1000\", \"MountLabel\": \"system_u:object_r:container_file_t:s0:c162,c1000\", }", "mycontainer=USD(buildah from localhost/myecho) echo USDmycontainer myecho-working-container", "mymount=USD(buildah mount USDmycontainer) echo USDmymount /var/lib/containers/storage/overlay/c1709df40031dda7c49e93575d9c8eebcaa5d8129033a58e5b6a95019684cc25/merged", "echo 'echo \"We modified this container.\"' >> USDmymount/usr/local/bin/myecho chmod +x USDmymount/usr/local/bin/myecho", "buildah commit USDmycontainer containers-storage:myecho2", "buildah images REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/library/myecho2 latest 4547d2c3e436 4 minutes ago 234 MB localhost/myecho latest b28cd00741b3 56 minutes ago 234 MB", "podman run --name=myecho2 docker.io/library/myecho2 This container works! 
We even modified it.", "cat newecho echo \"I changed this container\" chmod 755 newecho", "buildah from myecho:latest myecho-working-container-2", "buildah copy myecho-working-container-2 newecho /usr/local/bin", "buildah config --entrypoint \"/bin/sh -c /usr/local/bin/newecho\" myecho-working-container-2", "buildah run myecho-working-container-2 -- sh -c '/usr/local/bin/newecho' I changed this container", "buildah commit myecho-working-container-2 containers-storage:mynewecho", "buildah images REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/library/mynewecho latest fa2091a7d8b6 8 seconds ago 234 MB", "podman run -d -p 5000:5000 registry:2", "buildah push --tls-verify=false myecho:latest localhost:5000/myecho:latest Getting image source signatures Copying blob sha256:e4efd0 Writing manifest to image destination Storing signatures", "curl http://localhost:5000/v2/_catalog {\"repositories\":[\"myecho2]} curl http://localhost:5000/v2/myecho2/tags/list {\"name\":\"myecho\",\"tags\":[\"latest\"]}", "skopeo inspect --tls-verify=false docker://localhost:5000/myecho:latest | less { \"Name\": \"localhost:5000/myecho\", \"Digest\": \"sha256:8999ff6050...\", \"RepoTags\": [ \"latest\" ], \"Created\": \"2021-06-28T14:44:05.919583964Z\", \"DockerVersion\": \"\", \"Labels\": { \"architecture\": \"x86_64\", \"authoritative-source-url\": \"registry.redhat.io\", }", "podman pull --tls-verify=false localhost:5000/myecho2 podman run localhost:5000/myecho2 This container works!", "buildah push --creds username:password docker.io/library/myecho:latest docker://testaccountXX/myecho:latest", "podman run docker.io/testaccountXX/myecho:latest This container works!", "buildah from docker.io/testaccountXX/myecho:latest myecho2-working-container-2 podman run myecho-working-container-2", "buildah containers CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME 05387e29ab93 * c37e14066ac7 docker.io/library/myecho:latest myecho-working-container", "buildah rm myecho-working-container 05387e29ab93151cf52e9c85c573f3e8ab64af1592b1ff9315db8a10a77d7c22", "buildah containers" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/building_running_and_managing_containers/working-with-containers-using-buildah
Chapter 1. Overview of the OpenShift Data Foundation update process
Chapter 1. Overview of the OpenShift Data Foundation update process This chapter helps you to upgrade between the minor releases and z-streams for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. You can upgrade OpenShift Data Foundation and its components, either between minor releases like 4.14 and 4.15, or between z-stream updates like 4.15.0 and 4.15.1 by enabling automatic updates (if not done so during operator installation) or performing manual updates. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. Extended Update Support (EUS) EUS to EUS upgrade in OpenShift Data Foundation is sequential and it is aligned with OpenShift upgrade. For more information, see Performing an EUS-to-EUS update and EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager . For EUS upgrade of OpenShift Container Platform and OpenShift Data Foundation, make sure that OpenShift Data Foundation is upgraded along with OpenShift Container Platform and compatibility between OpenShift Data Foundation and OpenShift Container Platform is always maintained. Example workflow of EUS upgrade: Pause the worker machine pools. Update OpenShift <4.y> OpenShift <4.y+1>. Update OpenShift Data Foundation <4.y> OpenShift Data Foundation <4.y+1>. Update OpenShift <4.y+1> OpenShift <4.y+2>. Update to OpenShift Data Foundation <4.y+2>. Unpause the worker machine pools. Note You can update to ODF <4.y+2> either before or after worker machine pools are unpaused. Important When you update OpenShift Data Foundation in external mode, make sure that the Red Had Ceph Storage and OpenShift Data Foundation versions are compatible. For more information about supported Red Had Ceph Storage version in external mode, refer to Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Provide the required OpenShift Data Foundation version in the checker to see the supported Red Had Ceph version corresponding to the version in use. You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation. To prepare a disconnected environment for updates , see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update OpenShift Data Foundation as well as Local Storage Operator when in use. For updating between minor releases , see Updating Red Hat OpenShift Data Foundation 4.14 to 4.15 . For updating between z-stream releases , see Updating Red Hat OpenShift Data Foundation 4.15.x to 4.15.y . For updating external mode deployments , you must also perform the steps from section Updating the Red Hat OpenShift Data Foundation external secret . If you use local storage, then update the Local Storage operator . See Checking for Local Storage Operator deployments if you are unsure. Important If you have an existing setup of OpenShift Data Foundation 4.12 with disaster recovery (DR) enabled, ensure to update all your clusters in the environment at the same time and avoid updating a single cluster. This is to avoid any potential issues and maintain best compatibility. 
It is also important to maintain consistency across all OpenShift Data Foundation DR instances. Update considerations Review the following important considerations before you begin. Ensure that the Red Hat OpenShift Container Platform version is the same as the Red Hat OpenShift Data Foundation version. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. To know whether your cluster was deployed in internal or external mode, refer to the knowledgebase article on How to determine if ODF cluster has storage in internal or external mode . The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, the applicative data residing on the Multicloud Object Gateway can be lost entirely. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article .
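For reference, pausing and unpausing the worker machine pools mentioned in the EUS workflow is typically done by patching the MachineConfigPool resource. The following is a minimal sketch that assumes the default worker pool name; adjust the pool name if you use custom pools.

# Pause the worker machine config pool before starting the EUS update
oc patch machineconfigpool/worker --type merge --patch '{"spec":{"paused":true}}'

# Unpause the pool after OpenShift Container Platform and OpenShift Data Foundation are updated
oc patch machineconfigpool/worker --type merge --patch '{"spec":{"paused":false}}'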
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/updating_openshift_data_foundation/overview-of-the-openshift-data-foundation-update-process_rhodf
Chapter 10. Using report templates to monitor hosts
Chapter 10. Using report templates to monitor hosts You can use report templates to query Satellite data to obtain information about, for example, host status, registered hosts, applicable errata, applied errata, subscription details, and user activity. You can use the report templates that ship with Satellite or write your own custom report templates to suit your requirements. The reporting engine uses the embedded Ruby (ERB) syntax. For more information about writing templates and ERB syntax, see Appendix B, Template writing reference . You can create a template, or clone a template and edit the clone. For help with the template syntax, click a template and click the Help tab. 10.1. Generating host monitoring reports To view the report templates in the Satellite web UI, navigate to Monitor > Reports > Report Templates . To schedule reports, configure a cron job or use the Satellite web UI. Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . For example, the following templates are available: Host - Installed Products Use this template for hosts in Simple Content Access (SCA) organizations. It generates a report with installed product information along with other metrics included in Subscription - Entitlement Report except information about subscriptions. Subscription - Entitlement Report Use this template for hosts that are not in SCA organizations. It generates a report with information about subscription entitlements including when they expire. It only outputs information for hosts in organizations that do not use SCA. To the right of the report template that you want to use, click Generate . Optional: To schedule a report, to the right of the Generate at field, click the icon to select the date and time you want to generate the report at. Optional: To send a report to an e-mail address, select the Send report via e-mail checkbox, and in the Deliver to e-mail addresses field, enter the required e-mail address. Optional: Apply search query filters. To view all available results, do not populate the filter field with any values. Click Submit . A CSV file that contains the report is downloaded. If you have selected the Send report via e-mail checkbox, the host monitoring report is sent to your e-mail address. CLI procedure List all available report templates: Generate a report: This command waits until the report fully generates before completing. If you want to generate the report as a background task, you can use the hammer report-template schedule command. Note If you want to generate a subscription entitlement report, you have to use the Days from Now option to specify the latest expiration time of entitlement subscriptions. You can use the no limit value to show all entitlements. Show all entitlements Show all entitlements that are going to expire within 60 days 10.2. Creating a report template In Satellite, you can create a report template and customize the template to suit your requirements. You can import existing report templates and further customize them with snippets and template macros. Report templates use Embedded Ruby (ERB) syntax. To view information about working with ERB syntax and macros, in the Satellite web UI, navigate to Monitor > Reports > Report Templates , and click Create Template , and then click the Help tab. When you create a report template in Satellite, safe mode is enabled by default. Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . Click Create Template . 
In the Name field, enter a unique name for your report template. If you want the template to be available to all locations and organizations, select Default . Create the template directly in the template editor or import a template from a text file by clicking Import . For more information about importing templates, see Section 10.5, "Importing report templates" . Optional: In the Audit Comment field, you can add any useful information about this template. Click the Input tab, and in the Name field, enter a name for the input that you can reference in the template in the following format: input('name') . Note that you must save the template before you can reference this input value in the template body. Select whether the input value is mandatory. If the input value is mandatory, select the Required checkbox. From the Value Type list, select the type of input value that the user must input. Optional: If you want to use facts for template input, select the Advanced checkbox. Optional: In the Options field, define the options that the user can select from. If this field remains undefined, the users receive a free-text field in which they can enter the value they want. Optional: In the Default field, enter a value, for example, a host name, that you want to set as the default template input. Optional: In the Description field, you can enter information that you want to display as inline help about the input when you generate the report. Optional: Click the Type tab, and select whether this template is a snippet to be included in other templates. Click the Location tab and add the locations where you want to use the template. Click the Organizations tab and add the organizations where you want to use the template. Click Submit to save your changes. Additional resources For more information about safe mode, see Section 10.9, "Report template safe mode" . For more information about writing templates, see Appendix B, Template writing reference . For more information about macros you can use in report templates, see Section B.6, "Template macros" . To view a step by step example of populating a template, see Section 10.8, "Creating a report template to monitor entitlements" . 10.3. Exporting report templates You can export report templates that you create in Satellite. Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . Locate the template that you want to export, and from the list in the Actions column, select Export . Repeat this action for every report template that you want to download. An .erb file that contains the template downloads. CLI procedure To view the report templates available for export, enter the following command: Note the template ID of the template that you want to export in the output of this command. To export a report template, enter the following command: 10.4. Exporting report templates using the Satellite API You can use the Satellite report_templates API to export report templates from Satellite. For more information about using the Satellite API, see API guide . Procedure Use the following request to retrieve a list of available report templates: Example request: In this example, the json_reformat tool is used to format the JSON output. Example response: Note the id of the template that you want to export, and use the following request to export the template: Example request: Note that 158 is an example ID of the template to export. In this example, the exported template is redirected to host_complete_list.erb . 10.5. 
Importing report templates You can import a report template into the body of a new template that you want to create. Note that using the Satellite web UI, you can only import templates individually. For bulk actions, use the Satellite API. For more information, see Section 10.6, "Importing report templates using the Satellite API" . Prerequisites You must have exported templates from Satellite to import them to use in new templates. For more information see Section 10.3, "Exporting report templates" . Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . In the upper right of the Report Templates window, click Create Template . On the upper right of the Editor tab, click the folder icon, and select the .erb file that you want to import. Edit the template to suit your requirements. Click Submit . For more information about customizing your new template, see Appendix B, Template writing reference . 10.6. Importing report templates using the Satellite API You can use the Satellite API to import report templates into Satellite. Importing report templates using the Satellite API automatically parses the report template metadata and assigns organizations and locations. For more information about using the Satellite API, see the API guide . Prerequisites Create a template using .erb syntax or export a template from another Satellite. For more information about writing templates, see Appendix B, Template writing reference . For more information about exporting templates from Satellite, see Section 10.4, "Exporting report templates using the Satellite API" . Procedure Use the following example to format the template that you want to import to a .json file: Example JSON file with ERB template: Use the following request to import the template: Use the following request to retrieve a list of report templates and validate that you can view the template in Satellite: 10.7. Generating a list of installed packages Use this procedure to generate a list of installed packages in Report Templates . Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . To the right of Host - All Installed Packages , click Generate . Optional: Use the Hosts filter search field to search for and apply specific host filters. Click Generate . If the download does not start automatically, click Download . Verification You have the spreadsheet listing the installed packages for the selected hosts downloaded on your machine. 10.8. Creating a report template to monitor entitlements You can use a report template to return a list of hosts with a certain subscription and to display the number of cores for those hosts. For more information about writing templates, see Appendix B, Template writing reference . Procedure In the Satellite web UI, navigate to Monitor > Reports > Report Templates . Click Create Template . Optional: In the Editor field, use the <%# > tags to add a comment with information that might be useful for later reference. For example: Add a line with the load_hosts() macro and populate the macro with the following method and variables: To view a list of variables you can use, click the Help tab and in the Safe mode methods and variables table, find the Host::Managed row. 
Add a line with the host.pools variable with the each method, for example: Add a line with the report_row() method to create a report and add the variables that you want to target as part of the report: Add end statements to the template: To generate a report, you must add the <%= report_render -%> macro: Click Submit to save the template. 10.9. Report template safe mode When you create report templates in Satellite, safe mode is enabled by default. Safe mode limits the macros and variables that you can use in the report template. Safe mode prevents rendering problems and enforces best practices in report templates. The list of supported macros and variables is available in the Satellite web UI. To view the macros and variables that are available, in the Satellite web UI, navigate to Monitor > Reports > Report Templates and click Create Template . In the Create Template window, click the Help tab and expand Safe mode methods . While safe mode is enabled, if you try to use a macro or variable that is not listed in Safe mode methods , the template editor displays an error message. To view the status of safe mode in Satellite, in the Satellite web UI, navigate to Administer > Settings and click the Provisioning tab. Locate the Safemode rendering row to check the value.
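As an illustration of the cron-based scheduling mentioned at the start of this chapter, an entry similar to the following could generate a report on a fixed schedule with hammer. The template name, output path, and schedule are examples only; substitute a report template that exists on your Satellite.

# Example crontab entry: every Monday at 06:00, write the "Host statuses" report to a dated CSV file
0 6 * * 1 hammer report-template generate --name "Host statuses" > /var/tmp/host-statuses-$(date +\%F).csv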
[ "hammer report-template list", "hammer report-template generate --id My_Template_ID", "hammer report-template generate --inputs \"Days from Now=no limit\" --name \"Subscription - Entitlement Report\"", "hammer report-template generate --inputs \"Days from Now=60\" --name \"Subscription - Entitlement Report\"", "hammer report-template list", "hammer report-template dump --id My_Template_ID > example_export .erb", "curl --insecure --user admin:redhat --request GET --config https:// satellite.example.com /api/report_templates | json_reformat", "{ \"total\": 6, \"subtotal\": 6, \"page\": 1, \"per_page\": 20, \"search\": null, \"sort\": { \"by\": null, \"order\": null }, \"results\": [ { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Applicable errata\", \"id\": 112 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Applied Errata\", \"id\": 113 }, { \"created_at\": \"2019-11-30 16:15:24 UTC\", \"updated_at\": \"2019-11-30 16:15:24 UTC\", \"name\": \"Hosts - complete list\", \"id\": 158 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Host statuses\", \"id\": 114 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Registered hosts\", \"id\": 115 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Subscriptions\", \"id\": 116 } ] }", "curl --insecure --output /tmp/_Example_Export_Template .erb_ --user admin:password --request GET --config https:// satellite.example.com /api/report_templates/ My_Template_ID /export", "cat Example_Template .json { \"name\": \" Example Template Name \", \"template\": \" Enter ERB Code Here \" }", "{ \"name\": \"Hosts - complete list\", \"template\": \" <%# name: Hosts - complete list snippet: false template_inputs: - name: host required: false input_type: user advanced: false value_type: plain resource_type: Katello::ActivationKey model: ReportTemplate -%> <% load_hosts(search: input('host')).each_record do |host| -%> <% report_row( 'Server FQDN': host.name ) -%> <% end -%> <%= report_render %> \" }", "curl --insecure --user admin:redhat --data @ Example_Template .json --header \"Content-Type:application/json\" --request POST --config https:// satellite.example.com /api/report_templates/import", "curl --insecure --user admin:redhat --request GET --config https:// satellite.example.com /api/report_templates | json_reformat", "<%# name: Entitlements snippet: false model: ReportTemplate require: - plugin: katello version: 3.14.0 -%>", "<%- load_hosts(includes: [:lifecycle_environment, :operatingsystem, :architecture, :content_view, :organization, :reported_data, :subscription_facet, :pools => [:subscription]]).each_record do |host| -%>", "<%- host.pools.each do |pool| -%>", "<%- report_row( 'Name': host.name, 'Organization': host.organization, 'Lifecycle Environment': host.lifecycle_environment, 'Content View': host.content_view, 'Host Collections': host.host_collections, 'Virtual': host.virtual, 'Guest of Host': host.hypervisor_host, 'OS': host.operatingsystem, 'Arch': host.architecture, 'Sockets': host.sockets, 'RAM': host.ram, 'Cores': host.cores, 'SLA': host_sla(host), 'Products': host_products(host), 'Subscription Name': sub_name(pool), 'Subscription Type': pool.type, 'Subscription Quantity': pool.quantity, 'Subscription SKU': sub_sku(pool), 'Subscription Contract': pool.contract_number, 
'Subscription Account': pool.account_number, 'Subscription Start': pool.start_date, 'Subscription End': pool.end_date, 'Subscription Guest': registered_through(host) ) -%>", "<%- end -%> <%- end -%>", "<%= report_render -%>" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/Using_Report_Templates_to_Monitor_Hosts_managing-hosts
Chapter 1. Introduction to the Enterprise Security Client
Chapter 1. Introduction to the Enterprise Security Client The Enterprise Security Client is a tool for Red Hat Certificate System which simplifies managing smart cards. End users can use security tokens (smart cards) to store user certificates used for applications such as single sign-on access and client authentication. End users are issued the tokens containing certificates and keys required for signing, encryption, and other cryptographic functions. After a token is enrolled, applications such as Mozilla Firefox and Thunderbird can be configured to recognize the token and use it for security operations, like client authentication and S/MIME mail. The Enterprise Security Client provides the following capabilities: Supports Global Platform-compliant smart cards. Enrolls security tokens so they are recognized by the token management system in Red Hat Certificate System. Maintains the security token, such as re-enrolling a token. Provides information about the current status of the token or tokens being managed. Supports server-side key generation through the Certificate System subsystems so that keys can be archived and recovered on a separate token if a token is lost. 1.1. Red Hat Enterprise Linux, Single Sign-On, and Authentication Network users frequently have to submit multiple passwords for the various services they use, such as email, web browsing and intranets, and servers on the network. Maintaining multiple passwords, and constantly being prompted to enter them, is a hassle for users and administrators. Single sign-on is a configuration which allows administrators to create a single password store so that users can log in once, using a single password, and be authenticated to all network resources. Red Hat Enterprise Linux 6 supports single sign-on for several resources, including logging into workstations and unlocking screensavers, accessing encrypted web pages using Mozilla Firefox, and sending encrypted email using Mozilla Thunderbird. Single sign-on is both a convenience to users and another layer of security for the server and the network. Single sign-on hinges on secure and effective authentication. Red Hat Enterprise Linux provides two authentication mechanisms which can be used to enable single sign-on: Kerberos-based authentication Smart card-based authentication, using the Enterprise Security Client tied into the public-key infrastructure implemented by Red Hat Certificate System One of the cornerstones of establishing a secure network environment is making sure that access is restricted to people who have the right to access the network. If access is allowed, users can authenticate to the system, meaning they can verify their identities. Many systems use Kerberos to establish a system of short-lived credentials, called tickets , which are generated ad hoc at a user request. The user is required to present credentials in the form of a username-password pair that identify the user and indicate to the system that the user can be issued a ticket. This ticket can be referenced repeatedly by other services, like websites and email, requiring the user to go through only a single authentication process. An alternative method of verifying an identity is presenting a certificate. A certificate is an electronic document which identifies the entity which presents it. With smart card-based authentication, these certificates are stored on a small hardware device called a smart card or token. 
When a user inserts a smart card, the smart card presents the certificates to the system and identifies the user so the user can be authenticated. Single sign-on using smart cards goes through three steps: A user inserts a smart card into the card reader. This is detected by the pluggable authentication modules (PAM) on Red Hat Enterprise Linux. The system maps the certificate to the user entry and then compares the presented certificates on the smart card to the certificates stored in the user entry. If the certificate is successfully validated against the key distribution center (KDC), then the user is allowed to log in. Smart card-based authentication builds on the simple authentication layer established by Kerberos by adding additional identification mechanisms (certificates) and physical access requirements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_smart_cards/introduction
Chapter 15. What huge pages do and how they are consumed by applications
Chapter 15. What huge pages do and how they are consumed by applications 15.1. What huge pages do Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 256,000 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size. A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP. In OpenShift Container Platform, applications in a pod can allocate and consume pre-allocated huge pages. 15.2. How huge pages are consumed by apps Nodes must pre-allocate huge pages in order for the node to report its huge page capacity. A node can only pre-allocate huge pages for a single size. Huge pages can be consumed through container-level resource requirements using the resource name hugepages-<size> , where size is the most compact binary notation using integer values supported on a particular node. For example, if a node supports 2048KiB page sizes, it exposes a schedulable resource hugepages-2Mi . Unlike CPU or memory, huge pages do not support over-commitment. apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: "1Gi" cpu: "1" volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the amount of memory for hugepages as the exact amount to be allocated. Do not specify this value as the amount of memory for hugepages multiplied by the size of the page. For example, given a huge page size of 2MB, if you want to use 100MB of huge-page-backed RAM for your application, then you would allocate 50 huge pages. OpenShift Container Platform handles the math for you. As in the above example, you can specify 100MB directly. Allocating huge pages of a specific size Some platforms support multiple huge page sizes. To allocate huge pages of a specific size, precede the huge pages boot command parameters with a huge page size selection parameter hugepagesz=<size> . The <size> value must be specified in bytes with an optional scale suffix [ kKmMgG ]. The default huge page size can be defined with the default_hugepagesz=<size> boot parameter. Huge page requirements Huge page requests must equal the limits. 
This is the default if limits are specified, but requests are not. Huge pages are isolated at a pod scope. Container isolation is planned in a future iteration. EmptyDir volumes backed by huge pages must not consume more huge page memory than the pod request. Applications that consume huge pages via shmget() with SHM_HUGETLB must run with a supplemental group that matches proc/sys/vm/hugetlb_shm_group . Additional resources Configuring Transparent Huge Pages 15.3. Configuring huge pages Nodes must pre-allocate huge pages used in an OpenShift Container Platform cluster. There are two ways of reserving huge pages: at boot time and at run time. Reserving at boot time increases the possibility of success because the memory has not yet been significantly fragmented. The Node Tuning Operator currently supports boot time allocation of huge pages on specific nodes. 15.3.1. At boot time Procedure To minimize node reboots, the order of the steps below needs to be followed: Label all nodes that need the same huge pages setting by a label. USD oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp= Create a file with the following content and name it hugepages-tuned-boottime.yaml : apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: "worker-hp" priority: 30 profile: openshift-node-hugepages 1 Set the name of the Tuned resource to hugepages . 2 Set the profile section to allocate huge pages. 3 Note the order of parameters is important as some platforms support huge pages of various sizes. 4 Enable machine config pool based matching. Create the Tuned hugepages profile USD oc create -f hugepages-tuned-boottime.yaml Create a file with the following content and name it hugepages-mcp.yaml : apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: "" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: "" Create the machine config pool: USD oc create -f hugepages-mcp.yaml Given enough non-fragmented memory, all the nodes in the worker-hp machine config pool should now have 50 2Mi huge pages allocated. USD oc get node <node_using_hugepages> -o jsonpath="{.status.allocatable.hugepages-2Mi}" 100Mi Warning This functionality is currently only supported on Red Hat Enterprise Linux CoreOS (RHCOS) 8.x worker nodes. On Red Hat Enterprise Linux (RHEL) 7.x worker nodes the Tuned [bootloader] plug-in is currently not supported.
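As a concrete illustration of the boot parameters described above, allocating 1Gi pages combines the size selector, the default size, and the page count; the allocation can then be checked on the node. The values and node name below are examples only.

# Example kernel arguments for sixteen 1Gi huge pages (set through the Tuned [bootloader] section)
default_hugepagesz=1G hugepagesz=1G hugepages=16

# Verify the pre-allocated pages on the node
oc debug node/<node_using_hugepages> -- chroot /host grep -i HugePages /proc/meminfo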
[ "apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages", "oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages", "oc create -f hugepages-tuned-boottime.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"", "oc create -f hugepages-mcp.yaml", "oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed
5.118. iscsi-initiator-utils
5.118. iscsi-initiator-utils 5.118.1. RHBA-2012:0957 - iscsi-initiator-utils bug fix and enhancement update Updated iscsi-initiator-utils packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The iscsi-initiator-utils package provides the server daemon for the iSCSI protocol, as well as utilities used to manage the daemon. iSCSI is a protocol for distributed disk access using SCSI commands sent over Internet Protocol networks. The iscsiuio tool has been upgraded to upstream version 0.7.2.1, which provides a number of bug fixes and one enhancement over the previous version. (BZ# 740054 ) Bug Fixes BZ# 738192 The iscsistart utility used hard-coded values as its settings. Consequently, it could take several minutes before failure detection and path failover took place when using dm-multipath. With this update, the iscsistart utility has been modified to process settings provided on the command line. BZ# 739049 The iSCSI README file incorrectly listed the --info option as the option to display iscsiadm iSCSI information. The README has been corrected and it now states correctly that you need to use the "-P 1" argument to obtain such information. BZ# 739843 The iSCSI discovery process via a TOE (TCP Offload Engine) interface failed if the "iscsiadm -m iface" command had not been executed. This happened because the "iscsiadm -m" discovery command did not check interface settings. With this update, the iscsiadm tool creates the default ifaces settings when first used and the problem no longer occurs. BZ# 796574 If the port number was passed with a non-fully-qualified hostname to the iscsiadm tool, the tool created records with the port being part of the hostname. Consequently, the login or discovery operation failed because iscsiadm was not able to find the record. With this update, the iscsiadm portal parser has been modified to separate the port from the hostname. As a result, the port is parsed and processed correctly. Enhancement BZ# 790609 The iscsiadm tool has been updated to support the ping command using QLogic's iSCSI offload cards and to manage the CHAP (Challenge-Handshake Authentication Protocol) entries on the host. All users of iscsi-initiator-utils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
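To illustrate the corrected README guidance and the iface behavior referenced above, the following commands show the "-P 1" session output and the iface records. They are run on an initiator host; no target-specific values are assumed.

# Print detailed information about active iSCSI sessions (the "-P 1" argument noted above)
iscsiadm -m session -P 1

# List the iface records; with this update the default iface settings are created on first use
iscsiadm -m iface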
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/iscsi-initiator-utils
Chapter 4. Managing build output
Chapter 4. Managing build output Use the following sections for an overview of and instructions for managing build output. 4.1. Build output Builds that use the docker or source-to-image (S2I) strategy result in the creation of a new container image. The image is then pushed to the container image registry specified in the output section of the Build specification. If the output kind is ImageStreamTag , then the image will be pushed to the integrated OpenShift image registry and tagged in the specified imagestream. If the output is of type DockerImage , then the name of the output reference will be used as a docker push specification. The specification may contain a registry or will default to DockerHub if no registry is specified. If the output section of the build specification is empty, then the image will not be pushed at the end of the build. Output to an ImageStreamTag spec: output: to: kind: "ImageStreamTag" name: "sample-image:latest" Output to a docker Push Specification spec: output: to: kind: "DockerImage" name: "my-registry.mycompany.com:5000/myimages/myimage:tag" 4.2. Output image environment variables docker and source-to-image (S2I) strategy builds set the following environment variables on output images: Variable Description OPENSHIFT_BUILD_NAME Name of the build OPENSHIFT_BUILD_NAMESPACE Namespace of the build OPENSHIFT_BUILD_SOURCE The source URL of the build OPENSHIFT_BUILD_REFERENCE The Git reference used in the build OPENSHIFT_BUILD_COMMIT Source commit used in the build Additionally, any user-defined environment variable, for example those configured with S2I] or docker strategy options, will also be part of the output image environment variable list. 4.3. Output image labels docker and source-to-image (S2I)` builds set the following labels on output images: Label Description io.openshift.build.commit.author Author of the source commit used in the build io.openshift.build.commit.date Date of the source commit used in the build io.openshift.build.commit.id Hash of the source commit used in the build io.openshift.build.commit.message Message of the source commit used in the build io.openshift.build.commit.ref Branch or reference specified in the source io.openshift.build.source-location Source URL for the build You can also use the BuildConfig.spec.output.imageLabels field to specify a list of custom labels that will be applied to each image built from the build configuration. Custom Labels to be Applied to Built Images spec: output: to: kind: "ImageStreamTag" name: "my-image:latest" imageLabels: - name: "vendor" value: "MyCompany" - name: "authoritative-source-url" value: "registry.mycompany.com"
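A quick way to confirm these environment variables and labels on a finished build, for example the sample-image output shown above, is sketched below. The pod name and registry path are placeholders, and credentials for the registry may be required.

# Build-related environment variables inside a pod running the output image
oc exec <pod_name> -- env | grep OPENSHIFT_BUILD_

# Labels on the pushed output image
skopeo inspect docker://<registry>/<project>/sample-image:latest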
[ "spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\"", "spec: output: to: kind: \"DockerImage\" name: \"my-registry.mycompany.com:5000/myimages/myimage:tag\"", "spec: output: to: kind: \"ImageStreamTag\" name: \"my-image:latest\" imageLabels: - name: \"vendor\" value: \"MyCompany\" - name: \"authoritative-source-url\" value: \"registry.mycompany.com\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/builds/managing-build-output
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/planning_identity_management/proc_providing-feedback-on-red-hat-documentation_planning-identity-management
probe::signal.handle.return
probe::signal.handle.return Name probe::signal.handle.return - Signal handler invocation completed Synopsis Values retstr Return value as a string name Name of the probe point
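For example, a one-line script that prints the probe point name and return value each time the probe fires could look like this, assuming the systemtap package and matching kernel debuginfo are installed:

stap -e 'probe signal.handle.return { printf("%s returned %s\n", name, retstr) }'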
[ "signal.handle.return" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-signal-handle-return
Chapter 3. Dynamically provisioned OpenShift Data Foundation deployed on Red Hat Virtualization
Chapter 3. Dynamically provisioned OpenShift Data Foundation deployed on Red Hat Virtualization 3.1. Replacing operational or failed storage devices on Red Hat Virtualization installer-provisioned infrastructure Create a new Persistent Volume Claim (PVC) on a new volume, and remove the old object storage device (OSD). Prerequisites Ensure that the data is resilient. In the OpenShift Web Console, click Storage OpenShift Data Foundation . Click the Storage Systems tab, and then click ocs-storagecluster-storagesystem . In the Status card of Block and File dashboard, under the Overview tab, verify that Data Resiliency has a green tick mark. Procedure Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-6d77d6c7c6-m8xj6 needs to be replaced and compute-2 is the OpenShift Container platform node on which the OSD is scheduled. Note If the OSD to be replaced is healthy, the status of the pod will be Running . Scale down the OSD deployment for the OSD to be replaced. Each time you want to replace the OSD, update the osd_id_to_remove parameter with the OSD ID, and repeat this step. where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0 . Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Important If the rook-ceph-osd pod is in terminating state, use the force option to delete the pod. Example output: Remove the old OSD from the cluster so that you can add a new OSD. Delete any old ocs-osd-removal jobs. Example output: Navigate to the openshift-storage project. Remove the old OSD from the cluster. <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from the OSD devices that are removed from the respective OpenShift Data Foundation nodes. Get the PVC name(s) of the replaced OSD(s) from the logs of ocs-osd-removal-job pod. Example output: For each of the previously identified nodes, do the following: Create a debug pod and chroot to the host on the storage node. <node name> Is the name of the node. Find a relevant device name based on the PVC names identified in the step. <pvc name> Is the name of the PVC. Example output: Remove the mapped device. Important If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find the PID of the process which was stuck. Terminate the process using the kill command. <PID> Is the process ID. 
Verify that the device name is removed. Delete the ocs-osd-removal job. Example output: Note When using an external key management system (KMS) with data encryption, the old OSD encryption key can be removed from the Vault server as it is now an orphan key. Verification steps Verify that there is a new OSD running. Example output: Verify that there is a new PVC created which is in the Bound state. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the node(s) where the new OSD pod(s) are running. <OSD pod name> Is the name of the OSD pod. For example: For each of the previously identified nodes, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset name(s). Log in to the OpenShift Web Console and view the storage dashboard.
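The individual commands for this procedure appear in the command listing below. As an informal consolidation, the following sketch strings the core replacement steps together for a single OSD; the OSD ID (0) is taken from the example output above and must be adjusted for your cluster, and the oc wait step is an optional convenience rather than part of the documented procedure.
# Sketch only: replace the OSD with ID 0; adjust osd_id_to_remove for your environment
osd_id_to_remove=0
# Scale down the deployment of the OSD that is being replaced
oc scale -n openshift-storage deployment rook-ceph-osd-${osd_id_to_remove} --replicas=0
# Remove any previous removal job, then remove the old OSD from the cluster
oc delete -n openshift-storage job ocs-osd-removal-job --ignore-not-found
oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${osd_id_to_remove} FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -
# Wait for the removal job to finish, then clean it up
oc wait --for=condition=complete job/ocs-osd-removal-job -n openshift-storage --timeout=600s
oc delete -n openshift-storage job ocs-osd-removal-job
# Confirm that a new OSD pod is running and that a new PVC is Bound
oc get -n openshift-storage pods -l app=rook-ceph-osd
oc get -n openshift-storage pvc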
[ "oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide", "rook-ceph-osd-0-6d77d6c7c6-m8xj6 0/1 CrashLoopBackOff 0 24h 10.129.0.16 compute-2 <none> <none> rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 24h 10.128.2.24 compute-0 <none> <none> rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 24h 10.130.0.18 compute-1 <none> <none>", "osd_id_to_remove=0", "oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0", "deployment.extensions/rook-ceph-osd-0 scaled", "oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}", "No resources found.", "oc delete pod rook-ceph-osd-0-6d77d6c7c6-m8xj6 --force --grace-period=0", "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-6d77d6c7c6-m8xj6\" force deleted", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\"", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'", "2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"", "oc debug node/ <node name>", "chroot /host", "dmsetup ls| grep <pvc name>", "ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)", "cryptsetup luksClose --debug --verbose ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt", "ps -ef | grep crypt", "kill -9 <PID>", "dmsetup ls", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc get -n openshift-storage pods -l app=rook-ceph-osd", "rook-ceph-osd-0-5f7f4747d4-snshw 1/1 Running 0 4m47s rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 1d20h rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 1d20h", "oc get -n openshift-storage pvc", "oc get -o=custom-columns=NODE:.spec.nodeName pod/ <OSD pod name>", "get -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "oc debug node/ <node name>", "chroot /host", "lsblk" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/replacing_devices/dynamically_provisioned_openshift_data_foundation_deployed_on_red_hat_virtualization
12.1 Release Notes
12.1 Release Notes Red Hat Developer Toolset 12 Release Notes for Red Hat Developer Toolset 12.1 Lenka Spackova Red Hat Customer Content Services [email protected] Jaromir Hradilek Red Hat Customer Content Services Eliska Slobodova Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_developer_toolset/12/html/12.1_release_notes/index
A.13. Workaround for Creating External Snapshots with libvirt
A.13. Workaround for Creating External Snapshots with libvirt There are two classes of snapshots for KVM guests: Internal snapshots are contained completely within a qcow2 file, and fully supported by libvirt , allowing for creating, deleting, and reverting of snapshots. This is the default setting used by libvirt when creating a snapshot, especially when no option is specified. This file type takes slightly longer than others to create the snapshot, and has the drawback of requiring qcow2 disks. Important Internal snapshots are not being actively developed, and Red Hat discourages their use. External snapshots work with any type of original disk image, can be taken with no guest downtime, and are more stable and reliable. As such, external snapshots are recommended for use on KVM guest virtual machines. However, external snapshots are currently not fully implemented on Red Hat Enterprise Linux 7, and are not available when using virt-manager . To create an external snapshot, use the snapshot-create-as command with the --diskspec vda,snapshot=external option, or use the disk line shown in the listing below in the snapshot XML file: At the moment, external snapshots are a one-way operation as libvirt can create them but cannot do anything further with them. A workaround is described on libvirt upstream pages .
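For illustration, a minimal invocation might look like the following; the guest name (guest1) and snapshot name (snapshot1) are placeholder assumptions, and the --disk-only and --atomic flags are optional additions commonly used for disk-only external snapshots rather than options required by the text above.
# Create an external snapshot of the vda disk for a guest named guest1 (placeholder names)
virsh snapshot-create-as guest1 snapshot1 --diskspec vda,snapshot=external --disk-only --atomic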
[ "<disk name='vda' snapshot='external'> <source file='/path/to,new'/> </disk>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-workaround_for_creating_external_snapshots_with_libvirt
Chapter 6. Mirroring Ceph block devices
Chapter 6. Mirroring Ceph block devices As a storage administrator, you can add another layer of redundancy to Ceph block devices by mirroring data images between Red Hat Ceph Storage clusters. Understanding and using Ceph block device mirroring can provide you protection against data loss, such as a site failure. There are two configurations for mirroring Ceph block devices, one-way mirroring or two-way mirroring, and you can configure mirroring on pools and individual images. 6.1. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Network connectivity between the two storage clusters. Access to a Ceph client node for each Red Hat Ceph Storage cluster. A CephX user with administrator-level capabilities. 6.2. Ceph block device mirroring RADOS Block Device (RBD) mirroring is a process of asynchronous replication of Ceph block device images between two or more Ceph storage clusters. By locating a Ceph storage cluster in different geographic locations, RBD Mirroring can help you recover from a site disaster. Journal-based Ceph block device mirroring ensures point-in-time consistent replicas of all changes to an image, including reads and writes, block device resizing, snapshots, clones and flattening. RBD mirroring uses exclusive locks and the journaling feature to record all modifications to an image in the order in which they occur. This ensures that a crash-consistent mirror of an image is available. Important The CRUSH hierarchies supporting primary and secondary pools that mirror block device images must have the same capacity and performance characteristics, and must have adequate bandwidth to ensure mirroring without excess latency. For example, if you have X MB/s average write throughput to images in the primary storage cluster, the network must support N * X throughput in the network connection to the secondary site plus a safety factor of Y% to mirror N images. The rbd-mirror daemon is responsible for synchronizing images from one Ceph storage cluster to another Ceph storage cluster by pulling changes from the remote primary image and writes those changes to the local, non-primary image. The rbd-mirror daemon can run either on a single Ceph storage cluster for one-way mirroring or on two Ceph storage clusters for two-way mirroring that participate in the mirroring relationship. For RBD mirroring to work, either using one-way or two-way replication, a couple of assumptions are made: A pool with the same name exists on both storage clusters. A pool contains journal-enabled images you want to mirror. Important In one-way or two-way replication, each instance of rbd-mirror must be able to connect to the other Ceph storage cluster simultaneously. Additionally, the network must have sufficient bandwidth between the two data center sites to handle mirroring. One-way Replication One-way mirroring implies that a primary image or pool of images in one storage cluster gets replicated to a secondary storage cluster. One-way mirroring also supports replicating to multiple secondary storage clusters. On the secondary storage cluster, the image is the non-primary replicate; that is, Ceph clients cannot write to the image. When data is mirrored from a primary storage cluster to a secondary storage cluster, the rbd-mirror runs ONLY on the secondary storage cluster. For one-way mirroring to work, a couple of assumptions are made: You have two Ceph storage clusters and you want to replicate images from a primary storage cluster to a secondary storage cluster. 
The secondary storage cluster has a Ceph client node attached to it running the rbd-mirror daemon. The rbd-mirror daemon will connect to the primary storage cluster to sync images to the secondary storage cluster. Figure 6.1. One-way mirroring Two-way Replication Two-way replication adds an rbd-mirror daemon on the primary cluster so images can be demoted on it and promoted on the secondary cluster. Changes can then be made to the images on the secondary cluster and they will be replicated in the reverse direction, from secondary to primary. Both clusters must have rbd-mirror running to allow promoting and demoting images on either cluster. Currently, two-way replication is only supported between two sites. For two-way mirroring to work, a couple of assumptions are made: You have two storage clusters and you want to be able to replicate images between them in either direction. Both storage clusters have a client node attached to them running the rbd-mirror daemon. The rbd-mirror daemon running on the secondary storage cluster will connect to the primary storage cluster to synchronize images to secondary, and the rbd-mirror daemon running on the primary storage cluster will connect to the secondary storage cluster to synchronize images to primary. Figure 6.2. Two-way mirroring Mirroring Modes Mirroring is configured on a per-pool basis with mirror peering storage clusters. Ceph supports two mirroring modes, depending on the type of images in the pool. Pool Mode All images in a pool with the journaling feature enabled are mirrored. Image Mode Only a specific subset of images within a pool are mirrored. You must enable mirroring for each image separately. Image States Whether or not an image can be modified depends on its state: Images in the primary state can be modified. Images in the non-primary state cannot be modified. Images are automatically promoted to primary when mirroring is first enabled on an image. The promotion can happen: Implicitly by enabling mirroring in pool mode. Explicitly by enabling mirroring of a specific image. It is possible to demote primary images and promote non-primary images. Additional Resources See the Image promotion and demotion section of the Red Hat Ceph Storage Block Device Guide for more details. 6.2.1. An overview of journal-based and snapshot-based mirroring RBD images can be asynchronously mirrored between two Red Hat Ceph Storage clusters through two modes: Journal-based mirroring This mode uses the RBD journaling image feature to ensure point-in-time and crash consistent replication between two Red Hat Ceph Storage clusters. The actual image is not modified till every write to the RBD image is first recorded to the associated journal. The remote cluster reads from this journal and replays the updates to its local copy of the image. Since each write to the RBD images results in two writes to the Ceph cluster, write latencies will nearly double with the usage of the RBD journaling image feature. Snapshot-based mirroring This mode uses periodic scheduled or manually created RBD image mirror snapshots to replicate crash consistent RBD images between two Red Hat Ceph Storage clusters. The remote cluster determines any data or metadata updates between two mirror snapshots and copy the deltas to its local copy of the image. The RBD fast-diff image feature enables the quick determination of updated data blocks without the need to scan the full RBD image. The complete delta between two snapshots needs to be synced prior to use during a failover scenario. 
Any partially applied set of deltas will be rolled back at moment of failover. 6.3. Configuring one-way mirroring using the command-line interface This procedure configures one-way replication of a pool from the primary storage cluster to a secondary storage cluster. Note When using one-way replication you can mirror to multiple secondary storage clusters. Note Examples in this section will distinguish between two storage clusters by referring to the primary storage cluster with the primary images as site-a , and the secondary storage cluster you are replicating the images to, as site-b . The pool name used in these examples is called data . Prerequisites A minimum of two healthy and running Red Hat Ceph Storage clusters. Root-level access to a Ceph client node for each storage cluster. A CephX user with administrator-level capabilities. Procedure Log into the cephadm shell on both the sites: Example On site-b , schedule the deployment of mirror daemon on the secondary cluster: Syntax Example Note The nodename is the host where you want to configure mirroring in the secondary cluster. Enable journaling features on an image on site-a . For new images , use the --image-feature option: Syntax Example Note If exclusive-lock is already enabled, use journaling as the only argument, else it returns the following error: For existing images , use the rbd feature enable command: Syntax Example Enable journaling on all new images by default: Syntax Example Choose the mirroring mode, either pool or image mode, on both the storage clusters. Enabling pool mode : Syntax Example This example enables mirroring of the whole pool named data . Enabling image mode : Syntax Example This example enables image mode mirroring on the pool named data . Note To enable mirroring on specific images in a pool, see the Enabling image mirroring section in the Red Hat Ceph Storage Block Device Guide for more details. Verify that mirroring has been successfully enabled at both the sites: Syntax Example On a Ceph client node, bootstrap the storage cluster peers. Create Ceph user accounts, and register the storage cluster peer to the pool: Syntax Example Note This example bootstrap command creates the client.rbd-mirror.site-a and the client.rbd-mirror-peer Ceph users. Copy the bootstrap token file to the site-b storage cluster. Import the bootstrap token on the site-b storage cluster: Syntax Example Note For one-way RBD mirroring, you must use the --direction rx-only argument, as two-way mirroring is the default when bootstrapping peers. To verify the mirroring status, run the following command from a Ceph Monitor node on the primary and secondary sites: Syntax Example Here, up means the rbd-mirror daemon is running, and stopped means this image is not the target for replication from another storage cluster. This is because the image is primary on this storage cluster. Example Additional Resources See the Ceph block device mirroring section in the Red Hat Ceph Storage Block Device Guide for more details. See the User Management section in the Red Hat Ceph Storage Administration Guide for more details on Ceph users. 6.4. Configuring two-way mirroring using the command-line interface This procedure configures two-way replication of a pool between the primary storage cluster, and a secondary storage cluster. Note When using two-way replication you can only mirror between two storage clusters. 
Note Examples in this section will distinguish between two storage clusters by referring to the primary storage cluster with the primary images as site-a , and the secondary storage cluster you are replicating the images to, as site-b . The pool name used in these examples is called data . Prerequisites A minimum of two healthy and running Red Hat Ceph Storage clusters. Root-level access to a Ceph client node for each storage cluster. A CephX user with administrator-level capabilities. Procedure Log into the cephadm shell on both the sites: Example On the site-a primary cluster, run the following command: Example Note The nodename is the host where you want to configure mirroring. On site-b , schedule the deployment of mirror daemon on the secondary cluster: Syntax Example Note The nodename is the host where you want to configure mirroring in the secondary cluster. Enable journaling features on an image on site-a . For new images , use the --image-feature option: Syntax Example Note If exclusive-lock is already enabled, use journaling as the only argument, else it returns the following error: For existing images , use the rbd feature enable command: Syntax Example Enable journaling on all new images by default: Syntax Example Choose the mirroring mode, either pool or image mode, on both the storage clusters. Enabling pool mode : Syntax Example This example enables mirroring of the whole pool named data . Enabling image mode : Syntax Example This example enables image mode mirroring on the pool named data . Note To enable mirroring on specific images in a pool, see the Enabling image mirroring section in the Red Hat Ceph Storage Block Device Guide for more details. Verify that mirroring has been successfully enabled at both the sites: Syntax Example On a Ceph client node, bootstrap the storage cluster peers. Create Ceph user accounts, and register the storage cluster peer to the pool: Syntax Example Note This example bootstrap command creates the client.rbd-mirror.site-a and the client.rbd-mirror-peer Ceph users. Copy the bootstrap token file to the site-b storage cluster. Import the bootstrap token on the site-b storage cluster: Syntax Example Note The --direction argument is optional, as two-way mirroring is the default when bootstrapping peers. To verify the mirroring status, run the following command from a Ceph Monitor node on the primary and secondary sites: Syntax Example Here, up means the rbd-mirror daemon is running, and stopped means this image is not the target for replication from another storage cluster. This is because the image is primary on this storage cluster. Example If images are in the state up+replaying , then mirroring is functioning properly. Here, up means the rbd-mirror daemon is running, and replaying means this image is the target for replication from another storage cluster. Note Depending on the connection between the sites, mirroring can take a long time to sync the images. Additional Resources See the Ceph block device mirroring section in the Red Hat Ceph Storage Block Device Guide for more details. See the User Management section in the Red Hat Ceph Storage Administration Guide for more details on Ceph users. 6.5. Administration for mirroring Ceph block devices As a storage administrator, you can do various tasks to help you manage the Ceph block device mirroring environment. You can do the following tasks: Viewing information about storage cluster peers. Add or remove a storage cluster peer. Getting mirroring status for a pool or image. 
Enabling mirroring on a pool or image. Disabling mirroring on a pool or image. Delaying block device replication. Promoting and demoting an image. 6.5.1. Prerequisites A minimum of two healthy running Red Hat Ceph Storage cluster. Root-level access to the Ceph client nodes. A one-way or two-way Ceph block device mirroring relationship. A CephX user with administrator-level capabilities. 6.5.2. Viewing information about peers View information about storage cluster peers. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To view information about the peers: Syntax Example 6.5.3. Enabling mirroring on a pool Enable mirroring on a pool by running the following commands on both peer clusters. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To enable mirroring on a pool: Syntax Example This example enables mirroring of the whole pool named data . Example This example enables image mode mirroring on the pool named data . Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 6.5.4. Disabling mirroring on a pool Before disabling mirroring, remove the peer clusters. Note When you disable mirroring on a pool, you also disable it on any images within the pool for which mirroring was enabled separately in image mode. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To disable mirroring on a pool: Syntax Example This example disables mirroring of a pool named data . 6.5.5. Enabling image mirroring Enable mirroring on the whole pool in image mode on both peer storage clusters. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Enable mirroring for a specific image within the pool: Syntax Example This example enables mirroring for the image2 image in the data pool. Additional Resources See the Enabling mirroring on a pool section in the Red Hat Ceph Storage Block Device Guide for details. 6.5.6. Disabling image mirroring You can disable Ceph Block Device mirroring on images. Prerequisites A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured. Root-level access to the node. Procedure To disable mirroring for a specific image: Syntax Example This example disables mirroring of the image2 image in the data pool. Additional Resources See the Configuring Ansible inventory location section in the Red Hat Ceph Storage Installation Guide for more details on adding clients to the cephadm-ansible inventory. 6.5.7. Image promotion and demotion You can promote or demote an image in a pool. Note Do not force promote non-primary images that are still syncing, because the images will not be valid after the promotion. Prerequisites A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured. Root-level access to the node. Procedure To demote an image to non-primary: Syntax Example This example demotes the image2 image in the data pool. To promote an image to primary: Syntax Example This example promotes image2 in the data pool. Depending on which type of mirroring you are using, see either Recover from a disaster with one-way mirroring or Recover from a disaster with two-way mirroring for details. Syntax Example Use forced promotion when the demotion cannot be propagated to the peer Ceph storage cluster. For example, because of cluster failure or communication outage. 
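To tie the promotion and demotion commands together, the following sketch shows what a planned switchover of a single image might look like; the data pool and image2 names follow the examples above, and the assumption is that each command is run against the appropriate cluster (the current primary for the demotion, the peer for the promotion).
# On the current primary cluster: demote the image to non-primary
rbd mirror image demote data/image2
# On the peer cluster: promote the image once the demotion has been replicated
rbd mirror image promote data/image2
# On either cluster: confirm the new primary state of the image
rbd mirror image status data/image2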
Additional Resources See the Failover after a non-orderly shutdown section in the Red Hat Ceph Storage Block Device Guide for details. 6.5.8. Image resynchronization You can re-synchronize an image. In case of an inconsistent state between the two peer clusters, the rbd-mirror daemon does not attempt to mirror the image that is causing the inconsistency. Prerequisites A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured. Root-level access to the node. Procedure To request a re-synchronization to the primary image: Syntax Example This example requests resynchronization of image2 in the data pool. Additional Resources To recover from an inconsistent state because of a disaster, see either Recover from a disaster with one-way mirroring or Recover from a disaster with two-way mirroring for details. 6.5.9. Getting mirroring status for a pool You can get the mirror status for a pool on the storage clusters. Prerequisites A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured. Root-level access to the node. Procedure To get the mirroring pool summary: Syntax Example Tip To output status details for every mirroring image in a pool, use the --verbose option. 6.5.10. Getting mirroring status for a single image You can get the mirror status for an image by running the mirror image status command. Prerequisites A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured. Root-level access to the node. Procedure To get the status of a mirrored image: Syntax Example This example gets the status of the image2 image in the data pool. 6.5.11. Delaying block device replication Whether you are using one- or two-way replication, you can delay replication between RADOS Block Device (RBD) mirroring images. You might want to implement delayed replication if you want a window of cushion time in case an unwanted change to the primary image needs to be reverted before being replicated to the secondary image. To implement delayed replication, the rbd-mirror daemon within the destination storage cluster should set the rbd_mirroring_replay_delay = MINIMUM_DELAY_IN_SECONDS configuration option. This setting can either be applied globally within the ceph.conf file utilized by the rbd-mirror daemons, or on an individual image basis. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To utilize delayed replication for a specific image, on the primary image, run the following rbd CLI command: Syntax Example This example sets a 10 minute minimum replication delay on image vm-1 in the vms pool. 6.5.12. Converting journal-based mirroring to snapshot-based mirrorring You can convert journal-based mirroring to snapshot-based mirroring by disabling mirroring and enabling snapshot. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example Disable mirroring for a specific image within the pool. Syntax Example Enable snapshot-based mirroring for the image. Syntax Example This example enables snapshot-based mirroring for the mirror_image image in the mirror_pool pool. 6.5.13. Creating an image mirror-snapshot Create an image mirror-snapshot when it is required to mirror the changed contents of an RBD image when using snapshot-based mirroring. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters. 
A CephX user with administrator-level capabilities. Access to the Red Hat Ceph Storage cluster where a snapshot mirror will be created. Important By default, a maximum of 5 image mirror-snapshots is retained. The most recent image mirror-snapshot is automatically removed if the limit is reached. If required, the limit can be overridden through the rbd_mirroring_max_mirroring_snapshots configuration. Image mirror-snapshots are automatically deleted when the image is removed or when mirroring is disabled. Procedure To create an image-mirror snapshot: Syntax Example Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 6.5.14. Scheduling mirror-snapshots Mirror-snapshots can be automatically created when mirror-snapshot schedules are defined. The mirror-snapshot can be scheduled globally, per-pool or per-image levels. Multiple mirror-snapshot schedules can be defined at any level but only the most specific snapshot schedules that match an individual mirrored image will run. 6.5.14.1. Creating a mirror-snapshot schedule You can create a mirror-snapshot schedule using the snapshot schedule command. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters. A CephX user with administrator-level capabilities. Access to the Red Hat Ceph Storage cluster where a snapshot mirror will be created. Procedure Create a mirror-snapshot schedule: Syntax The CLUSTER_NAME should be used only when the cluster name is different from the default name ceph . The interval can be specified in days, hours, or minutes using d, h, or m suffix respectively. The optional START_TIME can be specified using the ISO 8601 time format. Example Scheduling at image level: Scheduling at pool level: Scheduling at global level: Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 6.5.14.2. Listing all snapshot schedules at a specific level You can list all snapshot schedules at a specific level. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters. A CephX user with administrator-level capabilities. Access to the Red Hat Ceph Storage cluster where a snapshot mirror will be created. Procedure To list all snapshot schedules for a specific global, pool or image level, with an optional pool or image name: Syntax Additionally, the --recursive option can be specified to list all schedules at the specified level as shown below: Example Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 6.5.14.3. Removing a mirror-snapshot schedule You can remove a mirror-snapshot schedule using the snapshot schedule remove command. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters. A CephX user with administrator-level capabilities. Access to the Red Hat Ceph Storage cluster where a snapshot mirror will be created. Procedure To remove a mirror-snapshot schedule: Syntax The interval can be specified in days, hours, or minutes using d, h, m suffix respectively. The optional START_TIME can be specified using the ISO 8601 time format. 
Example Example Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 6.5.14.4. Viewing the status for the snapshots to be created You can view the status for the snapshots to be created for snapshot-based mirroring RBD images. Prerequisites A minimum of two healthy running Red Hat Ceph Storage clusters. Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters. A CephX user with administrator-level capabilities. Access to the Red Hat Ceph Storage cluster where a snapshot mirror will be created. Procedure To view the status for the snapshots to be created: Syntax Example Additional Resources See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details. 6.6. Recover from a disaster As a storage administrator, you can be prepared for eventual hardware failure by knowing how to recover the data from another storage cluster where mirroring was configured. In the examples, the primary storage cluster is known as the site-a , and the secondary storage cluster is known as the site-b . Additionally, the storage clusters both have a data pool with two images, image1 and image2 . 6.6.1. Prerequisites A running Red Hat Ceph Storage cluster. One-way or two-way mirroring was configured. 6.6.2. Disaster recovery Asynchronous replication of block data between two or more Red Hat Ceph Storage clusters reduces downtime and prevents data loss in the event of a significant data center failure. These failures have a widespread impact, also referred to as a large blast radius , and can be caused by impacts to the power grid and natural disasters. Customer data needs to be protected during these scenarios. Volumes must be replicated with consistency and efficiency and also within Recovery Point Objective (RPO) and Recovery Time Objective (RTO) targets. This solution is called a Wide Area Network-Disaster Recovery (WAN-DR). In such scenarios it is hard to restore the primary system and the data center. The quickest way to recover is to fail over the applications to an alternate Red Hat Ceph Storage cluster (disaster recovery site) and make the cluster operational with the latest copy of the data available. The solutions that are used to recover from these failure scenarios are guided by the application: Recovery Point Objective (RPO) : The amount of data loss an application can tolerate in the worst case. Recovery Time Objective (RTO) : The time taken to get the application back online with the latest copy of the data available. Additional Resources See the Mirroring Ceph block devices Chapter in the Red Hat Ceph Storage Block Device Guide for details. See the Encryption in transit section in the Red Hat Ceph Storage Data Security and Hardening Guide to know more about data transmission over the wire in an encrypted state. 6.6.3. Recover from a disaster with one-way mirroring To recover from a disaster when using one-way mirroring, use the following procedures. They show how to fail over to the secondary cluster after the primary cluster terminates, and how to fail back. The shutdown can be orderly or non-orderly. Important One-way mirroring supports multiple secondary sites. If you are using additional secondary clusters, choose one of the secondary clusters to fail over to. Synchronize from the same cluster during fail back. 6.6.4. Recover from a disaster with two-way mirroring To recover from a disaster when using two-way mirroring, use the following procedures.
They show how to fail over to the mirrored data on the secondary cluster after the primary cluster terminates, and how to failback. The shutdown can be orderly or non-orderly. 6.6.5. Failover after an orderly shutdown Failover to the secondary storage cluster after an orderly shutdown. Prerequisites Minimum of two running Red Hat Ceph Storage clusters. Root-level access to the node. Pool mirroring or image mirroring configured with one-way mirroring. Procedure Stop all clients that use the primary image. This step depends on which clients use the image. For example, detach volumes from any OpenStack instances that use the image. Demote the primary images located on the site-a cluster by running the following commands on a monitor node in the site-a cluster: Syntax Example Promote the non-primary images located on the site-b cluster by running the following commands on a monitor node in the site-b cluster: Syntax Example After some time, check the status of the images from a monitor node in the site-b cluster. They should show a state of up+stopped and be listed as primary: Resume the access to the images. This step depends on which clients use the image. Additional Resources See the Block Storage and Volumes chapter in the Red Hat OpenStack Platform Storage Guide . 6.6.6. Failover after a non-orderly shutdown Failover to secondary storage cluster after a non-orderly shutdown. Prerequisites Minimum of two running Red Hat Ceph Storage clusters. Root-level access to the node. Pool mirroring or image mirroring configured with one-way mirroring. Procedure Verify that the primary storage cluster is down. Stop all clients that use the primary image. This step depends on which clients use the image. For example, detach volumes from any OpenStack instances that use the image. Promote the non-primary images from a Ceph Monitor node in the site-b storage cluster. Use the --force option, because the demotion cannot be propagated to the site-a storage cluster: Syntax Example Check the status of the images from a Ceph Monitor node in the site-b storage cluster. They should show a state of up+stopping_replay . The description should say force promoted , meaning it is in the intermittent state. Wait until the state comes to up+stopped to validate the site is successfully promoted. Example Additional Resources See the Block Storage and Volumes chapter in the Red Hat OpenStack Platform Storage Guide . 6.6.7. Prepare for fail back If two storage clusters were originally configured only for one-way mirroring, in order to fail back, configure the primary storage cluster for mirroring in order to replicate the images in the opposite direction. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure Log into the Cephadm shell: Example On the site-a storage cluster , run the following command: Example Create a block device pool with a name same as its peer mirror pool. To create an rbd pool, execute the following: Syntax Example On a Ceph client node, bootstrap the storage cluster peers. Create Ceph user accounts, and register the storage cluster peer to the pool: Syntax Example Note This example bootstrap command creates the client.rbd-mirror.site-a and the client.rbd-mirror-peer Ceph users. Copy the bootstrap token file to the site-b storage cluster. Import the bootstrap token on the site-b storage cluster: Syntax Example Note For one-way RBD mirroring, you must use the --direction rx-only argument, as two-way mirroring is the default when bootstrapping peers. 
From a monitor node in the site-a storage cluster, verify the site-b storage cluster was successfully added as a peer: Example Additional Resources For detailed information, see the User Management chapter in the Red Hat Ceph Storage Administration Guide . 6.6.7.1. Fail back to the primary storage cluster When the formerly primary storage cluster recovers, fail back to the primary storage cluster. Note If you have scheduled snapshots at the image level, then you need to re-add the schedule as image resync operations change the RBD image ID and the schedule becomes obsolete. Prerequisites Minimum of two running Red Hat Ceph Storage clusters. Root-level access to the node. Pool mirroring or image mirroring configured with one-way mirroring. Procedure Check the status of the images from a monitor node in the site-b cluster again. They should show a state of up+stopped and the description should say local image is primary : Example From a Ceph Monitor node on the site-a storage cluster, determine if the images are still primary: Syntax Example In the output from the commands, look for mirroring primary: true or mirroring primary: false , to determine the state. Demote any images that are listed as primary by running a command like the following from a Ceph Monitor node in the site-a storage cluster: Syntax Example Resynchronize the images ONLY if there was a non-orderly shutdown. Run the following commands on a monitor node in the site-a storage cluster to resynchronize the images from site-b to site-a : Syntax Example After some time, ensure resynchronization of the images is complete by verifying they are in the up+replaying state. Check their state by running the following commands on a monitor node in the site-a storage cluster: Syntax Example Demote the images on the site-b storage cluster by running the following commands on a Ceph Monitor node in the site-b storage cluster: Syntax Example Note If there are multiple secondary storage clusters, this only needs to be done from the secondary storage cluster where it was promoted. Promote the formerly primary images located on the site-a storage cluster by running the following commands on a Ceph Monitor node in the site-a storage cluster: Syntax Example Check the status of the images from a Ceph Monitor node in the site-a storage cluster. They should show a status of up+stopped and the description should say local image is primary : Syntax Example 6.6.8. Remove two-way mirroring After fail back is complete, you can remove two-way mirroring and disable the Ceph block device mirroring service. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Remove the site-b storage cluster as a peer from the site-a storage cluster: Example Stop and disable the rbd-mirror daemon on the site-a client: Syntax Example
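The exact commands appear in the listing below; as an informal recap, removing two-way mirroring after fail back might look like the following sketch, where the site-a and site-b names and the data pool follow the examples in this chapter.
# On the site-a cluster: remove site-b as a mirroring peer of the data pool
rbd --cluster site-a mirror pool peer remove data client.site-b@site-b -n client.site-a
# On the site-a client node: stop and disable the rbd-mirror daemon
systemctl stop ceph-rbd-mirror@site-a
systemctl disable ceph-rbd-mirror@site-a
systemctl disable ceph-rbd-mirror.target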
[ "cephadm shell cephadm shell", "ceph orch apply rbd-mirror --placement= NODENAME", "ceph orch apply rbd-mirror --placement=host04", "rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME --image-feature FEATURE FEATURE", "rbd create image1 --size 1024 --pool data --image-feature exclusive-lock,journaling", "one or more requested features are already enabled (22) Invalid argument", "rbd feature enable POOL_NAME / IMAGE_NAME FEATURE , FEATURE", "rbd feature enable data/image1 exclusive-lock, journaling", "ceph config set global rbd_default_features SUM_OF_FEATURE_NUMERIC_VALUES ceph config show HOST01 rbd_default_features", "ceph config set global rbd_default_features 125 ceph config show mon.host01 rbd_default_features", "rbd mirror pool enable POOL_NAME MODE", "rbd mirror pool enable data pool rbd mirror pool enable data pool", "rbd mirror pool enable POOL_NAME MODE", "rbd mirror pool enable data image rbd mirror pool enable data image", "rbd mirror pool info POOL_NAME", "rbd mirror pool info data Mode: pool Site Name: c13d8065-b33d-4cb5-b35f-127a02768e7f Peer Sites: none rbd mirror pool info data Mode: pool Site Name: a4c667e2-b635-47ad-b462-6faeeee78df7 Peer Sites: none", "rbd mirror pool peer bootstrap create --site-name PRIMARY_LOCAL_SITE_NAME POOL_NAME > PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap create --site-name site-a data > /root/bootstrap_token_site-a", "rbd mirror pool peer bootstrap import --site-name SECONDARY_LOCAL_SITE_NAME --direction rx-only POOL_NAME PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap import --site-name site-b --direction rx-only data /root/bootstrap_token_site-a", "rbd mirror image status POOL_NAME / IMAGE_NAME", "rbd mirror image status data/image1 image1: global_id: c13d8065-b33d-4cb5-b35f-127a02768e7f state: up+stopped description: remote image is non-primary service: host03.yuoosv on host03 last_update: 2021-10-06 09:13:58", "rbd mirror image status data/image1 image1: global_id: c13d8065-b33d-4cb5-b35f-127a02768e7f", "cephadm shell cephadm shell", "ceph orch apply rbd-mirror --placement=host01", "ceph orch apply rbd-mirror --placement= NODENAME", "ceph orch apply rbd-mirror --placement=host04", "rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME --image-feature FEATURE FEATURE", "rbd create image1 --size 1024 --pool data --image-feature exclusive-lock,journaling", "one or more requested features are already enabled (22) Invalid argument", "rbd feature enable POOL_NAME / IMAGE_NAME FEATURE , FEATURE", "rbd feature enable data/image1 exclusive-lock, journaling", "ceph config set global rbd_default_features SUM_OF_FEATURE_NUMERIC_VALUES ceph config show HOST01 rbd_default_features", "ceph config set global rbd_default_features 125 ceph config show mon.host01 rbd_default_features", "rbd mirror pool enable POOL_NAME MODE", "rbd mirror pool enable data pool rbd mirror pool enable data pool", "rbd mirror pool enable POOL_NAME MODE", "rbd mirror pool enable data image rbd mirror pool enable data image", "rbd mirror pool info POOL_NAME", "rbd mirror pool info data Mode: pool Site Name: c13d8065-b33d-4cb5-b35f-127a02768e7f Peer Sites: none rbd mirror pool info data Mode: pool Site Name: a4c667e2-b635-47ad-b462-6faeeee78df7 Peer Sites: none", "rbd mirror pool peer bootstrap create --site-name PRIMARY_LOCAL_SITE_NAME POOL_NAME > PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap create --site-name site-a data > /root/bootstrap_token_site-a", "rbd mirror pool peer bootstrap import --site-name SECONDARY_LOCAL_SITE_NAME 
--direction rx-tx POOL_NAME PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap import --site-name site-b --direction rx-tx data /root/bootstrap_token_site-a", "rbd mirror image status POOL_NAME / IMAGE_NAME", "rbd mirror image status data/image1 image1: global_id: a4c667e2-b635-47ad-b462-6faeeee78df7 state: up+stopped description: local image is primary service: host03.glsdbv on host03.ceph.redhat.com last_update: 2021-09-16 10:55:58 peer_sites: name: a state: up+stopped description: replaying, {\"bytes_per_second\":0.0,\"entries_behind_primary\":0,\"entries_per_second\":0.0,\"non_primary_position\":{\"entry_tid\":3,\"object_number\":3,\"tag_tid\":1},\"primary_position\":{\"entry_tid\":3,\"object_number\":3,\"tag_tid\":1}} last_update: 2021-09-16 10:55:50", "rbd mirror image status data/image1 image1: global_id: a4c667e2-b635-47ad-b462-6faeeee78df7 state: up+replaying description: replaying, {\"bytes_per_second\":0.0,\"entries_behind_primary\":0,\"entries_per_second\":0.0,\"non_primary_position\":{\"entry_tid\":3,\"object_number\":3,\"tag_tid\":1},\"primary_position\":{\"entry_tid\":3,\"object_number\":3,\"tag_tid\":1}} service: host05.dtisty on host05 last_update: 2021-09-16 10:57:20 peer_sites: name: b state: up+stopped description: local image is primary last_update: 2021-09-16 10:57:28", "rbd mirror pool info POOL_NAME", "rbd mirror pool info data Mode: pool Site Name: a Peer Sites: UUID: 950ddadf-f995-47b7-9416-b9bb233f66e3 Name: b Mirror UUID: 4696cd9d-1466-4f98-a97a-3748b6b722b3 Direction: rx-tx Client: client.rbd-mirror-peer", "rbd mirror pool enable POOL_NAME MODE", "rbd mirror pool enable data pool", "rbd mirror pool enable data image", "rbd mirror pool disable POOL_NAME", "rbd mirror pool disable data", "rbd mirror image enable POOL_NAME / IMAGE_NAME", "rbd mirror image enable data/image2", "rbd mirror image disable POOL_NAME / IMAGE_NAME", "rbd mirror image disable data/image2", "rbd mirror image demote POOL_NAME / IMAGE_NAME", "rbd mirror image demote data/image2", "rbd mirror image promote POOL_NAME / IMAGE_NAME", "rbd mirror image promote data/image2", "rbd mirror image promote --force POOL_NAME / IMAGE_NAME", "rbd mirror image promote --force data/image2", "rbd mirror image resync POOL_NAME / IMAGE_NAME", "rbd mirror image resync data/image2", "rbd mirror pool status POOL_NAME", "rbd mirror pool status data health: OK daemon health: OK image health: OK images: 1 total 1 replaying", "rbd mirror image status POOL_NAME / IMAGE_NAME", "rbd mirror image status data/image2 image2: global_id: 1e3422a2-433e-4316-9e43-1827f8dbe0ef state: up+unknown description: remote image is non-primary service: pluto008.yuoosv on pluto008 last_update: 2021-10-06 09:37:58", "rbd image-meta set POOL_NAME / IMAGE_NAME conf_rbd_mirroring_replay_delay MINIMUM_DELAY_IN_SECONDS", "rbd image-meta set vms/vm-1 conf_rbd_mirroring_replay_delay 600", "cephadm shell", "rbd mirror image disable POOL_NAME / IMAGE_NAME", "rbd mirror image disable mirror_pool/mirror_image Mirroring disabled", "rbd mirror image enable POOL_NAME / IMAGE_NAME snapshot", "rbd mirror image enable mirror_pool/mirror_image snapshot Mirroring enabled", "rbd --cluster CLUSTER_NAME mirror image snapshot POOL_NAME / IMAGE_NAME", "rbd mirror image snapshot data/image1", "rbd --cluster CLUSTER_NAME mirror snapshot schedule add --pool POOL_NAME --image IMAGE_NAME INTERVAL [ START_TIME ]", "rbd mirror snapshot schedule add --pool data --image image1 6h", "rbd mirror snapshot schedule add --pool data 24h 14:00:00-05:00", "rbd mirror 
snapshot schedule add 48h", "rbd --cluster site-a mirror snapshot schedule ls --pool POOL_NAME --recursive", "rbd mirror snapshot schedule ls --pool data --recursive POOL NAMESPACE IMAGE SCHEDULE data - - every 1d starting at 14:00:00-05:00 data - image1 every 6h", "rbd --cluster CLUSTER_NAME mirror snapshot schedule remove --pool POOL_NAME --image IMAGE_NAME INTERVAL START_TIME", "rbd mirror snapshot schedule remove --pool data --image image1 6h", "rbd mirror snapshot schedule remove --pool data --image image1 24h 14:00:00-05:00", "rbd --cluster site-a mirror snapshot schedule status [--pool POOL_NAME ] [--image IMAGE_NAME ]", "rbd mirror snapshot schedule status SCHEDULE TIME IMAGE 2021-09-21 18:00:00 data/image1", "rbd mirror image demote POOL_NAME / IMAGE_NAME", "rbd mirror image demote data/image1 rbd mirror image demote data/image2", "rbd mirror image promote POOL_NAME / IMAGE_NAME", "rbd mirror image promote data/image1 rbd mirror image promote data/image2", "rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped description: local image is primary last_update: 2019-04-17 16:04:37 rbd mirror image status data/image2 image2: global_id: 596f41bc-874b-4cd4-aefe-4929578cc834 state: up+stopped description: local image is primary last_update: 2019-04-17 16:04:37", "rbd mirror image promote --force POOL_NAME / IMAGE_NAME", "rbd mirror image promote --force data/image1 rbd mirror image promote --force data/image2", "rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopping_replay description: force promoted last_update: 2023-04-17 13:25:06 rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped description: force promoted last_update: 2023-04-17 13:25:06", "cephadm shell", "ceph orch apply rbd-mirror --placement=host01", "ceph osd pool create POOL_NAME PG_NUM ceph osd pool application enable POOL_NAME rbd rbd pool init -p POOL_NAME", "ceph osd pool create pool1 ceph osd pool application enable pool1 rbd rbd pool init -p pool1", "rbd mirror pool peer bootstrap create --site-name LOCAL_SITE_NAME POOL_NAME > PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap create --site-name site-a data > /root/bootstrap_token_site-a", "rbd mirror pool peer bootstrap import --site-name LOCAL_SITE_NAME --direction rx-only POOL_NAME PATH_TO_BOOTSTRAP_TOKEN", "rbd mirror pool peer bootstrap import --site-name site-b --direction rx-only data /root/bootstrap_token_site-a", "rbd mirror pool info -p data Mode: image Peers: UUID NAME CLIENT d2ae0594-a43b-4c67-a167-a36c646e8643 site-b client.site-b", "rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped description: local image is primary last_update: 2019-04-22 17:37:48 rbd mirror image status data/image2 image2: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped description: local image is primary last_update: 2019-04-22 17:38:18", "rbd mirror pool info POOL_NAME / IMAGE_NAME", "rbd info data/image1 rbd info data/image2", "rbd mirror image demote POOL_NAME / IMAGE_NAME", "rbd mirror image demote data/image1", "rbd mirror image resync POOL_NAME / IMAGE_NAME", "rbd mirror image resync data/image1 Flagged image for resync from primary rbd mirror image resync data/image2 Flagged image for resync from primary", "rbd mirror image status POOL_NAME / IMAGE_NAME", "rbd mirror image status data/image1 rbd mirror image status data/image2", 
"rbd mirror image demote POOL_NAME / IMAGE_NAME", "rbd mirror image demote data/image1 rbd mirror image demote data/image2", "rbd mirror image promote POOL_NAME / IMAGE_NAME", "rbd mirror image promote data/image1 rbd mirror image promote data/image2", "rbd mirror image status POOL_NAME / IMAGE_NAME", "rbd mirror image status data/image1 image1: global_id: 08027096-d267-47f8-b52e-59de1353a034 state: up+stopped description: local image is primary last_update: 2019-04-22 11:14:51 rbd mirror image status data/image2 image2: global_id: 596f41bc-874b-4cd4-aefe-4929578cc834 state: up+stopped description: local image is primary last_update: 2019-04-22 11:14:51", "rbd mirror pool peer remove data client.remote@remote --cluster local rbd --cluster site-a mirror pool peer remove data client.site-b@site-b -n client.site-a", "systemctl stop ceph-rbd-mirror@ CLIENT_ID systemctl disable ceph-rbd-mirror@ CLIENT_ID systemctl disable ceph-rbd-mirror.target", "systemctl stop ceph-rbd-mirror@site-a systemctl disable ceph-rbd-mirror@site-a systemctl disable ceph-rbd-mirror.target" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/block_device_guide/mirroring-ceph-block-devices
Chapter 4. OpenShift Data Foundation installation overview
Chapter 4. OpenShift Data Foundation installation overview OpenShift Data Foundation consists of multiple components managed by multiple operators. 4.1. Installed Operators When you install OpenShift Data Foundation from the Operator Hub, the following four separate Deployments are created: odf-operator : Defines the odf-operator Pod ocs-operator : Defines the ocs-operator Pod which runs processes for ocs-operator and its metrics-exporter in the same container. rook-ceph-operator : Defines the rook-ceph-operator Pod. mcg-operator : Defines the mcg-operator Pod. These operators run independently and interact with each other by creating custom resources (CRs) watched by the other operators. The ocs-operator is primarily responsible for creating the CRs to configure Ceph storage and Multicloud Object Gateway. The mcg-operator sometimes creates Ceph volumes for use by its components. 4.2. OpenShift Container Storage initialization The OpenShift Data Foundation bundle also defines an external plugin to the OpenShift Container Platform Console, adding new screens and functionality not otherwise available in the Console. This plugin runs as a web server in the odf-console-plugin Pod, which is managed by a Deployment created by the OLM at the time of installation. The ocs-operator automatically creates an OCSInitialization CR after the operator itself is created. Only one OCSInitialization CR exists at any point in time. It controls the ocs-operator behaviors that are not restricted to the scope of a single StorageCluster , but only performs them once. When you delete the OCSInitialization CR, the ocs-operator creates it again and this allows you to re-trigger its initialization operations. The OCSInitialization CR controls the following behaviors: SecurityContextConstraints (SCCs) After the OCSInitialization CR is created, the ocs-operator creates various SCCs for use by the component Pods. Ceph Toolbox Deployment You can use the OCSInitialization to deploy the Ceph Toolbox Pod for advanced Ceph operations. Rook-Ceph Operator Configuration This configuration creates the rook-ceph-operator-config ConfigMap that governs the overall configuration for rook-ceph-operator behavior. 4.3. Storage cluster creation The OpenShift Data Foundation operators themselves provide no storage functionality, and the desired storage configuration must be defined. After you install the operators, create a new StorageCluster , using either the OpenShift Container Platform console wizard or the CLI, and the ocs-operator reconciles this StorageCluster . OpenShift Data Foundation supports a single StorageCluster per installation. Any StorageCluster CRs created after the first one are ignored by ocs-operator reconciliation. OpenShift Data Foundation allows the following StorageCluster configurations: Internal In the Internal mode, all the components run containerized within the OpenShift Container Platform cluster and use dynamically provisioned persistent volumes (PVs) created against the StorageClass specified by the administrator in the installation wizard. Internal-attached This mode is similar to the Internal mode but the administrator is required to define the local storage devices directly attached to the cluster nodes that Ceph uses for its backing storage. Also, the administrator needs to create the CRs that the local storage operator reconciles to provide the StorageClass . The ocs-operator uses this StorageClass as the backing storage for Ceph.
External In this mode, Ceph components do not run inside the OpenShift Container Platform cluster instead connectivity is provided to an external OpenShift Container Storage installation for which the applications can create PVs. The other components run within the cluster as required. MCG Standalone This mode facilitates the installation of a Multicloud Object Gateway system without an accompanying CephCluster. After a StorageCluster CR is found, ocs-operator validates it and begins to create subsequent resources to define the storage components. 4.3.1. Internal mode storage cluster Both internal and internal-attached storage clusters have the same setup process as follows: StorageClasses Create the storage classes that cluster applications use to create Ceph volumes. SnapshotClasses Create the volume snapshot classes that the cluster applications use to create snapshots of Ceph volumes. Ceph RGW configuration Create various Ceph object CRs to enable and provide access to the Ceph RGW object storage endpoint. Ceph RBD Configuration Create the CephBlockPool CR to enable RBD storage. CephFS Configuration Create the CephFilesystem CR to enable CephFS storage. Rook-Ceph Configuration Create the rook-config-override ConfigMap that governs the overall behavior of the underlying Ceph cluster. CephCluster Create the CephCluster CR to trigger Ceph reconciliation from rook-ceph-operator . For more information, see Rook-Ceph operator . NoobaaSystem Create the NooBaa CR to trigger reconciliation from mcg-operator . For more information, see MCG operator . Job templates Create OpenShift Template CRs that define Jobs to run administrative operations for OpenShift Container Storage. Quickstarts Create the QuickStart CRs that display the quickstart guides in the Web Console. 4.3.1.1. Cluster Creation After the ocs-operator creates the CephCluster CR, the rook-operator creates the Ceph cluster according to the desired configuration. The rook-operator configures the following components: Ceph mon daemons Three Ceph mon daemons are started on different nodes in the cluster. They manage the core metadata for the Ceph cluster and they must form a majority quorum. The metadata for each mon is backed either by a PV if it is in a cloud environment or a path on the local host if it is in a local storage device environment. Ceph mgr daemon This daemon is started and it gathers metrics for the cluster and report them to Prometheus. Ceph OSDs These OSDs are created according to the configuration of the storageClassDeviceSets . Each OSD consumes a PV that stores the user data. By default, Ceph maintains three replicas of the application data across different OSDs for high durability and availability using the CRUSH algorithm. CSI provisioners These provisioners are started for RBD and CephFS . When volumes are requested for the storage classes of OpenShift Container Storage, the requests are directed to the Ceph-CSI driver to provision the volumes in Ceph. CSI volume plugins and CephFS The CSI volume plugins for RBD and CephFS are started on each node in the cluster. The volume plugin needs to be running wherever the Ceph volumes are required to be mounted by the applications. After the CephCluster CR is configured, Rook reconciles the remaining Ceph CRs to complete the setup: CephBlockPool The CephBlockPool CR provides the configuration for Rook operator to create Ceph pools for RWO volumes. 
CephFilesystem The CephFilesystem CR instructs the Rook operator to configure a shared file system with CephFS, typically for RWX volumes. The CephFS metadata server (MDS) is started to manage the shared volumes. CephObjectStore The CephObjectStore CR instructs the Rook operator to configure an object store with the RGW service. CephObjectStoreUser CR The CephObjectStoreUser CR instructs the Rook operator to configure an object store user for NooBaa to consume, publishing the access and secret keys as well as the CephObjectStore endpoint. The operator monitors the Ceph health to ensure that the storage platform remains healthy. If a mon daemon goes down for too long a period (10 minutes), Rook starts a new mon in its place so that full quorum can be restored. When the ocs-operator updates the CephCluster CR, Rook immediately responds to the requested changes to update the cluster configuration. 4.3.1.2. NooBaa System creation When a NooBaa system is created, the mcg-operator reconciles the following: Default BackingStore Depending on the platform that OpenShift Container Platform and OpenShift Data Foundation are deployed on, a default backing store resource is created so that buckets can use it for their placement policy. The different options are as follows: Amazon Web Services (AWS) deployment The mcg-operator uses the CloudCredentialsOperator (CCO) to mint credentials in order to create a new AWS::S3 bucket and creates a BackingStore on top of that bucket. Microsoft Azure deployment The mcg-operator uses the CCO to mint credentials in order to create a new Azure Blob and creates a BackingStore on top of that bucket. Google Cloud Platform (GCP) deployment The mcg-operator uses the CCO to mint credentials in order to create a new GCP bucket and creates a BackingStore on top of that bucket. On-prem deployment If RGW exists, the mcg-operator creates a new CephUser and a new bucket on top of RGW and creates a BackingStore on top of that bucket. None of the previously mentioned deployments are applicable The mcg-operator creates a pv-pool based on the default storage class and creates a BackingStore on top of that pool. Default BucketClass A BucketClass with a placement policy to the default BackingStore is created. NooBaa pods The following NooBaa pods are created and started: Database (DB) This is a Postgres DB holding metadata, statistics, events, and so on. However, it does not hold the actual data being stored. Core This is the pod that handles configuration, background processes, metadata management, statistics, and so on. Endpoints These pods perform the actual I/O-related work such as deduplication and compression, communicating with different services to write and read data, and so on. The endpoints are integrated with the HorizontalPodAutoscaler and their number increases and decreases according to the CPU usage observed on the existing endpoint pods. Route A Route for the NooBaa S3 interface is created for applications that use S3. Service A Service for the NooBaa S3 interface is created for applications that use S3. 
The ocs-operator then creates some or all of the following resources, as specified in the ConfigMap : External Ceph Configuration A ConfigMap that specifies the endpoints of the external mons . External Ceph Credentials Secret A Secret that contains the credentials to connect to the external Ceph instance. External Ceph StorageClasses One or more StorageClasses to enable the creation of volumes for RBD, CephFS, and/or RGW. Enable CephFS CSI Driver If a CephFS StorageClass is specified, configure rook-ceph-operator to deploy the CephFS CSI Pods. Ceph RGW Configuration If an RGW StorageClass is specified, create various Ceph Object CRs to enable and provide access to the Ceph RGW object storage endpoint. After creating the resources specified in the ConfigMap , the StorageCluster creation process proceeds as follows: CephCluster Create the CephCluster CR to trigger Ceph reconciliation from rook-ceph-operator (see subsequent sections). SnapshotClasses Create the SnapshotClasses that applications use to create snapshots of Ceph volumes. NoobaaSystem Create the NooBaa CR to trigger reconciliation from noobaa-operator (see subsequent sections). QuickStarts Create the Quickstart CRs that display the quickstart guides in the Console. 4.3.2.1. Cluster Creation The Rook operator performs the following operations when the CephCluster CR is created in external mode: The operator validates that a connection is available to the remote Ceph cluster. The connection requires mon endpoints and secrets to be imported into the local cluster. The CSI driver is configured with the remote connection to Ceph. The RBD and CephFS provisioners and volume plugins are started similarly to when the CSI driver is configured in internal mode, except that the connection to Ceph is external to the OpenShift cluster. The operator periodically watches for monitor address changes and updates the Ceph-CSI configuration accordingly. 4.3.2.2. NooBaa System creation When a NooBaa system is created, the mcg-operator reconciles the following: Default BackingStore Depending on the platform that OpenShift Container Platform and OpenShift Data Foundation are deployed on, a default backing store resource is created so that buckets can use it for their placement policy. The different options are as follows: Amazon Web Services (AWS) deployment The mcg-operator uses the CloudCredentialsOperator (CCO) to mint credentials in order to create a new AWS::S3 bucket and creates a BackingStore on top of that bucket. Microsoft Azure deployment The mcg-operator uses the CCO to mint credentials in order to create a new Azure Blob and creates a BackingStore on top of that bucket. Google Cloud Platform (GCP) deployment The mcg-operator uses the CCO to mint credentials in order to create a new GCP bucket and creates a BackingStore on top of that bucket. On-prem deployment If RGW exists, the mcg-operator creates a new CephUser and a new bucket on top of RGW and creates a BackingStore on top of that bucket. None of the previously mentioned deployments are applicable The mcg-operator creates a pv-pool based on the default storage class and creates a BackingStore on top of that pool. Default BucketClass A BucketClass with a placement policy to the default BackingStore is created. NooBaa pods The following NooBaa pods are created and started: Database (DB) This is a Postgres DB holding metadata, statistics, events, and so on. However, it does not hold the actual data being stored. 
Core This is the pod that handles configuration, background processes, metadata management, statistics, and so on. Endpoints These pods perform the actual I/O-related work such as deduplication and compression, communicating with different services to write and read data, and so on. The endpoints are integrated with the HorizontalPodAutoscaler and their number increases and decreases according to the CPU usage observed on the existing endpoint pods. Route A Route for the NooBaa S3 interface is created for applications that use S3. Service A Service for the NooBaa S3 interface is created for applications that use S3. 4.3.3. MCG Standalone StorageCluster In this mode, no CephCluster is created. Instead, a NooBaa system CR is created using default values to take advantage of pre-existing StorageClasses in OpenShift Container Platform. 4.3.3.1. NooBaa System creation When a NooBaa system is created, the mcg-operator reconciles the following: Default BackingStore Depending on the platform that OpenShift Container Platform and OpenShift Data Foundation are deployed on, a default backing store resource is created so that buckets can use it for their placement policy. The different options are as follows: Amazon Web Services (AWS) deployment The mcg-operator uses the CloudCredentialsOperator (CCO) to mint credentials in order to create a new AWS::S3 bucket and creates a BackingStore on top of that bucket. Microsoft Azure deployment The mcg-operator uses the CCO to mint credentials in order to create a new Azure Blob and creates a BackingStore on top of that bucket. Google Cloud Platform (GCP) deployment The mcg-operator uses the CCO to mint credentials in order to create a new GCP bucket and creates a BackingStore on top of that bucket. On-prem deployment If RGW exists, the mcg-operator creates a new CephUser and a new bucket on top of RGW and creates a BackingStore on top of that bucket. None of the previously mentioned deployments are applicable The mcg-operator creates a pv-pool based on the default storage class and creates a BackingStore on top of that pool. Default BucketClass A BucketClass with a placement policy to the default BackingStore is created. NooBaa pods The following NooBaa pods are created and started: Database (DB) This is a Postgres DB holding metadata, statistics, events, and so on. However, it does not hold the actual data being stored. Core This is the pod that handles configuration, background processes, metadata management, statistics, and so on. Endpoints These pods perform the actual I/O-related work such as deduplication and compression, communicating with different services to write and read data, and so on. The endpoints are integrated with the HorizontalPodAutoscaler and their number increases and decreases according to the CPU usage observed on the existing endpoint pods. Route A Route for the NooBaa S3 interface is created for applications that use S3. Service A Service for the NooBaa S3 interface is created for applications that use S3. 4.3.3.2. StorageSystem Creation As a part of the StorageCluster creation, odf-operator automatically creates a corresponding StorageSystem CR, which exposes the StorageCluster to OpenShift Data Foundation.
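The operators and CRs described in this chapter can be inspected directly on a running cluster. The commands below are a minimal sketch, assuming the default openshift-storage namespace; the resource kinds correspond to the CRs discussed above.

$ oc get deployments -n openshift-storage                        # odf-operator, ocs-operator, rook-ceph-operator, mcg-operator
$ oc get ocsinitialization,storagecluster -n openshift-storage   # initialization CR and the reconciled StorageCluster
$ oc get cephcluster,cephblockpool,cephfilesystem,cephobjectstore -n openshift-storage
$ oc get noobaa,backingstore,bucketclass -n openshift-storage    # Multicloud Object Gateway resources
$ oc get storagesystem -n openshift-storage                      # created automatically by odf-operator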
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/red_hat_openshift_data_foundation_architecture/openshift_data_foundation_installation_overview
Chapter 71. KafkaConnectSpec schema reference
Chapter 71. KafkaConnectSpec schema reference Used in: KafkaConnect Full list of KafkaConnectSpec schema properties Configures a Kafka Connect cluster. 71.1. config Use the config properties to configure Kafka Connect options as keys. The values can be one of the following JSON types: String Number Boolean Certain options have default values: group.id with default value connect-cluster offset.storage.topic with default value connect-cluster-offsets config.storage.topic with default value connect-cluster-configs status.storage.topic with default value connect-cluster-status key.converter with default value org.apache.kafka.connect.json.JsonConverter value.converter with default value org.apache.kafka.connect.json.JsonConverter These options are automatically configured in case they are not present in the KafkaConnect.spec.config properties. Exceptions You can specify and configure the options listed in the Apache Kafka documentation . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Listener and REST interface configuration Plugin path configuration Properties with the following prefixes cannot be set: bootstrap.servers consumer.interceptor.classes listeners. plugin.path producer.interceptor.classes rest. sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka Connect, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Example Kafka Connect configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... config: group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 # ... Important The Cluster Operator does not validate keys or values in the config object provided. If an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Connect nodes. 71.2. logging Kafka Connect has its own configurable loggers: connect.root.logger.level log4j.logger.org.reflections Further loggers are added depending on the Kafka Connect plugins running. Use a curl request to get a complete list of Kafka Connect loggers running from any Kafka broker pod: curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/ Kafka Connect uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. 
Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # ... logging: type: inline loggers: connect.root.logger.level: INFO log4j.logger.org.apache.kafka.connect.runtime.WorkerSourceTask: TRACE log4j.logger.org.apache.kafka.connect.runtime.WorkerSinkTask: DEBUG # ... Note Setting a log level to DEBUG may result in a large amount of log output and may have performance implications. External logging apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: connect-logging.log4j # ... Any available loggers that are not configured have their level set to OFF . If Kafka Connect was deployed using the Cluster Operator, changes to Kafka Connect logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 71.3. KafkaConnectSpec schema properties Property Property type Description version string The Kafka Connect version. Defaults to the latest version. Consult the user documentation to understand the process required to upgrade or downgrade the version. replicas integer The number of pods in the Kafka Connect group. Defaults to 3 . image string The container image used for Kafka Connect pods. If no image name is explicitly specified, it is determined based on the spec.version configuration. The image names are specifically mapped to corresponding versions in the Cluster Operator configuration. bootstrapServers string Bootstrap servers to connect to. This should be given as a comma separated list of <hostname> :_<port>_ pairs. tls ClientTls TLS configuration. authentication KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth Authentication configuration for Kafka Connect. config map The Kafka Connect configuration. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). resources ResourceRequirements The maximum limits for CPU and memory resources and the requested initial resources. livenessProbe Probe Pod liveness checking. readinessProbe Probe Pod readiness checking. jvmOptions JvmOptions JVM Options for pods. jmxOptions KafkaJmxOptions JMX Options. logging InlineLogging , ExternalLogging Logging configuration for Kafka Connect. 
clientRackInitImage string The image of the init container used for initializing the client.rack . rack Rack Configuration of the node label which will be used as the client.rack consumer configuration. tracing JaegerTracing , OpenTelemetryTracing The configuration of tracing in Kafka Connect. template KafkaConnectTemplate Template for Kafka Connect and Kafka Mirror Maker 2 resources. The template allows users to specify how the Pods , Service , and other services are generated. externalConfiguration ExternalConfiguration Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. build Build Configures how the Connect container image should be built. Optional. metricsConfig JmxPrometheusExporterMetrics Metrics configuration.
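As a hedged extension of the logging example above, the same Kafka Connect REST API that lists loggers can also change a logger level at runtime. The cluster name my-connect and the service address are placeholders following the examples in this chapter; note that a level set this way is not recorded in the KafkaConnect resource, so it does not survive a restart of the Connect pods.

# List the current loggers and their levels
$ curl -s http://my-connect-connect-api:8083/admin/loggers/

# Raise a single logger to TRACE without restarting the workers
$ curl -s -X PUT -H "Content-Type: application/json" \
    -d '{"level": "TRACE"}' \
    http://my-connect-connect-api:8083/admin/loggers/org.apache.kafka.connect.runtime.WorkerSourceTask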
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 #", "curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # logging: type: inline loggers: connect.root.logger.level: INFO log4j.logger.org.apache.kafka.connect.runtime.WorkerSourceTask: TRACE log4j.logger.org.apache.kafka.connect.runtime.WorkerSinkTask: DEBUG #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: connect-logging.log4j #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkaconnectspec-reference
probe::tcp.receive
probe::tcp.receive Name probe::tcp.receive - Called when a TCP packet is received Synopsis tcp.receive Values psh TCP PSH flag ack TCP ACK flag daddr A string representing the destination IP address syn TCP SYN flag rst TCP RST flag sport TCP source port protocol Packet protocol from driver urg TCP URG flag name Name of the probe point family IP address family fin TCP FIN flag saddr A string representing the source IP address iphdr IP header address dport TCP destination port
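As a usage sketch (assuming SystemTap and the matching kernel debuginfo are installed, which is outside the scope of this reference), the probe can be attached from the command line and its values printed for each received packet:

# Print source/destination and TCP flags for every received TCP packet
$ stap -e 'probe tcp.receive { printf("%s %s:%d -> %s:%d syn=%d ack=%d fin=%d\n", name, saddr, sport, daddr, dport, syn, ack, fin) }'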
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tcp-receive
Part II. Technology Previews
Part II. Technology Previews This chapter provides a list of all available Technology Previews in Red Hat Enterprise Linux 6.8. Technology Preview features are currently not supported under Red Hat Enterprise Linux subscription services, may not be functionally complete, and are generally not suitable for production use. However, these features are included as a customer convenience and to provide the feature with wider exposure. Customers may find these features useful in a non-production environment. Customers are also free to provide feedback and functionality suggestions for a Technology Preview feature before it becomes fully supported. Errata will be provided for high-severity security issues. During the development of a Technology Preview feature, additional components may become available to the public for testing. It is the intention of Red Hat to fully support Technology Preview features in a future release. For information about the Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/ .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_technical_notes/part-red_hat_enterprise_linux-6.8_technical_notes-technology_previews
Red Hat Single Sign-On for OpenShift on Eclipse OpenJ9
Red Hat Single Sign-On for OpenShift on Eclipse OpenJ9 Red Hat Single Sign-On 7.4 For use with Red Hat Single Sign-On 7.4 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/red_hat_single_sign-on_for_openshift_on_eclipse_openj9/index
Chapter 4. Knative CLI for use with OpenShift Serverless
Chapter 4. Knative CLI for use with OpenShift Serverless The Knative ( kn ) CLI enables simple interaction with Knative components on OpenShift Container Platform. 4.1. Key features The Knative ( kn ) CLI is designed to make serverless computing tasks simple and concise. Key features of the Knative CLI include: Deploy serverless applications from the command line. Manage features of Knative Serving, such as services, revisions, and traffic-splitting. Create and manage Knative Eventing components, such as event sources and triggers. Create sink bindings to connect existing Kubernetes applications and Knative services. Extend the Knative CLI with flexible plugin architecture, similar to the kubectl CLI. Configure autoscaling parameters for Knative services. Scripted usage, such as waiting for the results of an operation, or deploying custom rollout and rollback strategies. 4.2. Installing the Knative CLI See Installing the Knative CLI .
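As an illustration of the deployment and traffic-splitting features (the image reference and revision name below are placeholders, not part of this chapter), a Knative service can be created and managed from the command line:

# Deploy a Knative service from a container image
$ kn service create hello --image <registry>/<namespace>/hello:latest

# Split traffic between an older revision and the latest revision
$ kn service update hello --traffic hello-00001=20 --traffic @latest=80

# List the revisions of the service
$ kn revision list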
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/cli_tools/kn-cli-tools
Installing on IBM Power Virtual Server
Installing on IBM Power Virtual Server OpenShift Container Platform 4.13 Installing OpenShift Container Platform on IBM Power Virtual Server Red Hat OpenShift Documentation Team
[ "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "ibmcloud plugin install cis", "ibmcloud login", "ibmcloud cis instance-create <instance_name> standard 1", "ibmcloud cis instance-set <instance_CRN> 1", "ibmcloud cis domain-add <domain_name> 1", "ibmcloud resource service-instance <workspace name>", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 7 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id region: powervs-region zone: powervs-zone powervsResourceGroup: \"ibmcloud-resource-group\" 8 serviceInstanceID: \"powervs-region-service-instance-id\" vpcRegion : vpc-region publish: External pullSecret: '{\"auths\": ...}' 9 sshKey: ssh-ed25519 AAAA... 
10", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=<provider_name> \\ 1 --to=<path_to_credential_requests_directory> 2", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir <path_to_credential_requests_directory> \\ 1 --name <cluster_name> \\ 2 --output-dir <installation_directory> --resource-group-name <resource_group_name> 3", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-cluster-existing-vpc networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcRegion : vpc-region vpcName: name-of-existing-vpc 9 vpcSubnets: 10 - powervs-region-example-subnet-1 zone: powervs-zone serviceInstanceID: \"powervs-region-service-instance-id\" credentialsMode: Manual publish: External 11 pullSecret: '{\"auths\": ...}' 12 fips: false sshKey: ssh-ed25519 AAAA... 
13", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=<provider_name> \\ 1 --to=<path_to_credential_requests_directory> 2", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir <path_to_credential_requests_directory> \\ 1 --name <cluster_name> \\ 2 --output-dir <installation_directory> --resource-group-name <resource_group_name> 3", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IBMCLOUD_API_KEY=<api_key>", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-private-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcName: name-of-existing-vpc 9 cloudConnectionName: powervs-region-example-cloud-con-priv vpcSubnets: - powervs-region-example-subnet-1 vpcRegion : vpc-region zone: powervs-zone serviceInstanceID: \"powervs-region-service-instance-id\" publish: Internal 10 pullSecret: '{\"auths\": ...}' 11 sshKey: ssh-ed25519 AAAA... 
12", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=<provider_name> \\ 1 --to=<path_to_credential_requests_directory> 2", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir <path_to_credential_requests_directory> \\ 1 --name <cluster_name> \\ 2 --output-dir <installation_directory> --resource-group-name <resource_group_name> 3", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "vpcName: <existing_vpc> vpcSubnets: <vpcSubnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: example-restricted-cluster-name 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 192.168.0.0/24 platform: powervs: userid: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" 12 region: \"powervs-region\" vpcRegion: \"vpc-region\" vpcName: name-of-existing-vpc 13 vpcSubnets: 14 - name-of-existing-vpc-subnet zone: \"powervs-zone\" serviceInstanceID: \"service-instance-id\" publish: Internal credentialsMode: Manual pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: ssh-ed25519 AAAA... 
16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=<provider_name> \\ 1 --to=<path_to_credential_requests_directory> 2", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir <path_to_credential_requests_directory> \\ 1 --name <cluster_name> \\ 2 --output-dir <installation_directory> --resource-group-name <resource_group_name> 3", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "ibmcloud is volumes --resource-group-name <infrastructure_id>", "ibmcloud is volume-delete --force <volume_id>", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ccoctl ibmcloud delete-service-id --credentials-requests-dir <path_to_credential_requests_directory> --name <cluster_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/installing_on_ibm_power_virtual_server/index
Chapter 2. Container security
Chapter 2. Container security 2.1. Understanding container security Securing a containerized application relies on multiple levels of security: Container security begins with a trusted base container image and continues through the container build process as it moves through your CI/CD pipeline. Important Image streams by default do not automatically update. This default behavior might create a security issue because security updates to images referenced by an image stream do not automatically occur. For information about how to override this default behavior, see Configuring periodic importing of imagestreamtags . When a container is deployed, its security depends on it running on secure operating systems and networks, and establishing firm boundaries between the container itself and the users and hosts that interact with it. Continued security relies on being able to scan container images for vulnerabilities and having an efficient way to correct and replace vulnerable images. Beyond what a platform such as OpenShift Container Platform offers out of the box, your organization will likely have its own security demands. Some level of compliance verification might be needed before you can even bring OpenShift Container Platform into your data center. Likewise, you may need to add your own agents, specialized hardware drivers, or encryption features to OpenShift Container Platform, before it can meet your organization's security standards. This guide provides a high-level walkthrough of the container security measures available in OpenShift Container Platform, including solutions for the host layer, the container and orchestration layer, and the build and application layer. It then points you to specific OpenShift Container Platform documentation to help you achieve those security measures. This guide contains the following information: Why container security is important and how it compares with existing security standards. Which container security measures are provided by the host (RHCOS and RHEL) layer and which are provided by OpenShift Container Platform. How to evaluate your container content and sources for vulnerabilities. How to design your build and deployment process to proactively check container content. How to control access to containers through authentication and authorization. How networking and attached storage are secured in OpenShift Container Platform. Containerized solutions for API management and SSO. The goal of this guide is to understand the incredible security benefits of using OpenShift Container Platform for your containerized workloads and how the entire Red Hat ecosystem plays a part in making and keeping containers secure. It will also help you understand how you can engage with the OpenShift Container Platform to achieve your organization's security goals. 2.1.1. What are containers? Containers package an application and all its dependencies into a single image that can be promoted from development, to test, to production, without change. A container might be part of a larger application that works closely with other containers. Containers provide consistency across environments and multiple deployment targets: physical servers, virtual machines (VMs), and private or public cloud. 
Some of the benefits of using containers include: Infrastructure benefits: sandboxed application processes on a shared Linux operating system kernel; simpler, lighter, and denser than virtual machines; portable across different environments. Application benefits: package my application and all of its dependencies; deploy to any environment in seconds and enable CI/CD; easily access and share containerized components. See Understanding Linux containers from the Red Hat Customer Portal to find out more about Linux containers. To learn about RHEL container tools, see Building, running, and managing containers in the RHEL product documentation. 2.1.2. What is OpenShift Container Platform? Automating how containerized applications are deployed, run, and managed is the job of a platform such as OpenShift Container Platform. At its core, OpenShift Container Platform relies on the Kubernetes project to provide the engine for orchestrating containers across many nodes in scalable data centers. Kubernetes is a project, which can run using different operating systems and add-on components that offer no guarantees of supportability from the project. As a result, the security of different Kubernetes platforms can vary. OpenShift Container Platform is designed to lock down Kubernetes security and integrate the platform with a variety of extended components. To do this, OpenShift Container Platform draws on the extensive Red Hat ecosystem of open source technologies that include the operating systems, authentication, storage, networking, development tools, base container images, and many other components. OpenShift Container Platform can leverage Red Hat's experience in uncovering and rapidly deploying fixes for vulnerabilities in the platform itself as well as the containerized applications running on the platform. Red Hat's experience also extends to efficiently integrating new components with OpenShift Container Platform as they become available and adapting technologies to individual customer needs. Additional resources OpenShift Container Platform architecture OpenShift Security Guide 2.2. Understanding host and VM security Both containers and virtual machines provide ways of separating applications running on a host from the operating system itself. Understanding RHCOS, which is the operating system used by OpenShift Container Platform, will help you see how the host systems protect containers and hosts from each other. 2.2.1. Securing containers on Red Hat Enterprise Linux CoreOS (RHCOS) Containers simplify the act of deploying many applications to run on the same host, using the same kernel and container runtime to spin up each container. The applications can be owned by many users and, because they are kept separate, can run different, and even incompatible, versions of those applications at the same time without issue. In Linux, containers are just a special type of process, so securing containers is similar in many ways to securing any other running process. An environment for running containers starts with an operating system that can secure the host kernel from containers and other processes running on the host, as well as secure containers from each other. Because OpenShift Container Platform 4.15 runs on RHCOS hosts, with the option of using Red Hat Enterprise Linux (RHEL) as worker nodes, the following concepts apply by default to any deployed OpenShift Container Platform cluster. 
These RHEL security features are at the core of what makes running containers in OpenShift Container Platform more secure: Linux namespaces enable creating an abstraction of a particular global system resource to make it appear as a separate instance to processes within a namespace. Consequently, several containers can use the same computing resource simultaneously without creating a conflict. Container namespaces that are separate from the host by default include mount table, process table, network interface, user, control group, UTS, and IPC namespaces. Those containers that need direct access to host namespaces need to have elevated permissions to request that access. See Overview of Containers in Red Hat Systems from the RHEL 8 container documentation for details on the types of namespaces. SELinux provides an additional layer of security to keep containers isolated from each other and from the host. SELinux allows administrators to enforce mandatory access controls (MAC) for every user, application, process, and file. Warning Disabling SELinux on RHCOS is not supported. CGroups (control groups) limit, account for, and isolate the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. CGroups are used to ensure that containers on the same host are not impacted by each other. Secure computing mode (seccomp) profiles can be associated with a container to restrict available system calls. See page 94 of the OpenShift Security Guide for details about seccomp. Deploying containers using RHCOS reduces the attack surface by minimizing the host environment and tuning it for containers. The CRI-O container engine further reduces that attack surface by implementing only those features required by Kubernetes and OpenShift Container Platform to run and manage containers, as opposed to other container engines that implement desktop-oriented standalone features. RHCOS is a version of Red Hat Enterprise Linux (RHEL) that is specially configured to work as control plane (master) and worker nodes on OpenShift Container Platform clusters. So RHCOS is tuned to efficiently run container workloads, along with Kubernetes and OpenShift Container Platform services. To further protect RHCOS systems in OpenShift Container Platform clusters, most containers, except those managing or monitoring the host system itself, should run as a non-root user. Dropping the privilege level or creating containers with the least amount of privileges possible is recommended best practice for protecting your own OpenShift Container Platform clusters. Additional resources How nodes enforce resource constraints Managing security context constraints Supported platforms for OpenShift clusters Requirements for a cluster with user-provisioned infrastructure Choosing how to configure RHCOS Ignition Kernel arguments Kernel modules Disk encryption Chrony time service About the OpenShift Update Service FIPS cryptography 2.2.2. Comparing virtualization and containers Traditional virtualization provides another way to keep application environments separate on the same physical host. However, virtual machines work in a different way than containers. Virtualization relies on a hypervisor spinning up guest virtual machines (VMs), each of which has its own operating system (OS), represented by a running kernel, as well as the running application and its dependencies. With VMs, the hypervisor isolates the guests from each other and from the host kernel. 
Fewer individuals and processes have access to the hypervisor, reducing the attack surface on the physical server. That said, security must still be monitored: one guest VM might be able to use hypervisor bugs to gain access to another VM or the host kernel. And, when the OS needs to be patched, it must be patched on all guest VMs using that OS. Containers can be run inside guest VMs, and there might be use cases where this is desirable. For example, you might be deploying a traditional application in a container, perhaps to lift-and-shift an application to the cloud. Container separation on a single host, however, provides a more lightweight, flexible, and easier-to-scale deployment solution. This deployment model is particularly appropriate for cloud-native applications. Containers are generally much smaller than VMs and consume less memory and CPU. See Linux Containers Compared to KVM Virtualization in the RHEL 7 container documentation to learn about the differences between container and VMs. 2.2.3. Securing OpenShift Container Platform When you deploy OpenShift Container Platform, you have the choice of an installer-provisioned infrastructure (there are several available platforms) or your own user-provisioned infrastructure. Some low-level security-related configuration, such as enabling FIPS mode or adding kernel modules required at first boot, might benefit from a user-provisioned infrastructure. Likewise, user-provisioned infrastructure is appropriate for disconnected OpenShift Container Platform deployments. Keep in mind that, when it comes to making security enhancements and other configuration changes to OpenShift Container Platform, the goals should include: Keeping the underlying nodes as generic as possible. You want to be able to easily throw away and spin up similar nodes quickly and in prescriptive ways. Managing modifications to nodes through OpenShift Container Platform as much as possible, rather than making direct, one-off changes to the nodes. In pursuit of those goals, most node changes should be done during installation through Ignition or later using MachineConfigs that are applied to sets of nodes by the Machine Config Operator. Examples of security-related configuration changes you can do in this way include: Adding kernel arguments Adding kernel modules Enabling support for FIPS cryptography Configuring disk encryption Configuring the chrony time service Besides the Machine Config Operator, there are several other Operators available to configure OpenShift Container Platform infrastructure that are managed by the Cluster Version Operator (CVO). The CVO is able to automate many aspects of OpenShift Container Platform cluster updates. Additional resources FIPS cryptography 2.3. Hardening RHCOS RHCOS was created and tuned to be deployed in OpenShift Container Platform with few if any changes needed to RHCOS nodes. Every organization adopting OpenShift Container Platform has its own requirements for system hardening. As a RHEL system with OpenShift-specific modifications and features added (such as Ignition, ostree, and a read-only /usr to provide limited immutability), RHCOS can be hardened just as you would any RHEL system. Differences lie in the ways you manage the hardening. A key feature of OpenShift Container Platform and its Kubernetes engine is to be able to quickly scale applications and infrastructure up and down as needed. 
Unless it is unavoidable, you do not want to make direct changes to RHCOS by logging into a host and adding software or changing settings. You want to have the OpenShift Container Platform installer and control plane manage changes to RHCOS so new nodes can be spun up without manual intervention. So, if you are setting out to harden RHCOS nodes in OpenShift Container Platform to meet your security needs, you should consider both what to harden and how to go about doing that hardening. 2.3.1. Choosing what to harden in RHCOS The RHEL 9 Security Hardening guide describes how you should approach security for any RHEL system. Use this guide to learn how to approach cryptography, evaluate vulnerabilities, and assess threats to various services. Likewise, you can learn how to scan for compliance standards, check file integrity, perform auditing, and encrypt storage devices. With the knowledge of what features you want to harden, you can then decide how to harden them in RHCOS. 2.3.2. Choosing how to harden RHCOS Direct modification of RHCOS systems in OpenShift Container Platform is discouraged. Instead, you should think of modifying systems in pools of nodes, such as worker nodes and control plane nodes. When a new node is needed, in non-bare metal installs, you can request a new node of the type you want and it will be created from an RHCOS image plus the modifications you created earlier. There are opportunities for modifying RHCOS before installation, during installation, and after the cluster is up and running. 2.3.2.1. Hardening before installation For bare metal installations, you can add hardening features to RHCOS before beginning the OpenShift Container Platform installation. For example, you can add kernel options when you boot the RHCOS installer to turn security features on or off, such as various SELinux booleans or low-level settings, such as symmetric multithreading. Warning Disabling SELinux on RHCOS nodes is not supported. Although bare metal RHCOS installations are more difficult, they offer the opportunity of getting operating system changes in place before starting the OpenShift Container Platform installation. This can be important when you need to ensure that certain features, such as disk encryption or special networking settings, be set up at the earliest possible moment. 2.3.2.2. Hardening during installation You can interrupt the OpenShift Container Platform installation process and change Ignition configs. Through Ignition configs, you can add your own files and systemd services to the RHCOS nodes. You can also make some basic security-related changes to the install-config.yaml file used for installation. Contents added in this way are available at each node's first boot. 2.3.2.3. Hardening after the cluster is running After the OpenShift Container Platform cluster is up and running, there are several ways to apply hardening features to RHCOS: Daemon set: If you need a service to run on every node, you can add that service with a Kubernetes DaemonSet object . Machine config: MachineConfig objects contain a subset of Ignition configs in the same format. By applying machine configs to all worker or control plane nodes, you can ensure that the node of the same type that is added to the cluster has the same changes applied. All of the features noted here are described in the OpenShift Container Platform product documentation. 
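As a minimal sketch of the machine config approach (the object name and kernel argument below are illustrative examples, not recommendations), a MachineConfig that adds a kernel argument to all worker nodes could look like the following; the Machine Config Operator then rolls it out across the worker pool:

$ cat << EOF | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example-kargs
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  kernelArguments:
    - audit=1
EOF

# Watch the worker pool roll out the new rendered machine config
$ oc get mcp worker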
Additional resources OpenShift Security Guide Choosing how to configure RHCOS Modifying Nodes Manually creating the installation configuration file Creating the Kubernetes manifest and Ignition config files Installing RHCOS by using an ISO image Customizing nodes Adding kernel arguments to nodes Installation configuration parameters - see fips Support for FIPS cryptography RHEL core crypto components 2.4. Container image signatures Red Hat delivers signatures for the images in the Red Hat Container Registries. Those signatures can be automatically verified when being pulled to OpenShift Container Platform 4 clusters by using the Machine Config Operator (MCO). Quay.io serves most of the images that make up OpenShift Container Platform, and only the release image is signed. Release images refer to the approved OpenShift Container Platform images, offering a degree of protection against supply chain attacks. However, some extensions to OpenShift Container Platform, such as logging, monitoring, and service mesh, are shipped as Operators from the Operator Lifecycle Manager (OLM). Those images ship from the Red Hat Ecosystem Catalog Container images registry. To verify the integrity of those images between Red Hat registries and your infrastructure, enable signature verification. 2.4.1. Enabling signature verification for Red Hat Container Registries Enabling container signature validation for Red Hat Container Registries requires writing a signature verification policy file specifying the keys to verify images from these registries. For RHEL8 nodes, the registries are already defined in /etc/containers/registries.d by default. Procedure Create a Butane config file, 51-worker-rh-registry-trust.bu , containing the necessary configuration for the worker nodes. Note See "Creating machine configs with Butane" for information about Butane. 
variant: openshift version: 4.15.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } Use Butane to generate a machine config YAML file, 51-worker-rh-registry-trust.yaml , containing the file to be written to disk on the worker nodes: USD butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml Apply the created machine config: USD oc apply -f 51-worker-rh-registry-trust.yaml Check that the worker machine config pool has rolled out with the new machine config: Check that the new machine config was created: USD oc get mc Sample output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2 1 New machine config 2 New rendered machine config Check that the worker machine config pool is updating with the new machine config: USD oc get mcp Sample output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1 1 When the UPDATING field is True , the machine config pool is updating with the new machine config. When the field becomes False , the worker machine config pool has rolled out to the new machine config. If your cluster uses any RHEL7 worker nodes, when the worker machine config pool is updated, create YAML files on those nodes in the /etc/containers/registries.d directory, which specify the location of the detached signatures for a given registry server. The following example works only for images hosted in registry.access.redhat.com and registry.redhat.io . 
Start a debug session to each RHEL7 worker node: USD oc debug node/<node_name> Change your root directory to /host : sh-4.2# chroot /host Create a /etc/containers/registries.d/registry.redhat.io.yaml file that contains the following: docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Create a /etc/containers/registries.d/registry.access.redhat.com.yaml file that contains the following: docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore Exit the debug session. 2.4.2. Verifying the signature verification configuration After you apply the machine configs to the cluster, the Machine Config Controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version. Prerequisites You enabled signature verification by using a machine config file. Procedure On the command line, run the following command to display information about a desired worker: USD oc describe machineconfigpool/worker Example output of initial worker monitoring Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated 
Machine Count: 0 Events: <none> Run the oc describe command again: USD oc describe machineconfigpool/worker Example output after the worker is updated ... Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 ... Note The Observed Generation parameter shows an increased count based on the generation of the controller-produced configuration. This controller updates this value even if it fails to process the specification and generate a revision. The Configuration Source value points to the 51-worker-rh-registry-trust configuration. Confirm that the policy.json file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/policy.json Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } Confirm that the registry.redhat.io.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Confirm that the registry.access.redhat.com.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore 2.4.3. Understanding the verification of container images lacking verifiable signatures Each OpenShift Container Platform release image is immutable and signed with a Red Hat production key. During an OpenShift Container Platform update or installation, a release image might deploy container images that do not have verifiable signatures. Each signed release image digest is immutable. Each reference in the release image is to the immutable digest of another image, so the contents can be trusted transitively. In other words, the signature on the release image validates all release contents. 
For example, the image references lacking a verifiable signature are contained in the signed OpenShift Container Platform release image: Example release info output USD oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2 1 Signed release image SHA. 2 Container image lacking a verifiable signature included in the release. 2.4.3.1. Automated verification during updates Verification of signatures is automatic. The OpenShift Cluster Version Operator (CVO) verifies signatures on the release images during an OpenShift Container Platform update. This is an internal process. An OpenShift Container Platform installation or update fails if the automated verification fails. Verification of signatures can also be done manually using the skopeo command-line utility. Additional resources Introduction to OpenShift Updates 2.4.3.2. Using skopeo to verify signatures of Red Hat container images You can verify the signatures for container images included in an OpenShift Container Platform release image by pulling those signatures from the OCP release mirror site . Because the signatures on the mirror site are not in a format readily understood by Podman or CRI-O, you can use the skopeo standalone-verify command to verify that your release images are signed by Red Hat. Prerequisites You have installed the skopeo command-line utility. Procedure Get the full SHA for your release by running the following command: USD oc adm release info <release_version> \ 1 1 Substitute <release_version> with your release number, for example, 4.14.3 . Example output snippet --- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 --- Pull down the Red Hat release key by running the following command: USD curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt Get the signature file for the specific release that you want to verify by running the following command: USD curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \ 1 1 Replace <sha_from_version> with the SHA value from the full link to the mirror site that matches the SHA of your release. For example, the link to the signature for the 4.12.23 release is https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55/signature-1 , and the SHA value is e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 . Get the manifest for the release image by running the following command: USD skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \ 1 1 Replace <quay_link_to_release> with the output of the oc adm release info command. For example, quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 . Use skopeo to verify the signature: USD skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key where: <release_number> Specifies the release number, for example 4.14.3 . <arch> Specifies the architecture, for example x86_64 .
Example output Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 2.4.4. Additional resources Machine Config Overview 2.5. Understanding compliance For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards or the organization's corporate governance framework. 2.5.1. Understanding compliance and risk management FIPS compliance is one of the most critical components required in highly secure environments, to ensure that only supported cryptographic technologies are allowed on nodes. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. To understand Red Hat's view of OpenShift Container Platform compliance frameworks, refer to the Risk Management and Regulatory Readiness chapter of the OpenShift Security Guide Book . Additional resources Installing a cluster in FIPS mode 2.6. Securing container content To ensure the security of the content inside your containers, you need to start with trusted base images, such as Red Hat Universal Base Images, and add trusted software. To check the ongoing security of your container images, there are both Red Hat and third-party tools for scanning images. 2.6.1. Securing inside the container Applications and infrastructures are composed of readily available components, many of which are open source packages, such as the Linux operating system, JBoss Web Server, PostgreSQL, and Node.js. Containerized versions of these packages are also available. However, you need to know where the packages originally came from, what versions are used, who built them, and whether there is any malicious code inside them. Some questions to answer include: Will what is inside the containers compromise your infrastructure? Are there known vulnerabilities in the application layer? Are the runtime and operating system layers current? By building your containers from Red Hat Universal Base Images (UBI) you are assured of a foundation for your container images that consists of the same RPM-packaged software that is included in Red Hat Enterprise Linux. No subscriptions are required to either use or redistribute UBI images. To assure ongoing security of the containers themselves, security scanning features, used directly from RHEL or added to OpenShift Container Platform, can alert you when an image you are using has vulnerabilities. OpenSCAP image scanning is available in RHEL and the Red Hat Quay Container Security Operator can be added to check container images used in OpenShift Container Platform. 2.6.2. Creating redistributable images with UBI To create containerized applications, you typically start with a trusted base image that offers the components that are usually provided by the operating system.
These include the libraries, utilities, and other features the application expects to see in the operating system's file system. Red Hat Universal Base Images (UBI) were created to encourage anyone building their own containers to start with one that is made entirely from Red Hat Enterprise Linux rpm packages and other content. These UBI images are updated regularly to keep up with security patches and are free to use and redistribute with container images built to include your own software. Search the Red Hat Ecosystem Catalog to both find and check the health of different UBI images. As someone creating secure container images, you might be interested in these two general types of UBI images: UBI : There are standard UBI images for RHEL 7, 8, and 9 ( ubi7/ubi , ubi8/ubi , and ubi9/ubi ), as well as minimal images based on those systems ( ubi7/ubi-minimal , ubi8/ubi-minimal , and ubi9/ubi-minimal). All of these images are preconfigured to point to free repositories of RHEL software that you can add to the container images you build, using standard yum and dnf commands. Red Hat encourages people to use these images on other distributions, such as Fedora and Ubuntu. Red Hat Software Collections : Search the Red Hat Ecosystem Catalog for rhscl/ to find images created to use as base images for specific types of applications. For example, there are Apache httpd ( rhscl/httpd-* ), Python ( rhscl/python-* ), Ruby ( rhscl/ruby-* ), Node.js ( rhscl/nodejs-* ), and Perl ( rhscl/perl-* ) rhscl images. Keep in mind that while UBI images are freely available and redistributable, Red Hat support for these images is only available through Red Hat product subscriptions. See Using Red Hat Universal Base Images in the Red Hat Enterprise Linux documentation for information on how to use and build on standard, minimal, and init UBI images. 2.6.3. Security scanning in RHEL For Red Hat Enterprise Linux (RHEL) systems, OpenSCAP scanning is available from the openscap-utils package. In RHEL, you can use the openscap-podman command to scan images for vulnerabilities. See Scanning containers and container images for vulnerabilities in the Red Hat Enterprise Linux documentation. OpenShift Container Platform enables you to leverage RHEL scanners with your CI/CD process. For example, you can integrate static code analysis tools that test for security flaws in your source code and software composition analysis tools that identify open source libraries to provide metadata on those libraries such as known vulnerabilities. 2.6.3.1. Scanning OpenShift images For the container images that are running in OpenShift Container Platform and are pulled from Red Hat Quay registries, you can use an Operator to list the vulnerabilities of those images. The Red Hat Quay Container Security Operator can be added to OpenShift Container Platform to provide vulnerability reporting for images added to selected namespaces. Container image scanning for Red Hat Quay is performed by Clair . In Red Hat Quay, Clair can search for and report vulnerabilities in images built from RHEL, CentOS, Oracle, Alpine, Debian, and Ubuntu operating system software. 2.6.4. Integrating external scanning OpenShift Container Platform makes use of object annotations to extend functionality. External tools, such as vulnerability scanners, can annotate image objects with metadata to summarize results and control pod execution. This section describes the recognized format of this annotation so it can be reliably used in consoles to display useful data to users.
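After a scanner has written annotations in the format described below, consoles and other tools can read them back with ordinary API queries. The following is a rough sketch; the annotation key follows the quality.images.openshift.io format defined in this section, and openscap is just one example provider:

USD oc get images -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.quality\.images\.openshift\.io/vulnerability\.openscap}{"\n"}{end}'

This prints each image name together with its vulnerability annotation, if one has been set.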
2.6.4.1. Image metadata There are different types of image quality data, including package vulnerabilities and open source software (OSS) license compliance. Additionally, there may be more than one provider of this metadata. To that end, the following annotation format has been reserved: Table 2.1. Annotation key format Component Description Acceptable values qualityType Metadata type vulnerability license operations policy providerId Provider ID string openscap redhatcatalog redhatinsights blackduck jfrog 2.6.4.1.1. Example annotation keys The value of the image quality annotation is structured data that must adhere to the following format: Table 2.2. Annotation value format Field Required? Description Type name Yes Provider display name String timestamp Yes Scan timestamp String description No Short description String reference Yes URL of information source or more details. Required so user may validate the data. String scannerVersion No Scanner version String compliant No Compliance pass or fail Boolean summary No Summary of issues found List (see table below) The summary field must adhere to the following format: Table 2.3. Summary field value format Field Description Type label Display label for component (for example, "critical," "important," "moderate," "low," or "health") String data Data for this component (for example, count of vulnerabilities found or score) String severityIndex Component index allowing for ordering and assigning graphical representation. The value is range 0..3 where 0 = low. Integer reference URL of information source or more details. Optional. String 2.6.4.1.2. Example annotation values This example shows an OpenSCAP annotation for an image with vulnerability summary data and a compliance boolean: OpenSCAP annotation { "name": "OpenSCAP", "description": "OpenSCAP vulnerability score", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://www.open-scap.org/930492", "compliant": true, "scannerVersion": "1.2", "summary": [ { "label": "critical", "data": "4", "severityIndex": 3, "reference": null }, { "label": "important", "data": "12", "severityIndex": 2, "reference": null }, { "label": "moderate", "data": "8", "severityIndex": 1, "reference": null }, { "label": "low", "data": "26", "severityIndex": 0, "reference": null } ] } This example shows the Container images section of the Red Hat Ecosystem Catalog annotation for an image with health index data with an external URL for additional details: Red Hat Ecosystem Catalog annotation { "name": "Red Hat Ecosystem Catalog", "description": "Container health index", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://access.redhat.com/errata/RHBA-2016:1566", "compliant": null, "scannerVersion": "1.2", "summary": [ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ] } 2.6.4.2. Annotating image objects While image stream objects are what an end user of OpenShift Container Platform operates against, image objects are annotated with security metadata. Image objects are cluster-scoped, pointing to a single image that may be referenced by many image streams and tags. 2.6.4.2.1. 
Example annotate CLI command Replace <image> with an image digest, for example sha256:401e359e0f45bfdcf004e258b72e253fd07fba8cc5c6f2ed4f4608fb119ecc2 : USD oc annotate image <image> \ quality.images.openshift.io/vulnerability.redhatcatalog='{ \ "name": "Red Hat Ecosystem Catalog", \ "description": "Container health index", \ "timestamp": "2020-06-01T05:04:46Z", \ "compliant": null, \ "scannerVersion": "1.2", \ "reference": "https://access.redhat.com/errata/RHBA-2020:2347", \ "summary": "[ \ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ]" }' 2.6.4.3. Controlling pod execution Use the images.openshift.io/deny-execution image policy to programmatically control if an image can be run. 2.6.4.3.1. Example annotation annotations: images.openshift.io/deny-execution: true 2.6.4.4. Integration reference In most cases, external tools such as vulnerability scanners develop a script or plugin that watches for image updates, performs scanning, and annotates the associated image object with the results. Typically this automation calls the OpenShift Container Platform 4.15 REST APIs to write the annotation. See OpenShift Container Platform REST APIs for general information on the REST APIs. 2.6.4.4.1. Example REST API call The following example call using curl overrides the value of the annotation. Be sure to replace the values for <token> , <openshift_server> , <image_id> , and <image_annotation> . Patch API call USD curl -X PATCH \ -H "Authorization: Bearer <token>" \ -H "Content-Type: application/merge-patch+json" \ https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> \ --data '{ <image_annotation> }' The following is an example of PATCH payload data: Patch call data { "metadata": { "annotations": { "quality.images.openshift.io/vulnerability.redhatcatalog": "{ 'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }" } } } Additional resources Image stream objects 2.7. Using container registries securely Container registries store container images to: Make images accessible to others Organize images into repositories that can include multiple versions of an image Optionally limit access to images, based on different authentication methods, or make them publicly available There are public container registries, such as Quay.io and Docker Hub where many people and organizations share their images. The Red Hat Registry offers supported Red Hat and partner images, while the Red Hat Ecosystem Catalog offers detailed descriptions and health checks for those images. To manage your own registry, you could purchase a container registry such as Red Hat Quay . From a security standpoint, some registries provide special features to check and improve the health of your containers. For example, Red Hat Quay offers container vulnerability scanning with Clair security scanner, build triggers to automatically rebuild images when source code changes in GitHub and other locations, and the ability to use role-based access control (RBAC) to secure access to images. 2.7.1. Knowing where containers come from? There are tools you can use to scan and track the contents of your downloaded and deployed container images. However, there are many public sources of container images. 
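Before you pull an image from a public source, it can help to check where it comes from. As a hedged sketch, a tool such as skopeo can show an image's digest, labels, and creation time without pulling it:

USD skopeo inspect docker://registry.access.redhat.com/ubi9/ubi:latest

Comparing the reported digest and labels with the publisher's catalog entry is a simple way to confirm that you are getting the image you expect.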
When using public container registries, you can add a layer of protection by using trusted sources. 2.7.2. Immutable and certified containers Consuming security updates is particularly important when managing immutable containers . Immutable containers are containers that will never be changed while running. When you deploy immutable containers, you do not step into the running container to replace one or more binaries. From an operational standpoint, you rebuild and redeploy an updated container image to replace a container instead of changing it. Red Hat certified images are: Free of known vulnerabilities in the platform components or layers Compatible across the RHEL platforms, from bare metal to cloud Supported by Red Hat The list of known vulnerabilities is constantly evolving, so you must track the contents of your deployed container images, as well as newly downloaded images, over time. You can use Red Hat Security Advisories (RHSAs) to alert you to any newly discovered issues in Red Hat certified container images, and direct you to the updated image. Alternatively, you can go to the Red Hat Ecosystem Catalog to look up that and other security-related issues for each Red Hat image. 2.7.3. Getting containers from Red Hat Registry and Ecosystem Catalog Red Hat lists certified container images for Red Hat products and partner offerings from the Container Images section of the Red Hat Ecosystem Catalog. From that catalog, you can see details of each image, including CVE, software packages listings, and health scores. Red Hat images are actually stored in what is referred to as the Red Hat Registry , which is represented by a public container registry ( registry.access.redhat.com ) and an authenticated registry ( registry.redhat.io ). Both include basically the same set of container images, with registry.redhat.io including some additional images that require authentication with Red Hat subscription credentials. Container content is monitored for vulnerabilities by Red Hat and updated regularly. When Red Hat releases security updates, such as fixes to glibc , DROWN , or Dirty Cow , any affected container images are also rebuilt and pushed to the Red Hat Registry. Red Hat uses a health index to reflect the security risk for each container provided through the Red Hat Ecosystem Catalog. Because containers consume software provided by Red Hat and the errata process, old, stale containers are insecure whereas new, fresh containers are more secure. To illustrate the age of containers, the Red Hat Ecosystem Catalog uses a grading system. A freshness grade is a measure of the oldest and most severe security errata available for an image. "A" is more up to date than "F". See Container Health Index grades as used inside the Red Hat Ecosystem Catalog for more details on this grading system. See the Red Hat Product Security Center for details on security updates and vulnerabilities related to Red Hat software. Check out Red Hat Security Advisories to search for specific advisories and CVEs. 2.7.4. OpenShift Container Registry OpenShift Container Platform includes the OpenShift Container Registry , a private registry running as an integrated component of the platform that you can use to manage your container images. The OpenShift Container Registry provides role-based access controls that allow you to manage who can pull and push which container images. OpenShift Container Platform also supports integration with other private registries that you might already be using, such as Red Hat Quay. 
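For example, the registry's role-based access controls can be granted with ordinary RBAC commands. The following sketch (the project and service account names are illustrative) allows pods in one project to pull images that were pushed to the internal registry from another project:

USD oc policy add-role-to-user system:image-puller \
    system:serviceaccount:app-project:default \
    --namespace=shared-images

Related roles, such as system:image-pusher and system:image-builder, control who can push and build images in a project.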
Additional resources Integrated OpenShift image registry 2.7.5. Storing containers using Red Hat Quay Red Hat Quay is an enterprise-quality container registry product from Red Hat. Development for Red Hat Quay is done through the upstream Project Quay . Red Hat Quay is available to deploy on-premise or through the hosted version of Red Hat Quay at Quay.io . Security-related features of Red Hat Quay include: Time machine : Allows images with older tags to expire after a set period of time or based on a user-selected expiration time. Repository mirroring : Lets you mirror other registries for security reasons, such as hosting a public repository on Red Hat Quay behind a company firewall, or for performance reasons, to keep registries closer to where they are used. Action log storage : Save Red Hat Quay logging output to Elasticsearch storage or Splunk to allow for later search and analysis. Clair : Scan images against a variety of Linux vulnerability databases, based on the origins of each container image. Internal authentication : Use the default local database to handle RBAC authentication to Red Hat Quay or choose from LDAP, Keystone (OpenStack), JWT Custom Authentication, or External Application Token authentication. External authorization (OAuth) : Allow authorization to Red Hat Quay from GitHub, GitHub Enterprise, or Google Authentication. Access settings : Generate tokens to allow access to Red Hat Quay from docker, rkt, anonymous access, user-created accounts, encrypted client passwords, or prefix username autocompletion. Ongoing integration of Red Hat Quay with OpenShift Container Platform continues, with several OpenShift Container Platform Operators of particular interest. The Quay Bridge Operator lets you replace the internal OpenShift image registry with Red Hat Quay. The Red Hat Quay Container Security Operator lets you check vulnerabilities of images running in OpenShift Container Platform that were pulled from Red Hat Quay registries. 2.8. Securing the build process In a container environment, the software build process is the stage in the life cycle where application code is integrated with the required runtime libraries. Managing this build process is key to securing the software stack. 2.8.1. Building once, deploying everywhere Using OpenShift Container Platform as the standard platform for container builds enables you to guarantee the security of the build environment. Adhering to a "build once, deploy everywhere" philosophy ensures that the product of the build process is exactly what is deployed in production. It is also important to maintain the immutability of your containers. You should not patch running containers, but rebuild and redeploy them. As your software moves through the stages of building, testing, and production, it is important that the tools making up your software supply chain be trusted. The following figure illustrates the process and tools that could be incorporated into a trusted software supply chain for containerized software: OpenShift Container Platform can be integrated with trusted code repositories (such as GitHub) and development platforms (such as Che) for creating and managing secure code. Unit testing could rely on Cucumber and JUnit . You could inspect your containers for vulnerabilities and compliance issues with Anchore or Twistlock, and use image scanning tools such as AtomicScan or Clair. Tools such as Sysdig could provide ongoing monitoring of your containerized applications. 2.8.2.
Managing builds You can use Source-to-Image (S2I) to combine source code and base images. Builder images make use of S2I to enable your development and operations teams to collaborate on a reproducible build environment. With Red Hat S2I images available as Universal Base Image (UBI) images, you can now freely redistribute your software with base images built from real RHEL RPM packages. Red Hat has removed subscription restrictions to allow this. When developers commit code with Git for an application using build images, OpenShift Container Platform can perform the following functions: Trigger, either by using webhooks on the code repository or other automated continuous integration (CI) process, to automatically assemble a new image from available artifacts, the S2I builder image, and the newly committed code. Automatically deploy the newly built image for testing. Promote the tested image to production where it can be automatically deployed using a CI process. You can use the integrated OpenShift Container Registry to manage access to final images. Both S2I and native build images are automatically pushed to your OpenShift Container Registry. In addition to the included Jenkins for CI, you can also integrate your own build and CI environment with OpenShift Container Platform using RESTful APIs, as well as use any API-compliant image registry. 2.8.3. Securing inputs during builds In some scenarios, build operations require credentials to access dependent resources, but it is undesirable for those credentials to be available in the final application image produced by the build. You can define input secrets for this purpose. For example, when building a Node.js application, you can set up your private mirror for Node.js modules. To download modules from that private mirror, you must supply a custom .npmrc file for the build that contains a URL, user name, and password. For security reasons, you do not want to expose your credentials in the application image. Using this example scenario, you can add an input secret to a new BuildConfig object: Create the secret, if it does not exist: USD oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc This creates a new secret named secret-npmrc , which contains the base64 encoded content of the ~/.npmrc file. Add the secret to the source section in the existing BuildConfig object: source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc To include the secret in a new BuildConfig object, run the following command: USD oc new-build \ openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git \ --build-secret secret-npmrc 2.8.4. Designing your build process You can design your container image management and build process to use container layers so that you can separate control. For example, an operations team manages base images, while architects manage middleware, runtimes, databases, and other solutions. Developers can then focus on application layers and focus on writing code. Because new vulnerabilities are identified daily, you need to proactively check container content over time. To do this, you should integrate automated security testing into your build or CI process. For example: SAST / DAST - Static and Dynamic security testing tools. Scanners for real-time checking against known vulnerabilities. 
Tools like these catalog the open source packages in your container, notify you of any known vulnerabilities, and update you when new vulnerabilities are discovered in previously scanned packages. Your CI process should include policies that flag builds with issues discovered by security scans so that your team can take appropriate action to address those issues. You should sign your custom built containers to ensure that nothing is tampered with between build and deployment. Using GitOps methodology, you can use the same CI/CD mechanisms to manage not only your application configurations, but also your OpenShift Container Platform infrastructure. 2.8.5. Building Knative serverless applications Relying on Kubernetes and Kourier, you can build, deploy, and manage serverless applications by using OpenShift Serverless in OpenShift Container Platform. As with other builds, you can use S2I images to build your containers, then serve them using Knative services. View Knative application builds through the Topology view of the OpenShift Container Platform web console. 2.8.6. Additional resources Understanding image builds Triggering and modifying builds Creating build inputs Input secrets and config maps OpenShift Serverless overview Viewing application composition using the Topology view 2.9. Deploying containers You can use a variety of techniques to make sure that the containers you deploy hold the latest production-quality content and that they have not been tampered with. These techniques include setting up build triggers to incorporate the latest code and using signatures to ensure that the container comes from a trusted source and has not been modified. 2.9.1. Controlling container deployments with triggers If something happens during the build process, or if a vulnerability is discovered after an image has been deployed, you can use tooling for automated, policy-based deployment to remediate. You can use triggers to rebuild and replace images, ensuring the immutable containers process, instead of patching running containers, which is not recommended. For example, you build an application using three container image layers: core, middleware, and applications. An issue is discovered in the core image and that image is rebuilt. After the build is complete, the image is pushed to your OpenShift Container Registry. OpenShift Container Platform detects that the image has changed and automatically rebuilds and deploys the application image, based on the defined triggers. This change incorporates the fixed libraries and ensures that the production code is identical to the most current image. You can use the oc set triggers command to set a deployment trigger. For example, to set a trigger for a deployment called deployment-example: USD oc set triggers deploy/deployment-example \ --from-image=example:latest \ --containers=web 2.9.2. Controlling what image sources can be deployed It is important that the intended images are actually being deployed, that the images including the contained content are from trusted sources, and they have not been altered. Cryptographic signing provides this assurance. OpenShift Container Platform enables cluster administrators to apply security policy that is broad or narrow, reflecting deployment environment and security requirements. 
Two parameters define this policy: one or more registries, with optional project namespace trust type, such as accept, reject, or require public key(s) You can use these policy parameters to allow, deny, or require a trust relationship for entire registries, parts of registries, or individual images. Using trusted public keys, you can ensure that the source is cryptographically verified. The policy rules apply to nodes. Policy may be applied uniformly across all nodes or targeted for different node workloads (for example, build, zone, or environment). Example image signature policy file { "default": [{"type": "reject"}], "transports": { "docker": { "access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "atomic": { "172.30.1.1:5000/openshift": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "172.30.1.1:5000/production": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/example.com/pubkey" } ], "172.30.1.1:5000": [{"type": "reject"}] } } } The policy can be saved onto a node as /etc/containers/policy.json . Saving this file to a node is best accomplished using a new MachineConfig object. This example enforces the following rules: Require images from the Red Hat Registry ( registry.access.redhat.com ) to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the openshift namespace to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the production namespace to be signed by the public key for example.com . Reject all other registries not specified by the global default definition. 2.9.3. Using signature transports A signature transport is a way to store and retrieve the binary signature blob. There are two types of signature transports. atomic : Managed by the OpenShift Container Platform API. docker : Served as a local file or by a web server. The OpenShift Container Platform API manages signatures that use the atomic transport type. You must store the images that use this signature type in your OpenShift Container Registry. Because the docker/distribution extensions API auto-discovers the image signature endpoint, no additional configuration is required. Signatures that use the docker transport type are served by local file or web server. These signatures are more flexible; you can serve images from any container image registry and use an independent server to deliver binary signatures. However, the docker transport type requires additional configuration. You must configure the nodes with the URI of the signature server by placing arbitrarily-named YAML files into a directory on the host system, /etc/containers/registries.d by default. The YAML configuration files contain a registry URI and a signature server URI, or sigstore : Example registries.d file docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore In this example, the Red Hat Registry, access.redhat.com , is the signature server that provides signatures for the docker transport type. Its URI is defined in the sigstore parameter. You might name this file /etc/containers/registries.d/redhat.com.yaml and use the Machine Config Operator to automatically place the file on each node in your cluster. No service restart is required since policy and registries.d files are dynamically loaded by the container runtime. 2.9.4. 
Creating secrets and config maps The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, and private source repository credentials. Secrets decouple sensitive content from pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. For example, to add a secret to your deployment configuration so that it can access a private image repository, do the following: Procedure Log in to the OpenShift Container Platform web console. Create a new project. Navigate to Resources Secrets and create a new secret. Set Secret Type to Image Secret and Authentication Type to Image Registry Credentials to enter credentials for accessing a private image repository. When creating a deployment configuration (for example, from the Add to Project Deploy Image page), set the Pull Secret to your new secret. Config maps are similar to secrets, but are designed to support working with strings that do not contain sensitive information. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. 2.9.5. Automating continuous deployment You can integrate your own continuous deployment (CD) tooling with OpenShift Container Platform. By leveraging CI/CD and OpenShift Container Platform, you can automate the process of rebuilding the application to incorporate the latest fixes, testing, and ensuring that it is deployed everywhere within the environment. Additional resources Input secrets and config maps 2.10. Securing the container platform OpenShift Container Platform and Kubernetes APIs are key to automating container management at scale. APIs are used to: Validate and configure the data for pods, services, and replication controllers. Perform project validation on incoming requests and invoke triggers on other major system components. Security-related features in OpenShift Container Platform that are based on Kubernetes include: Multitenancy, which combines Role-Based Access Controls and network policies to isolate containers at multiple levels. Admission plugins, which form boundaries between an API and those making requests to the API. OpenShift Container Platform uses Operators to automate and simplify the management of Kubernetes-level security features. 2.10.1. Isolating containers with multitenancy Multitenancy allows applications on an OpenShift Container Platform cluster that are owned by multiple users, and run across multiple hosts and namespaces, to remain isolated from each other and from outside attacks. You obtain multitenancy by applying role-based access control (RBAC) to Kubernetes namespaces. In Kubernetes, namespaces are areas where applications can run in ways that are separate from other applications. OpenShift Container Platform uses and extends namespaces by adding extra annotations, including MCS labeling in SELinux, and identifying these extended namespaces as projects . Within the scope of a project, users can maintain their own cluster resources, including service accounts, policies, constraints, and various other objects. RBAC objects are assigned to projects to authorize selected users to have access to those projects. That authorization takes the form of rules, roles, and bindings: Rules define what a user can create or access in a project. 
Roles are collections of rules that you can bind to selected users or groups. Bindings define the association between users or groups and roles. Local RBAC roles and bindings attach a user or group to a particular project. Cluster RBAC can attach cluster-wide roles and bindings to all projects in a cluster. There are default cluster roles that can be assigned to provide admin , basic-user , cluster-admin , and cluster-status access. 2.10.2. Protecting control plane with admission plugins While RBAC controls access rules between users and groups and available projects, admission plugins define access to the OpenShift Container Platform master API. Admission plugins form a chain of rules that consist of: Default admissions plugins: These implement a default set of policies and resources limits that are applied to components of the OpenShift Container Platform control plane. Mutating admission plugins: These plugins dynamically extend the admission chain. They call out to a webhook server and can both authenticate a request and modify the selected resource. Validating admission plugins: These validate requests for a selected resource and can both validate the request and ensure that the resource does not change again. API requests go through admissions plugins in a chain, with any failure along the way causing the request to be rejected. Each admission plugin is associated with particular resources and only responds to requests for those resources. 2.10.2.1. Security context constraints (SCCs) You can use security context constraints (SCCs) to define a set of conditions that a pod must run with to be accepted into the system. Some aspects that can be managed by SCCs include: Running of privileged containers Capabilities a container can request to be added Use of host directories as volumes SELinux context of the container Container user ID If you have the required permissions, you can adjust the default SCC policies to be more permissive, if required. 2.10.2.2. Granting roles to service accounts You can assign roles to service accounts, in the same way that users are assigned role-based access. There are three default service accounts created for each project. A service account: is limited in scope to a particular project derives its name from its project is automatically assigned an API token and credentials to access the OpenShift Container Registry Service accounts associated with platform components automatically have their keys rotated. 2.10.3. Authentication and authorization 2.10.3.1. Controlling access using OAuth You can use API access control via authentication and authorization for securing your container platform. The OpenShift Container Platform master includes a built-in OAuth server. Users can obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to authenticate using an identity provider , such as LDAP, GitHub, or Google. The identity provider is used by default for new OpenShift Container Platform deployments, but you can configure this at initial installation time or postinstallation. 2.10.3.2. API access control and management Applications can have multiple, independent API services which have different endpoints that require management. OpenShift Container Platform includes a containerized version of the 3scale API gateway so that you can manage your APIs and control access. 
3scale gives you a variety of standard options for API authentication and security, which can be used alone or in combination to issue credentials and control access: standard API keys, application ID and key pair, and OAuth 2.0. You can restrict access to specific endpoints, methods, and services and apply access policy for groups of users. Application plans allow you to set rate limits for API usage and control traffic flow for groups of developers. For a tutorial on using APIcast v2, the containerized 3scale API Gateway, see Running APIcast on Red Hat OpenShift in the 3scale documentation. 2.10.3.3. Red Hat Single Sign-On The Red Hat Single Sign-On server enables you to secure your applications by providing web single sign-on capabilities based on standards, including SAML 2.0, OpenID Connect, and OAuth 2.0. The server can act as a SAML or OpenID Connect-based identity provider (IdP), mediating with your enterprise user directory or third-party identity provider for identity information and your applications using standards-based tokens. You can integrate Red Hat Single Sign-On with LDAP-based directory services including Microsoft Active Directory and Red Hat Enterprise Linux Identity Management. 2.10.3.4. Secure self-service web console OpenShift Container Platform provides a self-service web console to ensure that teams do not access other environments without authorization. OpenShift Container Platform ensures a secure multitenant master by providing the following: Access to the master uses Transport Layer Security (TLS) Access to the API Server uses X.509 certificates or OAuth access tokens Project quota limits the damage that a rogue token could do The etcd service is not exposed directly to the cluster 2.10.4. Managing certificates for the platform OpenShift Container Platform has multiple components within its framework that use REST-based HTTPS communication leveraging encryption via TLS certificates. OpenShift Container Platform's installer configures these certificates during installation. There are some primary components that generate this traffic: masters (API server and controllers) etcd nodes registry router 2.10.4.1. Configuring custom certificates You can configure custom serving certificates for the public hostnames of the API server and web console during initial installation or when redeploying certificates. You can also use a custom CA. Additional resources Introduction to OpenShift Container Platform Using RBAC to define and apply permissions About admission plugins Managing security context constraints SCC reference commands Examples of granting roles to service accounts Configuring the internal OAuth server Understanding identity provider configuration Certificate types and descriptions Proxy certificates 2.11. Securing networks Network security can be managed at several levels. At the pod level, network namespaces can prevent containers from seeing other pods or the host system by restricting network access. Network policies give you control over allowing and rejecting connections. You can manage ingress and egress traffic to and from your containerized applications. 2.11.1. Using network namespaces OpenShift Container Platform uses software-defined networking (SDN) to provide a unified cluster network that enables communication between containers across the cluster. Network policy mode, by default, makes all pods in a project accessible from other pods and network endpoints. 
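NetworkPolicy objects, described next, let you restrict that default behavior. As a quick, hedged illustration, a minimal policy that blocks all incoming traffic to the pods in a project (the policy name is arbitrary) might look like this:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default          # example name
spec:
  podSelector: {}                # empty selector: applies to every pod in the project
  ingress: []                    # no ingress rules: all incoming connections are denied

More selective policies can then admit only the connections that an application actually needs.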
To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Using multitenant mode, you can provide project-level isolation for pods and services. 2.11.2. Isolating pods with network policies Using network policies , you can isolate pods from each other in the same project. Network policies can deny all network access to a pod, only allow connections for the Ingress Controller, reject connections from pods in other projects, or set similar rules for how networks behave. Additional resources About network policy 2.11.3. Using multiple pod networks Each running container has only one network interface by default. The Multus CNI plugin lets you create multiple CNI networks, and then attach any of those networks to your pods. In that way, you can do things like separate private data onto a more restricted network and have multiple network interfaces on each node. Additional resources Using multiple networks 2.11.4. Isolating applications OpenShift Container Platform enables you to segment network traffic on a single cluster to make multitenant clusters that isolate users, teams, applications, and environments from non-global resources. Additional resources Configuring network isolation using OpenShiftSDN 2.11.5. Securing ingress traffic There are many security implications related to how you configure access to your Kubernetes services from outside of your OpenShift Container Platform cluster. Besides exposing HTTP and HTTPS routes, ingress routing allows you to set up NodePort or LoadBalancer ingress types. NodePort exposes an application's service API object from each cluster worker. LoadBalancer lets you assign an external load balancer to an associated service API object in your OpenShift Container Platform cluster. Additional resources Configuring ingress cluster traffic 2.11.6. Securing egress traffic OpenShift Container Platform provides the ability to control egress traffic using either a router or firewall method. For example, you can use IP whitelisting to control database access. A cluster administrator can assign one or more egress IP addresses to a project in an OpenShift Container Platform SDN network provider. Likewise, a cluster administrator can prevent egress traffic from going outside of an OpenShift Container Platform cluster using an egress firewall. By assigning a fixed egress IP address, you can have all outgoing traffic assigned to that IP address for a particular project. With the egress firewall, you can prevent a pod from connecting to an external network, prevent a pod from connecting to an internal network, or limit a pod's access to specific internal subnets. Additional resources Configuring an egress firewall to control access to external IP addresses Configuring egress IPs for a project 2.12. Securing attached storage OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. In particular, OpenShift Container Platform can use storage types that support the Container Storage Interface. 2.12.1. Persistent volume plugins Containers are useful for both stateless and stateful applications. Protecting attached storage is a key element of securing stateful services. Using the Container Storage Interface (CSI), OpenShift Container Platform can incorporate storage from any storage back end that supports the CSI interface. 
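As a short, hedged illustration of how such CSI-backed storage is requested (the storage class name gp3-csi, the claim size, and the project name are assumptions that depend on the provisioners configured in your cluster):
oc apply -n my-project -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce          # must be an access mode the backing volume supports
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp3-csi  # assumed CSI-backed storage class defined by a cluster administrator
EOF
The requested access mode has to match what the underlying plugin and volume support, as the following paragraphs describe.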
OpenShift Container Platform provides plugins for multiple types of storage, including: Red Hat OpenShift Data Foundation * AWS Elastic Block Stores (EBS) * AWS Elastic File System (EFS) * Azure Disk * Azure File * OpenStack Cinder * GCE Persistent Disks * VMware vSphere * Network File System (NFS) FlexVolume Fibre Channel iSCSI Plugins for those storage types with dynamic provisioning are marked with an asterisk (*). Data in transit is encrypted via HTTPS for all OpenShift Container Platform components communicating with each other. You can mount a persistent volume (PV) on a host in any way supported by your storage type. Different types of storage have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV has its own set of access modes describing that specific PV's capabilities, such as ReadWriteOnce , ReadOnlyMany , and ReadWriteMany . 2.12.2. Shared storage For shared storage providers like NFS, the PV registers its group ID (GID) as an annotation on the PV resource. Then, when the PV is claimed by the pod, the annotated GID is added to the supplemental groups of the pod, giving that pod access to the contents of the shared storage. 2.12.3. Block storage For block storage providers like AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI, OpenShift Container Platform uses SELinux capabilities to secure the root of the mounted volume for non-privileged pods, making the mounted volume owned by and only visible to the container with which it is associated. Additional resources Understanding persistent storage Configuring CSI volumes Dynamic provisioning Persistent storage using NFS Persistent storage using AWS Elastic Block Store Persistent storage using GCE Persistent Disk 2.13. Monitoring cluster events and logs The ability to monitor and audit an OpenShift Container Platform cluster is an important part of safeguarding the cluster and its users against inappropriate usage. There are two main sources of cluster-level information that are useful for this purpose: events and logging. 2.13.1. Watching cluster events Cluster administrators are encouraged to familiarize themselves with the Event resource type and review the list of system events to determine which events are of interest. Events are associated with a namespace, either the namespace of the resource they are related to or, for cluster events, the default namespace. The default namespace holds relevant events for monitoring or auditing a cluster, such as node events and resource events related to infrastructure components. The master API and oc command do not provide parameters to scope a listing of events to only those related to nodes. A simple approach would be to use grep : USD oc get event -n default | grep Node Example output 1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure ... A more flexible approach is to output the events in a form that other tools can process. 
For example, the following query uses the jq tool against JSON output to extract only NodeHasDiskPressure events: USD oc get events -n default -o json \ | jq '.items[] | select(.involvedObject.kind == "Node" and .reason == "NodeHasDiskPressure")' Example output { "apiVersion": "v1", "count": 3, "involvedObject": { "kind": "Node", "name": "origin-node-1.example.local", "uid": "origin-node-1.example.local" }, "kind": "Event", "reason": "NodeHasDiskPressure", ... } Events related to resource creation, modification, or deletion can also be good candidates for detecting misuse of the cluster. The following query, for example, can be used to look for excessive pulling of images: USD oc get events --all-namespaces -o json \ | jq '[.items[] | select(.involvedObject.kind == "Pod" and .reason == "Pulling")] | length' Example output 4 Note When a namespace is deleted, its events are deleted as well. Events can also expire and are deleted to prevent filling up etcd storage. Events are not stored as a permanent record and frequent polling is necessary to capture statistics over time. 2.13.2. Logging Using the oc logs command, you can view container logs, build configs and deployments in real time. Different users have different levels of access to logs: Users who have access to a project are able to see the logs for that project by default. Users with admin roles can access all container logs. To save your logs for further audit and analysis, you can enable the cluster-logging add-on feature to collect, manage, and view system, container, and audit logs. You can deploy, manage, and upgrade OpenShift Logging through the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator. 2.13.3. Audit logs With audit logs , you can follow a sequence of activities associated with how a user, administrator, or other OpenShift Container Platform component is behaving. API audit logging is done on each server. Additional resources List of system events Understanding OpenShift Logging Viewing audit logs
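Building on the jq queries shown earlier in this section, and because events are not stored as a permanent record, a minimal polling sketch such as the following can be used to capture node-related events over time (the output path and interval are arbitrary choices, not recommendations from this guide):
# poll node-related events every five minutes and append them to a local file for later review
while true; do
  oc get events -n default -o json \
    | jq -r '.items[] | select(.involvedObject.kind == "Node") | "\(.lastTimestamp) \(.involvedObject.name) \(.reason)"' \
    >> /var/tmp/node-events.log
  sleep 300
done
The same pattern can be pointed at other namespaces or reasons, such as the image-pull query above.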
[ "variant: openshift version: 4.15.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml", "oc apply -f 51-worker-rh-registry-trust.yaml", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1", "oc debug node/<node_name>", "sh-4.2# chroot /host", "docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc describe machineconfigpool/worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 
51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated Machine Count: 0 Events: <none>", "oc describe machineconfigpool/worker", "Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3", "oc debug node/<node> -- chroot /host cat /etc/containers/policy.json", "Starting pod/<node>-debug To use host binaries, run `chroot /host` { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml", "Starting 
pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml", "Starting pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2", "oc adm release info <release_version> \\ 1", "--- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 ---", "curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt", "curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \\ 1", "skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \\ 1", "skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key", "Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55", "quality.images.openshift.io/<qualityType>.<providerId>: {}", "quality.images.openshift.io/vulnerability.blackduck: {} quality.images.openshift.io/vulnerability.jfrog: {} quality.images.openshift.io/license.blackduck: {} quality.images.openshift.io/vulnerability.openscap: {}", "{ \"name\": \"OpenSCAP\", \"description\": \"OpenSCAP vulnerability score\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://www.open-scap.org/930492\", \"compliant\": true, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"critical\", \"data\": \"4\", \"severityIndex\": 3, \"reference\": null }, { \"label\": \"important\", \"data\": \"12\", \"severityIndex\": 2, \"reference\": null }, { \"label\": \"moderate\", \"data\": \"8\", \"severityIndex\": 1, \"reference\": null }, { \"label\": \"low\", \"data\": \"26\", \"severityIndex\": 0, \"reference\": null } ] }", "{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://access.redhat.com/errata/RHBA-2016:1566\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ] }", "oc annotate image <image> quality.images.openshift.io/vulnerability.redhatcatalog='{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2020-06-01T05:04:46Z\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"reference\": \"https://access.redhat.com/errata/RHBA-2020:2347\", \"summary\": \"[ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ]\" }'", "annotations: images.openshift.io/deny-execution: true", "curl -X PATCH -H \"Authorization: Bearer <token>\" -H \"Content-Type: application/merge-patch+json\" https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> --data '{ <image_annotation> }'", "{ \"metadata\": { \"annotations\": { \"quality.images.openshift.io/vulnerability.redhatcatalog\": \"{ 
'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }\" } } }", "oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc", "source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc", "oc new-build openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git --build-secret secret-npmrc", "oc set triggers deploy/deployment-example --from-image=example:latest --containers=web", "{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"atomic\": { \"172.30.1.1:5000/openshift\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"172.30.1.1:5000/production\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/example.com/pubkey\" } ], \"172.30.1.1:5000\": [{\"type\": \"reject\"}] } } }", "docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc get event -n default | grep Node", "1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure", "oc get events -n default -o json | jq '.items[] | select(.involvedObject.kind == \"Node\" and .reason == \"NodeHasDiskPressure\")'", "{ \"apiVersion\": \"v1\", \"count\": 3, \"involvedObject\": { \"kind\": \"Node\", \"name\": \"origin-node-1.example.local\", \"uid\": \"origin-node-1.example.local\" }, \"kind\": \"Event\", \"reason\": \"NodeHasDiskPressure\", }", "oc get events --all-namespaces -o json | jq '[.items[] | select(.involvedObject.kind == \"Pod\" and .reason == \"Pulling\")] | length'", "4" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_and_compliance/container-security-1
9.5. Configuring Red Hat JBoss Data Grid for Authorization
9.5. Configuring Red Hat JBoss Data Grid for Authorization Authorization is configured at two levels: the cache container (CacheManager), and at the single cache. CacheManager The following is an example configuration for authorization at the CacheManager level: Example 9.4. CacheManager Authorization (Declarative Configuration) Each cache container determines: whether to use authorization. a class which will map principals to a set of roles. a set of named roles and the permissions they represent. You can choose to use only a subset of the roles defined at the container level. Roles Roles may be applied on a cache-per-cache basis, using the roles defined at the cache-container level, as follows: Example 9.5. Defining Roles Important Any cache that is intended to require authentication must have a listing of roles defined; otherwise authentication is not enforced as the no-anonymous policy is defined by the cache's authorization. Programmatic CacheManager Authorization (Library Mode) The following example shows how to set up the same authorization parameters for Library mode using programmatic configuration: Example 9.6. CacheManager Authorization Programmatic Configuration Important The REST protocol is not supported for use with authorization, and any attempts to access a cache with authorization enabled will result in a SecurityException .
[ "<cache-container name=\"local\" default-cache=\"default\"> <security> <authorization> <identity-role-mapper /> <role name=\"admin\" permissions=\"ALL\"/> <role name=\"reader\" permissions=\"READ\"/> <role name=\"writer\" permissions=\"WRITE\"/> <role name=\"supervisor\" permissions=\"ALL_READ ALL_WRITE\"/> </authorization> </security> </cache-container>", "<local-cache name=\"secured\"> <security> <authorization roles=\"admin reader writer supervisor\"/> </security> </local-cache>", "GlobalConfigurationBuilder global = new GlobalConfigurationBuilder(); global .security() .authorization() .principalRoleMapper(new IdentityRoleMapper()) .role(\"admin\") .permission(CachePermission.ALL) .role(\"supervisor\") .permission(CachePermission.EXEC) .permission(CachePermission.READ) .permission(CachePermission.WRITE) .role(\"reader\") .permission(CachePermission.READ); ConfigurationBuilder config = new ConfigurationBuilder(); config .security() .enable() .authorization() .role(\"admin\") .role(\"supervisor\") .role(\"reader\");" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/Configuring_Red_Hat_JBoss_Data_Grid_for_Authorization
6.8. Attaching a Red Hat Subscription and Enabling the Certificate System Package Repository
6.8. Attaching a Red Hat Subscription and Enabling the Certificate System Package Repository Before you can install and update Certificate System, you must enable the corresponding repository: Attach the Red Hat subscriptions to the system: Skip this step if your system is already registered or has a subscription attached that provides Red Hat Certificate System. Register the system to the Red Hat subscription management service. You can use the --auto-attach option to automatically apply an available subscription for the operating system. List the available subscriptions and note the pool ID providing the Red Hat Certificate System. For example: In case you have a lot of subscriptions, the output of the command can be very long. You can optionally redirect the output to a file: Attach the Certificate System subscription to the system using the pool ID from the previous step: Enable the Certificate System repository: Enable the Certificate System module stream: Installing the required packages is described in the Chapter 7, Installing and Configuring Certificate System chapter. Note For compliance, only enable Red Hat approved repositories. Only Red Hat approved repositories can be enabled through the subscription-manager utility.
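Before moving on to the installation chapter, the enabled repository and module stream can be double-checked; a brief sketch (output varies by system):
# confirm the Certificate System repository is among the enabled repositories
subscription-manager repos --list-enabled | grep -A 2 certsys-10-for-rhel-8-x86_64-eus-rpms

# confirm the redhat-pki module stream is enabled
dnf module list redhat-pki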
[ "subscription-manager register --auto-attach Username: [email protected] Password: The system has been registered with id: 566629db-a4ec-43e1-aa02-9cbaa6177c3f Installed Product Current Status: Product Name: Red Hat Enterprise Linux Server Status: Subscribed", "subscription-manager list --available --all Subscription Name: Red Hat Enterprise Linux Developer Suite Provides: Red Hat Certificate System Pool ID: 7aba89677a6a38fc0bba7dac673f7993 Available: 1", "subscription-manager list --available --all > /root/subscriptions.txt", "subscription-manager attach --pool= 7aba89677a6a38fc0bba7dac673f7993 Successfully attached a subscription for: Red Hat Enterprise Linux Developer Suite", "subscription-manager repos --enable certsys-10-for-rhel-8-x86_64-eus-rpms Repository 'certsys-10-for-rhel-8-x86_64-eus-rpms' is enabled for this system.", "dnf module enable redhat-pki" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/enabling_the_cs_repository
Block Storage Backup Guide
Block Storage Backup Guide Red Hat OpenStack Platform 16.2 Understanding, using, and managing the Block Storage backup service in Red Hat OpenStack Platform OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/block_storage_backup_guide/index
Chapter 35. KafkaClusterTemplate schema reference
Chapter 35. KafkaClusterTemplate schema reference Used in: KafkaClusterSpec The properties of KafkaClusterTemplate are listed below, with the type of each property in parentheses:
statefulset (StatefulSetTemplate): The statefulset property has been deprecated. Support for StatefulSets was removed in AMQ Streams 2.5. This property is ignored. Template for Kafka StatefulSet.
pod (PodTemplate): Template for Kafka Pods.
bootstrapService (InternalServiceTemplate): Template for Kafka bootstrap Service.
brokersService (InternalServiceTemplate): Template for Kafka broker Service.
externalBootstrapService (ResourceTemplate): Template for Kafka external bootstrap Service.
perPodService (ResourceTemplate): Template for Kafka per-pod Services used for access from outside of OpenShift.
externalBootstrapRoute (ResourceTemplate): Template for Kafka external bootstrap Route.
perPodRoute (ResourceTemplate): Template for Kafka per-pod Routes used for access from outside of OpenShift.
externalBootstrapIngress (ResourceTemplate): Template for Kafka external bootstrap Ingress.
perPodIngress (ResourceTemplate): Template for Kafka per-pod Ingress used for access from outside of OpenShift.
persistentVolumeClaim (ResourceTemplate): Template for all Kafka PersistentVolumeClaims.
podDisruptionBudget (PodDisruptionBudgetTemplate): Template for Kafka PodDisruptionBudget.
kafkaContainer (ContainerTemplate): Template for the Kafka broker container.
initContainer (ContainerTemplate): Template for the Kafka init container.
clusterCaCert (ResourceTemplate): Template for Secret with Kafka Cluster certificate public key.
serviceAccount (ResourceTemplate): Template for the Kafka service account.
jmxSecret (ResourceTemplate): Template for Secret of the Kafka Cluster JMX authentication.
clusterRoleBinding (ResourceTemplate): Template for the Kafka ClusterRoleBinding.
podSet (ResourceTemplate): Template for Kafka StrimziPodSet resource.
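As a brief, hedged illustration of how a few of these template properties are used together (the cluster name, label, and annotation values are assumptions, and the listener and storage settings are deliberately minimal rather than production-ready), a Kafka custom resource might set the pod and bootstrapService templates like this:
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
    template:
      pod:
        metadata:
          labels:
            team: platform                     # extra labels propagated to every Kafka broker pod
      bootstrapService:
        metadata:
          annotations:
            example.com/owner: platform-team   # extra annotation on the generated bootstrap Service
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
EOF
The other properties in the table follow the same pattern: each one customizes the metadata or settings of the corresponding generated resource.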
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaClusterTemplate-reference
Appendix E. Audit events
Appendix E. Audit events This appendix contains two parts. The first part, Section E.1, "Required audit events and their examples" , contains a list of required audit events grouped by the requirement ID from the CA Protection Profile V2.1, where each audit event is accompanied by one or more examples. The second part, Section E.2, "Audit Event Descriptions" provides individual audit event and their parameter description and format. Every audit event in the log is accompanied by the following information: The Java identifier of the thread. For example: The time stamp the event occurred at. For example: The log source (14 is SIGNED_AUDIT): The current log level (6 is Security-related events. See 13.1.2 Log Levels (Message Categories) in the Planning, Installation and Deployment Guide (Common Criteria Edition) . For example: The information about the log event (which is log event specific; see Section E.2, "Audit Event Descriptions" for information about each field in a particular log event). For example: E.1. Required audit events and their examples This section contains all required audit events per Common Criteria CA Protection Profile v.2.1. For audit events descriptions, see Section E.2, "Audit Event Descriptions" . FAU_GEN.1 Start-up of the TSF audit functions AUDIT_LOG_STARTUP Test case: start up a CS instance. All administrative actions invoked through the TFS interface CONFIG_CERT_PROFILE Test case: modifying a profile via CLI or console. CERT_PROFILE_APPROVAL Test case: as a CA admin, enabling a profile (e.g. caUserCert ) via console or CLI. Then as a CA agent, approving the profile from the agent portal in the WebUI. CONFIG_OCSP_PROFILE Test case: changing OCSP parameters via console, e.g. includeNextUpdate (make sure you revert changes after each test). CONFIG_CRL_PROFILE Test case: in the console, selecting Certificate Manager > CRL Issuing Points > MasterCRL > Updates > and modifying the Update CRL every field as well as the update race period and update as this update extension fields. CONFIG_AUTH Test case: in the console, selecting Authentication > Authentication Instance > and adding a new authentication instance by entering a new Auth Instance ID. For example, AgentCertAuth and then entering AgentCertAuth2 for the instance name. CONFIG_ROLE(success) Test case: adding an user, e.g. # pki -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ -c SECret.123 -p 31443 -n 'rsa_SubCA_AdminV' ca-user-add Test_UserV --fullName Testuser --password SECret.123. CONFIG_ROLE(Failure) Test case: adding an existing user, e.g. # pki -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ -c SECret.123 -p 31443 -n 'rsa_SubCA_AdminV' ca-user-add Test_UserV --fullName Testuser --password SECret.123. CONFIG_ACL CA Test case: in the console, clicking Access Control List and removing a variable (adding it back afterwards). CONFIG_SIGNED_AUDIT ( FAU_SEL.1 ) CA Test case: disabling, e.g. # pki -U https://rhcs10.example.com:21443 -d /root/.dogtag/pki_ecc_bootstrap/certs_db -c SECret.123 -n ecc_SubCA_AdminV ca-audit-mod --action disable. Test case: reenabling, e.g. # pki -U https://rhcs10.example.com:21443 -d /root/.dogtag/pki_ecc_bootstrap/certs_db -c SECret.123 -n ecc_SubCA_AdminV ca-audit-mod --action enable. KRA Test case: disabling audit using the pki kra-audit-mod command: # pki -p 28443 -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ -c SECret.123 -n "PKI KRA Administrator for RSA-KRA" kra-audit-mod --action disable. 
OCSP Test case: in the console, selecting Log > Log Event Listener Management tab > SignedAudit > Edit/View > and changing the flushInterval value. TKS Test case: disabling audit using the pki tps-audit-mod command, after importing the TKS admin cert into the db: # pki -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ client-cert-import --pkcs12 /opt/pki_rsa/rhcs10-RSA-TKS/tks_admin_cert.p12 --pkcs12-password SECret.123 then # pki -p 24443 -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ -c SECret.123 -n "PKI TKS Administrator for RSA-TKS" tks-audit-mod --action disable. TPS Test case: disabling audit using the pki tps-audit-mod command, after importing the TPS admin cert into the db: # pki -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ client-cert-import --pkcs12 /opt/pki_rsa/rhcs10-RSA-TPS/tks_admin_cert.p12 --pkcs12-password SECret.123 then # pki -p 24443 -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ -c SECret.123 -n "PKI TPS Administrator for RSA-TPS" tps-audit-mod --action disable. CONFIG_DRM Test case: in the console, clicking Configuration > Data Recovery Manager > General Settings > and setting the number of required recovery agents to 2. OCSP_ADD_CA_REQUEST_PROCESSED Success Test case: in the WebUI, clicking Agent Services > Add Certificate Authority > then entering a valid CA certificate in PEM format. Failure Test case: in the WebUI, clicking Agent Services > Add Certificate Authority > then not entering anything valid. OCSP_REMOVE_CA_REQUEST_PROCESSED Test case: in the WebUI, clicking Agent Services > List Certificate Authorities > then clicking Remove CA (Remember to add it back after the test). SECURITY_DOMAIN_UPDATE Operation: Issue_token Test case: checking the CA logs when other subsystems are added to or removed from the security domain. Operation: Add Test case: checking the CA logs when other subsystems are added to or removed from the security domain. CONFIG_SERIAL_NUMBER CA Test case: creating a RootCA subsystem clone. KRA Test case: creating a KRA subsystem clone. FDP_CER_EXT.1 (extended) Certificate generation CERT_REQUEST_PROCESSED (success) Test case: a successful CMC request using SharedSecret (with cmc.popLinkWitnessRequired=true ). FDP_CER_EXT.2 (extended) Linking of certificates to certificate requests Test case: a successful CMC request signed and issued by a CA agent (with cmc.popLinkWitnessRequired=false ): PROFILE_CERT_REQUEST CERT_REQUEST_PROCESSED (Success) Note In the success case, the ReqID field effectively links to the ReqID field of a successful CERT_REQUEST_PROCESSED event where the CertSerialNum field contains the certificate serial number. FFDP_CER_EXT.3 FDP_CER_EXT.2 (Failure) Failed certificate approvals A failed CMC request using SharedSecret (with cmc.popLinkWitnessRequired=true ) with wrong witness.sharedSecret CMC_REQUEST_RECEIVED CERT_REQUEST_PROCESSED (failure) Note The concurrent occurrence of CMC_REQUEST_RECEIVED and CERT_REQUEST_PROCESSED linked the request object with the failure. FIA_X509_EXT.1, FIA_X509_EXT.2 Failed certificate validations; failed authentications ACCESS_SESSION_ESTABLISH (failure) User with revoked cert trying to perform an operation. Test case: # pki -d /root/.dogtag/pki_ecc_bootstrap/certs_db/ -c SECret.123 -p 21443 -n 'ecc_SubCA_AgentR' ca-cert-find. User with expired cert trying to perform an operation. Test case: # pki -d /root/.dogtag/pki_ecc_bootstrap/certs_db/ -c SECret.123 -p 21443 -n 'ecc_SubCA_AgentE' ca-cert-find. CMC enrollment request submitted using a TLS client cert issued by an unknown CA. 
Test case: Adding a client cert issued by unknown CA to nssdb and running # HttpClient /root/.dogtag/pki_ecc_bootstrap/certs_db/HttpClient-cmc-p10.self.cfg. No common encryption algorithm(s). Test case: changing the ciphers in the ECC CA's server.xml to RSA ciphers, then running # pki -d /root/.dogtag/pki_ecc_bootstrap/certs_db/ -c SECret.123 -p 21443 -n 'ecc_SubCA_AdminV' ca-user-find. FIA_UIA_EXT.1 FIA_UAU_EXT.1 Privileged user identification and authentication ACCESS_SESSION_ESTABLISH -> The ClientIP field of the ACCESS_SESSION_ESTABLISH audit event contains the IP address of the client. The SubjectID field of the ACCESS_SESSION_ESTABLISH audit event contains the identity of the entity. CA Test case: # pki -d /root/.dogtag/pki_ecc_bootstrap/certs_db/ -c SECret.123 -p 21443 -n 'ecc_SubCA_AdminV' ca-user-find. TPS Test case: # pki -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ -c SECret.123 -p 25443 -n 'TPS_AdminV' tps-user-find. AUTH The AuthMgr field contains the authentication mechanism in the AUTH audit event. CA Test case: # pki -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ -c SECret.123 -P https -p 31443 -n 'rsa_SubCA_AdminV'. TPS Test case: # pki -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ -c SECret.123 -p 25443 -n 'PKI TPS Administrator for RSA-TPS' tps-user-find. AUTHZ CA Test case: # pki -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ -c SECret.123 -P https -p 31443 -n 'rsa_SubCA_AuditV' ca-audit-file-find. TPS Test case: # pki -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ -c SECret.123 -p 25443 -n 'PKI TPS Administrator for RSA-TPS' tps-user-show tpsadmin. ROLE_ASSUME The Role field of the ROLE_ASSUME audit event contains the name of the role that the user is assuming. CA Test case: logging in to pkiconsole with valid credentials, e.g.: # pkiconsole -d /home/jgenie/.redhat-idm-console -n rsa_SubCA_AdminV. TPS Test case: accessing the TPS Web UI Agent page using the TPS_AgentV certificate. FMT_SMR.2 Modifications to the group of users that are part of a role CONFIG_ROLE See CONFIG_ROLE event above. FPT_FLS.1 Failure with preservation of secure state SELFTESTS_EXECUTION Test case: pointing the OCSP signing certificate to a non-existing certificate. E.g. ca.cert.ocsp_signing.nickname=NHSM-CONN-XC:non-existing certificate . CA CA_AUDIT SELFTESTS.LOG FPT_KST_EXT.2 Private/secret keys are stored by the HSM and the only operations to "access" those keys are through the TSF as signing operations. N/a: Under normal circumstances, HSM authentication is done at RHCS system startup time (server will not start if failed to authenticate), so once the system is up, there is no need to authenticate (no loggable cause of failure). FPT_RCV.1 The fact that a failure or service discontinuity occurred. Resumption of the regular operation. Failure: SELFTESTS_EXECUTION (failure) CA Test case: adding a bogus cert nickname in the config file and restarting the server, e.g.: ca.cert.sslserver.nickname=Bogus Server-Cert . TPS Test case: adding a bogus cert nickname in the config file and restarting the server, e.g.: selftests.plugin.TPSPresence.nickname=bogusCert . Self-test log, see "Configuring Self-Tests" in the Installation Guide. Resumption (e.g. fixing the bogus certificate nickname and restarting): AUDIT_LOG_STARTUP; SELFTESTS_EXECUTION (success) TPS CA FPT_STM.1 Changes to the time. Timestamps in the audit log for each event are provided by the Operational Environment, e.g.: Changes to the time on the OS level are audited. See Section 12.2.3.3, "Displaying time change events" . 
Test steps: following "Enable OS-level audit logs" in the post-installation section (Installation Guide) and executing # ausearch -k rhcs_audit_time_change . To change the timezone, run # timedatectl list-timezones to list the zones then set the desired zone using timedatectl set-timezone . E.g.: Running the time change audit command will result in similar logs: FPT_TUD_EXT.1 Initiation of update. See Section 12.2.3.4, "Displaying package update events" . Test case: assuming some prior package updates were done, use the # ausearch -m SOFTWARE_UPDATE | grep pki command: FTA_SSL.4 The termination of an interactive session. ACCESS_SESSION_TERMINATED CA Test case: # pki -d /root/.dogtag/pki_ecc_bootstrap/certs_db/ -c SECret.123 -p 21443 -n 'ecc_SubCA_AgentV' ca-cert-find. TPS Test case: # pki -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ -c SECret.123 -p 25443 -n 'TPS_AdminV' tps-user-find. FTP_TRP.1 Initiation of the trusted channel. Termination of the trusted channel. Failures of the trusted path functions. ACCESS_SESSION_ESTABLISH CA Test case: adding client certificate issued by unknown CA to nssdb and use it for running # HttpClient /root/.dogtag/pki_ecc_bootstrap/certs_db/HttpClient-cmc-p10.self.cfg. TPS Test case: # pki -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ -c SECret.123 -p 25443 -n 'PKI TPS Administrator for RSA-TPS' tps-token-find. ACCESS_SESSION_TERMINATED CA Test case: # pki -d /root/.dogtag/pki_ecc_bootstrap/certs_db/ -c SECret.123 -p 21443 -n 'ecc_SubCA_AgentV' ca-cert-find. Test case: logging in to the CA Agent page using the role user and closing the browser. TPS Test case: # pki -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ -c SECret.123 -p 25443 -n 'TPS_AdminV' tps-user-find or login to the TPS Agent page using a role user and close the browser. FCS_CKM.1 and FCS_CKM.2 Not available. There are no TOE-related functions where a TOE subsystem generates (or requests the OE to generate) a non-ephemeral key. All system certificates are generated in the same manner as user keys during the installation, before the TOE is running and, thus, before it can audit. FCS_CKM_EXT.4 Not available FCS_COP.1(2) All occurrences of signature generation using a CA signing key. CERT_SIGNING_INFO records CA signing certificate key info at system startup CERT_REQUEST_PROCESSED (success) Test case: See CERT_REQUEST_PROCESSED (success) above. OCSP_SIGNING_INFO records OCSP signing certificate key info at system startup OCSP_GENERATION (success) Test case: following the procedure in TBD "Testing CRL publishing" to run OCSPClient in order to trigger an OCSP response. CRL_SIGNING_INFO records CRL signing certificate key info at system startup FULL_CRL_GENERATION (success) Test case: removing the filters log.instance.SignedAudit.filters.FULL_CRL_GENERATION=(Outcome=Failure) and setting the revocation buffer auths.revocationChecking.bufferSize to 0 and ca.crl.MasterCRL.alwaysUpdate to true . Then revoking a certificate and invoking the UpdateCRL endpoint as per the procedure in "Testing CRL publishing" in the Installation Guide. DELTA_CRL_GENERATION (success) Test case: following all the configuration of the case and enabling the DELTA CRL ( ca.crl.MasterCRL.extension.DeltaCRLIndicator.enable to true ). Then revoking a certificate and invoking the UpdateCRL endpoint as per the procedure in "Testing CRL publishing" in the Installation Guide. Failure in signature generation. 
CERT_REQUEST_PROCESSED (failure) Test case: follow the CMC enrollment procedure described above, but use the profile caCMCUserCert instead of caCMCECUserCert when composing the HttpClient configuration file. OCSP_GENERATION (failure) FCS_HTTPS_EXT.1 and FCS_TLSS_EXT.2 Failure to establish a HTTPS/TLS session. ACCESS_SESSION_ESTABLISH (failure) See FTP_TRP.1 Establishment/termination of a HTTPS/TLS session ACCESS_SESSION_TERMINATED See FIA_UIA_EXT.1 FCS_TLSC_EXT.2 Failure to establish a TLS session. CLIENT_ACCESS_SESSION_ESTABLISH (failure) When Server is not reachable by Client and Session ran into failures. In this scenario, CA acts as a client for KRA during Key Archival and KRA is not reachable by CA. Test case: disabling the KRA and perform a HttpClient request. E.g. following the procedure in "Test key archival" in the Installation Guide. When CA's subsystem cert is revoked and it tried to access KRA. Test case: revoking the CA system certificate and performing the KRA test. KRA Test case: marking the CA's subsystem certificate on-hold and performing the Key archival ( CA KRA ). HttpClient triggers the event in the KRA's audit logging file. CA Test case: revoking the CA System certificate and performing the KRA test. Establishment/termination of a TLS session. CLIENT_ACCESS_SESSION_TERMINATED Test case: attempting to sign into a PKI Console without setting up CA Admin cert. FDP_CRL_EXT.1 Failure to generate a CRL. FULL_CRL_GENERATION (failure) Test case: as an agent, logging in on a CA agent WebUI portal, clicking on Update Revocation List and under Signature algorithm, selecting SHA1withRSA . Counting on SHA1withRSA still being an option in the UI, although no longer allowed. FDP_OCSPG_EXT.1 (extended) Failure to generate certificate status information. OCSP_GENERATION (failure) Test case: setting ca.ocsp=false to disable OCSP service in the CA and run OCSPClient . FIA_AFL.1 The reaching of the threshold for the Unsuccessful Authentication Attempts. The action Taken. The re-enablement of disabled non-administrative accounts. Not available. For password authentication only. Certificate System provides certificate-based authentication only. FIA_CMCS_EXT.1 CMC requests (generated or received) containing certificate requests or revocation requests. CMC responses issued. CMC_SIGNED_REQUEST_SIG_VERIFY Test case: Removing the log.instance.SignedAudit.filters.CMC_SIGNED_REQUEST_SIG_VERIFY parameter from CS.cfg and restarting the CA. Then creating and submitting an agent-signed CMC request, e.g. the procedure for the issuance of user1 's certificate under "Testing CRL publishing" in the Installation Guide. CMC_USER_SIGNED_REQUEST_SIG_VERIFY Successful request: Test case: submitting a CMC (user-signed or self-signed) certificate enrollment or revocation request and verifying the signature. E.g: Removing the log.instance.SignedAudit.filters.CMC_SIGNED_REQUEST_SIG_VERIFY parameter from CS.cfg and restarting the CA. Then creating and submitting an user-signed (shared token) request, e.g. by following 7.8.4.3 "Test the CMC Shared Token" in the Installation Guide. CMC_REQUEST_RECEIVED Successful request: Test case: a successful CMC request using SharedSecret (with cmc.popLinkWitnessRequired=true ). PROOF_OF_POSSESSION (Enrollment Event) Test case: a successful CMC request using SharedSecret (with cmc.popLinkWitnessRequired=true ). PROFILE_CERT_REQUEST (Enrollment Event) Test case: a successful CMC request signed and issued by a CA agent (with cmc.popLinkWitnessRequired=false ). 
CERT_STATUS_CHANGE_REQUEST Success: Test case: following the example in "Testing CRL publishing" of the Installation Guide to issue and then revoke certificate for user2 . Failure: CERT_REQUEST_PROCESSED Successful request: Test case: completing certificate status change (revoked, expired, on-hold, off-hold). CERT_STATUS_CHANGE_REQUEST_PROCESSED Successful request: Test case: completing certificate status change (revoked, expired, on-hold, off-hold). Failed request: Completing a revocation, shrTok not found. Test case: Completing a revocation, cert issuer and request issuer do not match. Test case: Completing a revocation, on-hold cert status update. Test case: following "Testing CRL publishing" in the Installation Guide to revoke a certificate as with user2 in the example, but instead of creating/revoking an actual certificate, just editing the CMC request file so that revRequest.serial is assigned a non-existent serial number, e.g. revRequest.serial=1111111 . CMC_RESPONSE_SENT Enrollment Successful response Test case: creating a CSR by following Section 5.2, "Creating certificate signing requests (CSR)" , then creating a CMCRequest config file by following Section 5.3.1, "The CMC enrollment process" then submitting the request using HttpClient . Revocation Successful revocation Test case: revoking a certificate, for example by following the procedure in Section 6.2.1.1, "Revoking a certificate using CMCRequest " . Failed revocation Revocation does not happen Test case: revoking a non-existing certificate, for example by following the procedure in Section 6.2.1.1, "Revoking a certificate using CMCRequest " . FPT_SKY_EXT.1(2)/OTH AUTHZ Failure: Agent user attempts to retrieve audit log: Test case: # pki -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ -c SECret.123 -p 31443 -n 'rsa_SubCA_AdminV' ca-audit-file-find. Success: Auditor user retrieved audit log: Test case: # pki -d /root/.dogtag/pki_rsa_bootstrap/certs_db/ -c SECret.123 -p 31443 -n 'rsa_SubCA_AuditV' ca-audit-file-find. FTP_ITC.1 Initiation of the trusted channel. Termination of the trusted channel. Failure of the trusted channel functions. See FCS_HTTPS_EXT.1 See FCS_TLSC_EXT.2 E.2. Audit Event Descriptions This section provides descriptions of audit events. For required audit events and their examples, see Section E.1, "Required audit events and their examples" . E.2.1. TOE Environment audit events This section provides the format description of TOE (Target of Evaluation) Environment audit events. E.2.2. Operational Environment audit events For Operational Environment audit events format descriptions, please see https://access.redhat.com/articles/4409591 . In addition, for events relevant to RHCS, please reference "Enable OS-level audit logs" in the Installation Guide.
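As a small convenience for reviewing the events listed above on a running system, individual audit events can be pulled out of the signed audit log with standard tools; a hedged sketch (the instance path is taken from the rhcs10-RSA-SubCA examples in this appendix and must be adjusted for your own instance):
# list all certificate request processing events recorded in the CA signed audit log
grep "AuditEvent=CERT_REQUEST_PROCESSED" /var/lib/pki/rhcs10-RSA-SubCA/logs/ca/signedAudit/ca_audit

# count only the failures for a quick overview
grep "AuditEvent=CERT_REQUEST_PROCESSED" /var/lib/pki/rhcs10-RSA-SubCA/logs/ca/signedAudit/ca_audit | grep -c "Outcome=Failure"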
[ "0.localhost-startStop-1", "[21/May/2023:17:53:00 IST]", "[14]", "[6]", "[AuditEvent=AUDIT_LOG_STARTUP][SubjectID=USDSystemUSD][Outcome=Success] audit function startup", "0.main - [17/Mar/2023:04:31:50 EDT] [14] [6] [AuditEvent=AUDIT_LOG_STARTUP][SubjectID=USDSystemUSD][Outcome=Success] audit function startup", "0.https-jsse-nio-31443-exec-11 - [25/Apr/2023:05:59:44 EDT] [14] [6] [AuditEvent=CONFIG_CERT_PROFILE][SubjectID=caadmin][Outcome=Success][ParamNameValPairs=Scope;;rules+Operation;;OP_ADD+Resource;;caFullCMCUserCertFoobar+class_id;;caEnrollImpl] certificate profile configuration parameter(s) change", "0.https-jsse-nio-31443-exec-1 - [28/Apr/2023:02:13:21 EDT] [14] [6] [AuditEvent=CERT_PROFILE_APPROVAL][SubjectID=rsa_SubCA_AgentV][Outcome=Success][ProfileID=caUserCert][Op=approve] certificate profile approval", "0.https-jsse-nio-32443-exec-20 - [11/May/2023:18:32:39 EDT] [14] [6] [AuditEvent=CONFIG_OCSP_PROFILE][SubjectID=ocspadmin][Outcome=Success][ParamNameValPairs=Scope;;ocspStoresRules+Operation;;OP_MODIFY+Resource;;defStore+includeNextUpdate;;false+byName;;true+implName;;com.netscape.cms.ocsp.DefStore+notFoundAsGood;;true] OCSP profile configuration parameter(s) change", "0.https-jsse-nio-31443-exec-17 - [11/May/2023:18:37:05 EDT] [14] [6] [AuditEvent=CONFIG_CRL_PROFILE][SubjectID=caadmin][Outcome=Success][ParamNameValPairs=Scope;;crl+Operation;;OP_MODIFY+Resource;;MasterCRL+enableCRLUpdates;;true+updateSchema;;1+extendedNextUpdate;;true+alwaysUpdate;;true+enableDailyUpdates;;true+dailyUpdates;;1:00+enableUpdateInterval;;true+autoUpdateInterval;;241+nextUpdateGracePeriod;;1+nextAsThisUpdateExtension;;1] CRL profile configuration parameter(s) change", "0.https-jsse-nio-31443-exec-18 - [11/May/2023:19:13:09 EDT] [14] [6] [AuditEvent=CONFIG_AUTH][SubjectID=caadmin][Outcome=Success][ParamNameValPairs=Scope;;instance+Operation;;OP_ADD+Resource;;AgentCertAuth+implName;;AgentCertAuth] authentication configuration parameter(s) change", "0.https-jsse-nio-31443-exec-24 - [26/Apr/2023:08:29:25 EDT] [14] [6] [AuditEvent=CONFIG_ROLE][SubjectID=rsa_SubCA_AdminV][Outcome=Success][ParamNameValPairs=Scope;;users+Operation;;OP_ADD+Resource;;Test_UserV+password;; **+phone;;<null>+fullname;;Testuser+state;;<null>+userType;;<null>+email;;<null>] role configuration parameter(s) change", "0.https-jsse-nio-31443-exec-5 - [26/Apr/2023:08:31:53 EDT] [14] [6] [AuditEvent=CONFIG_ROLE][SubjectID=rsa_SubCA_AdminV][Outcome=Failure][ParamNameValPairs=Scope;;users+Operation;;OP_ADD+Resource;;Test_UserV+password;; **+phone;;<null>+fullname;;Testuser+state;;<null>+userType;;<null>+email;;<null>] role configuration parameter(s) change", "0.https-jsse-nio-31443-exec-9 - [11/May/2023:18:13:52 EDT] [14] [6] [AuditEvent=CONFIG_ACL][SubjectID=caadmin][Outcome=Success][ParamNameValPairs=Scope;;acls+Operation;;OP_MODIFY+Resource;;certServer.ca.crl+aci;;allow (read,update) group=\"Certificate Manager Agents\"+desc;;Certificate Manager agents may read or update crl+rights;;read] ACL configuration parameter(s) change", "0.https-jsse-jss-nio-21443-exec-5 - [23/Oct/2023:04:38:52 EDT] [14] [6] [AuditEvent=CONFIG_SIGNED_AUDIT][SubjectID=ecc_SubCA_AdminV][Outcome=Success][ParamNameValPairs=Action;;disable] signed audit configuration parameter(s) change", "0.https-jsse-jss-nio-21443-exec-10 - [23/Oct/2023:04:47:23 EDT] [14] [6] [AuditEvent=CONFIG_SIGNED_AUDIT][SubjectID=ecc_SubCA_AdminV][Outcome=Success][ParamNameValPairs=Action;;enable] signed audit configuration parameter(s) change", "0.https-jsse-nio-28443-exec-17 - 
[15/May/2023:18:30:44 EDT] [14] [6] [AuditEvent=CONFIG_SIGNED_AUDIT][SubjectID=kraadmin][Outcome=Success][ParamNameValPairs=Action;;disable] signed audit configuration parameter(s) change", "0.https-jsse-nio-31443-exec-15 - [11/May/2023:19:42:24 EDT] [14] [6] [AuditEvent=CONFIG_SIGNED_AUDIT][SubjectID=caadmin][Outcome=Success][ParamNameValPairs=Scope;;logRule+Operation;;OP_MODIFY+Resource;;SignedAudit+level;;Information+rolloverInterval;;Monthly+flushInterval;;5+mandatory.events;;<null>+bufferSize;;512+maxFileSize;;2000+fileName;;/var/lib/pki/rhcs10-RSA-SubCA/logs/ca/signedAudit/ca_audit+enable;;true+signedAuditCertNickname;;NHSM-CONN-XC:auditSigningCert cert-rhcs10-RSA-SubCA CA+implName;;file+type;;signedAudit+logSigning;;true+events;;ACCESS_SESSION_ESTABLISH,ACCESS_SESSION_TERMINATED,AUDIT_LOG_SIGNING,AUDIT_LOG_STARTUP,AUTH,AUTHORITY_CONFIG,AUTHZ,CERT_PROFILE_APPROVAL,CERT_REQUEST_PROCESSED,CERT_SIGNING_INFO,CERT_STATUS_CHANGE_REQUEST,CERT_STATUS_CHANGE_REQUEST_PROCESSED,CLIENT_ACCESS_SESSION_ESTABLISH,CLIENT_ACCESS_SESSION_TERMINATED,CMC_REQUEST_RECEIVED,CMC_RESPONSE_SENT,CMC_SIGNED_REQUEST_SIG_VERIFY,CMC_USER_SIGNED_REQUEST_SIG_VERIFY,CONFIG_ACL,CONFIG_AUTH,CONFIG_CERT_PROFILE,CONFIG_CRL_PROFILE,CONFIG_ENCRYPTION,CONFIG_ROLE,CONFIG_SERIAL_NUMBER,CONFIG_SIGNED_AUDIT,CONFIG_TRUSTED_PUBLIC_KEY,CRL_SIGNING_INFO,DELTA_CRL_GENERATION,FULL_CRL_GENERATION,KEY_GEN_ASYMMETRIC,LOG_PATH_CHANGE,OCSP_GENERATION,OCSP_SIGNING_INFO,PROFILE_CERT_REQUEST,PROOF_OF_POSSESSION,RANDOM_GENERATION,ROLE_ASSUME,SCHEDULE_CRL_GENERATION,SECURITY_DOMAIN_UPDATE,SELFTESTS_EXECUTION,SERVER_SIDE_KEYGEN_REQUEST,SERVER_SIDE_KEYGEN_REQUEST_PROCESSED] signed audit configuration parameter(s) change", "0.https-jsse-nio-24443-exec-4 - [15/May/2023:18:23:02 EDT] [14] [6] [AuditEvent=CONFIG_SIGNED_AUDIT][SubjectID=tksadmin][Outcome=Success][ParamNameValPairs=Action;;disable] signed audit configuration parameter(s) change", "0.https-jsse-nio-25443-exec-23 - [15/May/2023:18:39:02 EDT] [14] [6] [AuditEvent=CONFIG_SIGNED_AUDIT][SubjectID=tpsadmin][Outcome=Success][ParamNameValPairs=Action;;enable] signed audit configuration parameter(s) change", "0.https-jsse-nio-28443-exec-19 - [20/Jun/2023:19:43:36 EDT] [14] [6] [AuditEvent=CONFIG_DRM][SubjectID=kraadmin][Outcome=Success][ParamNameValPairs=Scope;;general+Operation;;OP_MODIFY+Resource;;RS_ID_CONFIG+noOfRequiredRecoveryAgents;;8] DRM configuration parameter(s) change", "0.https-jsse-jss-nio-22443-exec-8 - [08/Sep/2023:13:01:19 EDT] [14] [6] [AuditEvent=OCSP_ADD_CA_REQUEST_PROCESSED][SubjectID=OCSP_AgentV][Outcome=Success][CASubjectDN=CN=CA Signing Certificate,OU=rhcs10-ECC-SubCA,O=Example-rhcs10-ECC-RootCA] Add CA for OCSP Responder", "0.https-jsse-jss-nio-22443-exec-14 - [08/Sep/2023:13:04:06 EDT] [14] [6] [AuditEvent=OCSP_ADD_CA_REQUEST_PROCESSED][SubjectID=OCSP_AgentV][Outcome=Failure][CASubjectDN=<null>] Add CA for OCSP Responder", "0.https-jsse-jss-nio-22443-exec-21 - [08/Sep/2023:13:06:04 EDT] [14] [6] [AuditEvent=OCSP_REMOVE_CA_REQUEST_PROCESSED][SubjectID=OCSP_AgentV][Outcome=Success][CASubjectDN=CN=CA Signing Certificate,OU=rhcs10-ECC-SubCA,O=Example-rhcs10-ECC-RootCA] Remove CA for OCSP Responder is successful", "0.https-jsse-nio-31443-exec-15 - [28/Apr/2023:09:52:30 EDT] [14] [6] [AuditEvent=SECURITY_DOMAIN_UPDATE][SubjectID=caadmin][Outcome=Success][ParamNameValPairs=operation;;issue_token+token;;2094141712918570861+ip;;10.0.188.59+uid;;caadmin+groupname;;Enterprise TKS Administrators] security domain update", "0.https-jsse-nio-31443-exec-15 - [28/Apr/2023:09:53:10 EDT] 
[14] [6] [AuditEvent=SECURITY_DOMAIN_UPDATE][SubjectID=caadmin][Outcome=Success][ParamNameValPairs=host;;ccrsa-1.rhcs10.example.com+name;;TKS ccrsa-1.rhcs10.example.com 24443+sport;;24443+clone;;false+type;;TKS+operation;;add] security domain update", "0.https-jsse-jss-nio-8443-exec-13 - [18/Sep/2023:08:11:13 EDT] [14] [6] [AuditEvent=CONFIG_SERIAL_NUMBER][SubjectID=caadmin][Outcome=Success][ParamNameValPairs=source;;updateNumberRange+type;;request+beginNumber;;9990001+endNumber;;10000000] serial number range update", "0.https-jsse-jss-nio-21443-exec-8 - [18/Sep/2023:11:04:18 EDT] [14] [6] [AuditEvent=CONFIG_SERIAL_NUMBER][SubjectID=caadmin][Outcome=Success][ParamNameValPairs=source;;updateNumberRange+type;;request+beginNumber;;9990001+endNumber;;10000000] serial number range update", "0.https-jsse-jss-nio-21443-exec-8 - [21/Nov/2023:16:49:57 EST] [14] [6] [AuditEvent=CERT_REQUEST_PROCESSED][SubjectID=USDUnidentifiedUSD][Outcome=Success][ReqID=86][CertSerialNum=229508606] certificate request processed", "0.https-jsse-jss-nio-21443-exec-3 - [21/Nov/2023:16:58:45 EST] [14] [6] [AuditEvent=PROFILE_CERT_REQUEST][SubjectID=caadmin][Outcome=Success][ReqID=87][ProfileID=caECFullCMCUserCert][CertSubject=CN=ecc test ecc-user1,UID=ecc-ecc-user1] certificate request made with certificate profiles", "0.https-jsse-jss-nio-21443-exec-3 - [21/Nov/2023:16:58:45 EST] [14] [6] [AuditEvent=CERT_REQUEST_PROCESSED][SubjectID=caadmin][Outcome=Success][ReqID=87][CertSerialNum=87161545] certificate request processed", "0.https-jsse-jss-nio-21443-exec-9 - [21/Nov/2023:16:57:14 EST] [14] [6] [AuditEvent=CMC_REQUEST_RECEIVED][SubjectID=caadmin][Outcome=Success][CMCRequest=MIILQQYJKoZIhvcNAQcCoIILMjCCCy4CAQMxDzANBglghkgBZQ...] CMC request received", "0.https-jsse-jss-nio-21443-exec-3 - [29/Nov/2023:16:32:16 PST] [14] [6] [AuditEvent=CERT_REQUEST_PROCESSED][SubjectID=USDUnidentifiedUSD][Outcome=Failure][ReqID=USDUnidentifiedUSD][InfoName=rejectReason][InfoValue=Proof-of-Identification Verification Failed after verifyIdentityProofV2] certificate request processed", "0.https-jsse-jss-nio-21443-exec-18 - [10/Jun/2024:08:48:13 EDT] [14] [6] [AuditEvent=ACCESS_SESSION_ESTABLISH][ClientIP=10.0.188.72][ServerIP=10.0.188.72][SubjectID=CN=ecc_SubCA_AgentR,UID=ecc_SubCA_AgentR][CertSerialNum=135246246][IssuerDN=CN=CA Signing Certificate,OU=rhcs10-ECC-SubCA,O=Example-rhcs10-ECC-RootCA][Outcome=Failure][Info=serverAlertSent: CERTIFICATE_REVOKED] access session establish failure", "0.https-jsse-jss-nio-21443-exec-19 - [10/Jun/2024:08:49:54 EDT] [14] [6] [AuditEvent=ACCESS_SESSION_ESTABLISH][ClientIP=10.0.188.72][ServerIP=10.0.188.72][SubjectID=CN=ecc_SubCA_AgentE,UID=ecc_SubCA_AgentE][CertSerialNum=70705426][IssuerDN=CN=CA Signing Certificate,OU=rhcs10-ECC-SubCA,O=Example-rhcs10-ECC-RootCA][Outcome=Failure][Info=serverAlertSent: CERTIFICATE_EXPIRED] access session establish failure", "0.https-jsse-jss-nio-21443-exec-20 - [10/Jun/2024:09:20:34 EDT] [14] [6] [AuditEvent=ACCESS_SESSION_ESTABLISH][ClientIP=10.0.188.72][ServerIP=10.0.188.72][SubjectID=CN=PKI Administrator,[email protected],OU=topology-02-CA,O=topology-02_Foobarmaster.org][CertSerialNum=233456275785924569566051339521314398673][IssuerDN=CN=CA Signing Certificate,OU=topology-02-CA,O=topology-02_Foobarmaster.org][Outcome=Failure][Info=serverAlertSent: UNKNOWN_CA] access session establish failure", "0.https-jsse-jss-nio-21443-exec-1 - [10/Jun/2024:09:30:21 EDT] [14] [6] 
[AuditEvent=ACCESS_SESSION_ESTABLISH][ClientIP=10.0.188.72][ServerIP=10.0.188.72][SubjectID=--][CertSerialNum=--][IssuerDN=--][Outcome=Failure][Info=serverAlertSent: HANDSHAKE_FAILURE] access session establish failure", "0.https-jsse-jss-nio-21443-exec-7 - [10/Jun/2024:10:11:19 EDT] [14] [6] [AuditEvent=ACCESS_SESSION_ESTABLISH][ClientIP=10.0.188.72][ServerIP=10.0.188.72][SubjectID=CN=ecc_SubCA_AdminV,UID=ecc_SubCA_AdminV][CertSerialNum=195854754][IssuerDN=CN=CA Signing Certificate,OU=rhcs10-ECC-SubCA,O=Example-rhcs10-ECC-RootCA][Outcome=Success] access session establish success", "0.https-jsse-jss-nio-25443-exec-1 - [11/Jun/2024:05:56:34 EDT] [14] [6] [AuditEvent=ACCESS_SESSION_ESTABLISH][ClientIP=10.0.188.72][ServerIP=10.0.188.72][SubjectID=CN=TPS_AdminV,UID=TPS_AdminV][CertSerialNum=190384736][IssuerDN=CN=CA Signing Certificate,OU=rhcs10-RSA-SubCA,O=Example-rhcs10-RSA-RootCA][Outcome=Success] access session establish success", "0.https-jsse-nio-31443-exec-9 - [28/Apr/2023:06:16:11 EDT] [14] [6] [AuditEvent=AUTH][SubjectID=rsa_SubCA_AdminV][Outcome=Success][AuthMgr=certUserDBAuthMgr] authentication success", "0.https-jsse-nio-25443-exec-3 - [28/Apr/2023:06:13:46 EDT] [14] [6] [AuditEvent=AUTH][SubjectID=tpsadmin][Outcome=Success][AuthMgr=certUserDBAuthMgr] authentication success", "0.https-jsse-nio-31443-exec-10 - [28/Apr/2023:06:43:30 EDT] [14] [6] [AuditEvent=AUTHZ][SubjectID=rsa_SubCA_AuditV][Outcome=Success][aclResource=certServer.log.content.signedAudit][Op=read][Info=AuditResource.findAuditFiles] authorization success", "0.https-jsse-nio-25443-exec-20 - [28/Apr/2023:06:46:23 EDT] [14] [6] [AuditEvent=AUTHZ][SubjectID=tpsadmin][Outcome=Success][aclResource=certServer.tps.users][Op=execute][Info=UserResource.getUser] authorization success", "0.https-jsse-nio-31443-exec-4 - [28/Apr/2023:06:59:18 EDT] [14] [6] [AuditEvent=ROLE_ASSUME][SubjectID=rsa_SubCA_AdminV][Outcome=Success][Role=Administrators] assume privileged role", "0.https-jsse-jss-nio-25443-exec-25 - [20/Sep/2023:06:32:56 EDT] [14] [6] [AuditEvent=ROLE_ASSUME][SubjectID=TPS_AgentV][Outcome=Success][Role=TPS Agents] assume privileged role", "0.main - [02/May/2023:05:04:54 EDT] [14] [6] [AuditEvent=SELFTESTS_EXECUTION][SubjectID=USDSystemUSD][Outcome=Failure] self tests execution (see selftests.log for details)", "0.main - [01/Dec/2023:12:55:07 EST] [14] [6] [AuditEvent=SELFTESTS_EXECUTION][SubjectID=USDSystemUSD][Outcome=Failure] self tests execution (see selftests.log for details)", "0.main - [01/Dec/2023:12:55:07 EST] [20] [1] SystemCertsVerification: system certs verification failure: Unable to validate certificate NHSM-CONN-XC:non-existing certificate not found: NHSM-CONN-XC:non-existing certificate", "0.main - [01/Dec/2023:12:55:07 EST] [20] [1] SelfTestSubsystem: The CRITICAL self test plugin called selftests.container.instance.SystemCertsVerification running at startup FAILED!", "0.main - [02/May/2023:05:04:54 EDT] [14] [6] [AuditEvent=SELFTESTS_EXECUTION][SubjectID=USDSystemUSD][Outcome=Failure] self tests execution (see selftests.log for details)", "0.main - [02/May/2023:05:11:04 EDT] [14] [6] [AuditEvent=SELFTESTS_EXECUTION][SubjectID=USDSystemUSD][Outcome=Failure] self tests execution (see selftests.log for details)", "0.main - [27/Apr/2023:09:38:36 EDT] [14] [6] [AuditEvent=SELFTESTS_EXECUTION][SubjectID=USDSystemUSD][Outcome=Success] self tests execution (see selftests.log for details)", "0.main - [11/May/2023:02:35:32 EDT] [14] [6] [AuditEvent=AUDIT_LOG_STARTUP][SubjectID=USDSystemUSD][Outcome=Success] audit 
function startup", "0.main - [02/May/2023:05:20:27 EDT] [14] [6] [AuditEvent=AUDIT_LOG_STARTUP][SubjectID=USDSystemUSD][Outcome=Success] audit function startup", "0.main - [25/Apr/2023:02:30:14 EDT] [14] [6] [AuditEvent=SELFTESTS_EXECUTION][SubjectID=USDSystemUSD][Outcome=Success] self tests execution (see selftests.log for details)", "date Wed Nov 29 17:31:28 PST 2023", "timedatectl set-timezone America/Los_Angeles", "ausearch -k rhcs_audit_time_change time->Tue Nov 21 17:05:52 2023 type=PROCTITLE msg=audit(1700615152.687:92865): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 type=SYSCALL msg=audit(1700615152.687:92865): arch=c000003e syscall=44 success=yes exit=1080 a0=3 a1=7ffcba231970 a2=438 a3=0 items=0 ppid=1060472 pid=1060487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=\"auditctl\" exe=\"/usr/sbin/auditctl\" subj=system_u:system_r:unconfined_service_t:s0 key=(null) type=CONFIG_CHANGE msg=audit(1700615152.687:92865): auid=4294967295 ses=4294967295 subj=system_u:system_r:unconfined_service_t:s0 op=add_rule key=\"rhcs_audit_time_change\" list=4 res=1 ---- time->Tue Nov 21 17:05:52 2023 type=PROCTITLE msg=audit(1700615152.687:92866): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 type=SOCKADDR msg=audit(1700615152.687:92866): saddr=100000000000000000000000 type=SYSCALL msg=audit(1700615152.687:92866): arch=c000003e syscall=44 success=yes exit=1080 a0=3 a1=7ffcba231970 a2=438 a3=0 items=0 ppid=1060472 pid=1060487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=\"auditctl\" exe=\"/usr/sbin/auditctl\" subj=system_u:system_r:unconfined_service_t:s0 key=(null) type=CONFIG_CHANGE msg=audit(1700615152.687:92866): auid=4294967295 ses=4294967295 subj=system_u:system_r:unconfined_service_t:s0 op=add_rule key=\"rhcs_audit_time_change\" list=4 res=1 ---- time->Tue Nov 21 17:05:52 2023 type=PROCTITLE msg=audit(1700615152.687:92867): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 type=SOCKADDR msg=audit(1700615152.687:92867): saddr=100000000000000000000000 type=SYSCALL msg=audit(1700615152.687:92867): arch=c000003e syscall=44 success=yes exit=1080 a0=3 a1=7ffcba231970 a2=438 a3=0 items=0 ppid=1060472 pid=1060487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=\"auditctl\" exe=\"/usr/sbin/auditctl\" subj=system_u:system_r:unconfined_service_t:s0 key=(null) type=CONFIG_CHANGE msg=audit(1700615152.687:92867): auid=4294967295 ses=4294967295 subj=system_u:system_r:unconfined_service_t:s0 op=add_rule key=\"rhcs_audit_time_change\" list=4 res=1 ---- time->Tue Nov 21 17:05:52 2023 type=PROCTITLE msg=audit(1700615152.687:92868): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 type=SOCKADDR msg=audit(1700615152.687:92868): saddr=100000000000000000000000 type=SYSCALL msg=audit(1700615152.687:92868): arch=c000003e syscall=44 success=yes exit=1080 a0=3 a1=7ffcba231970 a2=438 a3=0 items=0 ppid=1060472 pid=1060487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=\"auditctl\" exe=\"/usr/sbin/auditctl\" subj=system_u:system_r:unconfined_service_t:s0 key=(null) type=CONFIG_CHANGE msg=audit(1700615152.687:92868): auid=4294967295 ses=4294967295 subj=system_u:system_r:unconfined_service_t:s0 op=add_rule 
key=\"rhcs_audit_time_change\" list=4 res=1 ---- <skipping over the \"op=add_rule key=\"rhcs_audit_time_change\"\" events> ---- time->Tue Nov 21 17:28:14 2023 type=PROCTITLE msg=audit(1700616494.023:92874): proctitle=\"/usr/sbin/timedatex\" type=PATH msg=audit(1700616494.023:92874): item=4 name=\"/etc/localtime\" inode=20037025 dev=fc:03 mode=0120777 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:locale_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=PATH msg=audit(1700616494.023:92874): item=3 name=\"/etc/localtime\" inode=16798494 dev=fc:03 mode=0120777 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:locale_t:s0 nametype=DELETE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=PATH msg=audit(1700616494.023:92874): item=2 name=\"/etc/localtime.855775472\" inode=20037025 dev=fc:03 mode=0120777 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:locale_t:s0 nametype=DELETE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=PATH msg=audit(1700616494.023:92874): item=1 name=\"/etc/\" inode=16798305 dev=fc:03 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:etc_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=PATH msg=audit(1700616494.023:92874): item=0 name=\"/etc/\" inode=16798305 dev=fc:03 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:etc_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=CWD msg=audit(1700616494.023:92874): cwd=\"/\" type=SYSCALL msg=audit(1700616494.023:92874): arch=c000003e syscall=82 success=yes exit=0 a0=7ffcb72d7a20 a1=55b57b9dcdaf a2=55b57d40cc00 a3=0 items=5 ppid=1 pid=1060749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=\"timedatex\" exe=\"/usr/sbin/timedatex\" subj=system_u:system_r:timedatex_t:s0 key=\"rhcs_audit_time_change\" ---- time->Tue Nov 21 17:28:14 2023 type=PROCTITLE msg=audit(1700616494.024:92875): proctitle=\"/usr/sbin/timedatex\" type=SYSCALL msg=audit(1700616494.024:92875): arch=c000003e syscall=164 success=yes exit=0 a0=0 a1=7ffcb72d6a08 a2=fffffffffffffe1f a3=2ce33e6c02ce33e7 items=0 ppid=1 pid=1060749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm=\"timedatex\" exe=\"/usr/sbin/timedatex\" subj=system_u:system_r:timedatex_t:s0 key=\"rhcs_audit_time_change\"", "ausearch -m SOFTWARE_UPDATE | grep pki 30 type=SOFTWARE_UPDATE msg=audit(1682403837.928:1289): pid=5040 uid=0 auid=0 ses=5 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=install sw=\"pki-servlet-engine-1:9.0.30-3.module+el8.5.0+11388+9e95fe00.noarch\" sw_type=rpm key_enforce=0 gpg_res=0 root_dir=\"/\" comm=\"dnf\" exe=\"/usr/libexec/plat form-python3.6\" hostname=? addr=? terminal=? res=success' 31 type=SOFTWARE_UPDATE msg=audit(1682403837.928:1290): pid=5040 uid=0 auid=0 ses=5 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=install sw=\"tomcatjss-7.7.2-1.module+el8pki+14677+1ef79a68.noarch\" sw_type=rpm key_enforce=0 gpg_res=0 root_dir=\"/\" comm=\"dnf\" exe=\"/usr/libexec/platform-python3. 6\" hostname=? addr=? terminal=? res=success' 32 type=SOFTWARE_UPDATE msg=audit(1682403837.928:1291): pid=5040 uid=0 auid=0 ses=5 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=install sw=\"redhat-pki-server-10.13.5-2.module+el8pki+17707+69a21d82.noarch\" sw_type=rpm key_enforce=0 gpg_res=0 root_dir=\"/\" comm=\"dnf\" exe=\"/usr/libexec/platfor m-python3.6\" hostname=? addr=? terminal=? 
res=success' 33 type=SOFTWARE_UPDATE msg=audit(1682403837.928:1292): pid=5040 uid=0 auid=0 ses=5 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=install sw=\"redhat-pki-acme-10.13.5-2.module+el8pki+17707+69a21d82.noarch\" sw_type=rpm key_enforce=0 gpg_res=0 root_dir=\"/\" comm=\"dnf\" exe=\"/usr/libexec/platform- python3.6\" hostname=? addr=? terminal=? res=success' 34 type=SOFTWARE_UPDATE msg=audit(1682403837.928:1293): pid=5040 uid=0 auid=0 ses=5 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=install sw=\"redhat-pki-ca-10.13.5-2.module+el8pki+17707+69a21d82.noarch\" sw_type=rpm key_enforce=0 gpg_res=0 root_dir=\"/\" comm=\"dnf\" exe=\"/usr/libexec/platform-py thon3.6\" hostname=? addr=? terminal=? res=success' 35 type=SOFTWARE_UPDATE msg=audit(1682403837.928:1294): pid=5040 uid=0 auid=0 ses=5 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=install sw=\"redhat-pki-est-10.13.5-2.module+el8pki+17707+69a21d82.noarch\" sw_type=rpm key_enforce=0 gpg_res=0 root_dir=\"/\" comm=\"dnf\" exe=\"/usr/libexec/platform-p ython3.6\" hostname=? addr=? terminal=? res=success' 36 type=SOFTWARE_UPDATE msg=audit(1682403837.928:1295): pid=5040 uid=0 auid=0 ses=5 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=install sw=\"redhat-pki-kra-10.13.5-2.module+el8pki+17707+69a21d82.noarch\" sw_type=rpm key_enforce=0 gpg_res=0 root_dir=\"/\" comm=\"dnf\" exe=\"/usr/libexec/platform-p ython3.6\" hostname=? addr=? terminal=? res=success' 37 type=SOFTWARE_UPDATE msg=audit(1682403837.928:1296): pid=5040 uid=0 auid=0 ses=5 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=install sw=\"redhat-pki-ocsp-10.13.5-2.module+el8pki+17707+69a21d82.noarch\" sw_type=rpm key_enforce=0 gpg_res=0 root_dir=\"/\" comm=\"dnf\" exe=\"/usr/libexec/platform- python3.6\" hostname=? addr=? terminal=? res=success' 38 type=SOFTWARE_UPDATE msg=audit(1682403837.928:1297): pid=5040 uid=0 auid=0 ses=5 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=install sw=\"redhat-pki-tks-10.13.5-2.module+el8pki+17707+69a21d82.noarch\" sw_type=rpm key_enforce=0 gpg_res=0 root_dir=\"/\" comm=\"dnf\" exe=\"/usr/libexec/platform-p ython3.6\" hostname=? addr=? terminal=? res=success' 39 type=SOFTWARE_UPDATE msg=audit(1682403837.928:1298): pid=5040 uid=0 auid=0 ses=5 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=install sw=\"redhat-pki-tps-10.13.5-2.module+el8pki+17707+69a21d82.x86_64\" sw_type=rpm key_enforce=0 gpg_res=0 root_dir=\"/\" comm=\"dnf\" exe=\"/usr/libexec/platform-p ython3.6\" hostname=? addr=? terminal=? res=success' 40 type=SOFTWARE_UPDATE msg=audit(1682403837.928:1299): pid=5040 uid=0 auid=0 ses=5 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=install sw=\"redhat-pki-10.13.5-2.module+el8pki+17707+69a21d82.x86_64\" sw_type=rpm key_enforce=0 gpg_res=0 root_dir=\"/\" comm=\"dnf\" exe=\"/usr/libexec/platform-pytho n3.6\" hostname=? addr=? terminal=? 
res=success", "0.https-jsse-jss-nio-21443-exec-5 - [10/Jun/2024:13:18:54 EDT] [14] [6] [AuditEvent=ACCESS_SESSION_TERMINATED][ClientIP=10.0.188.72][ServerIP=10.0.188.72][SubjectID=CN=ecc_SubCA_AgentV,UID=ecc_SubCA_AgentV][CertSerialNum=72118278][IssuerDN=CN=CA Signing Certificate,OU=rhcs10-ECC-SubCA,O=Example-rhcs10-ECC-RootCA][Outcome=Success][Info=serverAlertSent: CLOSE_NOTIFY] access session terminated", "0.https-jsse-jss-nio-25443-exec-6 - [11/Jun/2024:05:56:36 EDT] [14] [6] [AuditEvent=ACCESS_SESSION_TERMINATED][ClientIP=10.0.188.72][ServerIP=10.0.188.72][SubjectID=CN=TPS_AdminV,UID=TPS_AdminV][CertSerialNum=190384736][IssuerDN=CN=CA Signing Certificate,OU=rhcs10-RSA-SubCA,O=Example-rhcs10-RSA-RootCA][Outcome=Success][Info=serverAlertSent: CLOSE_NOTIFY] access session terminated", "0.https-jsse-jss-nio-21443-exec-20 - [10/Jun/2024:09:20:34 EDT] [14] [6] [AuditEvent=ACCESS_SESSION_ESTABLISH][ClientIP=10.0.188.72][ServerIP=10.0.188.72][SubjectID=CN=PKI Administrator,[email protected],OU=topology-02-CA,O=topology-02_Foobarmaster.org][CertSerialNum=233456275785924569566051339521314398673][IssuerDN=CN=CA Signing Certificate,OU=topology-02-CA,O=topology-02_Foobarmaster.org][Outcome=Failure][Info=serverAlertSent: UNKNOWN_CA] access session establish failure", "0.https-jsse-jss-nio-25443-exec-7 - [11/Jun/2024:06:00:52 EDT] [14] [6] [AuditEvent=ACCESS_SESSION_ESTABLISH][ClientIP=10.0.188.72][ServerIP=10.0.188.72][SubjectID=CN=PKI Administrator,[email protected],OU=rhcs10-RSA-TPS,O=Example-SubCA][CertSerialNum=32899047][IssuerDN=CN=CA Signing Certificate,OU=rhcs10-RSA-SubCA,O=Example-rhcs10-RSA-RootCA][Outcome=Success] access session establish success", "0.https-jsse-jss-nio-21443-exec-7 - [10/Jun/2024:10:36:08 EDT] [14] [6] [AuditEvent=ACCESS_SESSION_TERMINATED][ClientIP=10.0.188.72][ServerIP=10.0.188.72][SubjectID=CN=ecc_SubCA_AgentV,UID=ecc_SubCA_AgentV][CertSerialNum=72118278][IssuerDN=CN=CA Signing Certificate,OU=rhcs10-ECC-SubCA,O=Example-rhcs10-ECC-RootCA][Outcome=Success][Info=serverAlertSent: CLOSE_NOTIFY] access session terminated", "0.https-jsse-jss-nio-21443-exec-11 - [10/Jun/2024:13:35:09 EDT] [14] [6] [AuditEvent=ACCESS_SESSION_TERMINATED][ClientIP=10.0.188.72][ServerIP=10.0.188.72][SubjectID=CN=ecc_SubCA_AgentV,UID=ecc_SubCA_AgentV][CertSerialNum=72118278][IssuerDN=CN=CA Signing Certificate,OU=rhcs10-ECC-SubCA,O=Example-rhcs10-ECC-RootCA][Outcome=Success][Info=serverAlertSent: CLOSE_NOTIFY] access session terminated", "0.https-jsse-jss-nio-25443-exec-20 - [11/Jun/2024:06:03:06 EDT] [14] [6] [AuditEvent=ACCESS_SESSION_TERMINATED][ClientIP=10.0.188.72][ServerIP=10.0.188.72][SubjectID=CN=TPS_AdminV,UID=TPS_AdminV][CertSerialNum=190384736][IssuerDN=CN=CA Signing Certificate,OU=rhcs10-RSA-SubCA,O=Example-rhcs10-RSA-RootCA][Outcome=Success][Info=serverAlertSent: CLOSE_NOTIFY] access session terminated", "0.https-jsse-nio-8443-exec-5 - [25/Apr/2023:02:26:34 EDT] [14] [6] [AuditEvent=CERT_SIGNING_INFO][SubjectID=USDSystemUSD][Outcome=Success][SKI=96:44:A6:53:DB:AF:3D:C3:3D:A0:00:0A:84:CB:6E:0E:B5:3E:4E:10] certificate signing info", "0.https-jsse-nio-8443-exec-3 - [25/Apr/2023:02:28:17 EDT] [14] [6] [AuditEvent=CERT_REQUEST_PROCESSED][SubjectID=caadmin][Outcome=Success][ReqID=7][CertSerialNum=165675596] certificate request processed", "0.main - [25/Apr/2023:02:28:39 EDT] [14] [6] [AuditEvent=OCSP_SIGNING_INFO][SubjectID=USDSystemUSD][Outcome=Success][SKI=A3:AB:71:4C:E0:C8:8B:E4:6D:08:5B:10:EC:F3:E4:6B:F3:70:EB:57] OCSP signing info", "0.http-nio-32080-exec-1 - [25/Apr/2023:06:07:29 EDT] 
[14] [6] [AuditEvent=OCSP_GENERATION][SubjectID=USDNonRoleUserUSD][Outcome=Success] OCSP response generation", "0.main - [25/Apr/2023:05:55:22 EDT] [14] [6] [AuditEvent=CRL_SIGNING_INFO][SubjectID=USDSystemUSD][Outcome=Success][SKI=2C:E1:7C:DB:B0:6E:62:36:70:67:B7:BF:19:80:4C:D0:8F:B5:80:02] CRL signing info", "0.Thread-17 - [04/May/2023:05:46:26 EDT] [14] [6] [AuditEvent=FULL_CRL_GENERATION][SubjectID=USDUnidentifiedUSD][Outcome=Success][CRLnum=62] Full CRL generation", "0.Thread-17 - [04/May/2023:06:29:03 EDT] [14] [6] [AuditEvent=DELTA_CRL_GENERATION][SubjectID=USDUnidentifiedUSD][Outcome=Success][CRLnum=63] Delta CRL generation", "0.https-jsse-jss-nio-21443-exec-18 - [14/Sep/2023:13:44:35 EDT] [14] [6] [AuditEvent=CERT_REQUEST_PROCESSED][SubjectID=USDNonRoleUserUSD][Outcome=Failure][ReqID=71][InfoName=rejectReason][InfoValue=Request 71 Rejected - Key Type RSA Not Matched] certificate request processed", "0.http-nio-32080-exec-15 - [25/Apr/2023:02:47:47 EDT] [14] [6] [AuditEvent=OCSP_GENERATION][SubjectID=USDNonRoleUserUSD][Outcome=Failure][FailureReason=End-of-file reached while decoding ASN.1 header] OCSP response generation", "0.https-jsse-jss-nio-21443-exec-15 - [10/Jun/2024:12:29:16 EDT] [14] [6] [AuditEvent=CLIENT_ACCESS_SESSION_ESTABLISH][ClientHost=10.0.188.72][ServerHost=rhcs10.example.com][ServerPort=23443][SubjectID=SYSTEM][Outcome=Failure][Info=send:java.io.IOException: Socket has been closed, and cannot be reused.] access session failed to establish when Certificate System acts as client", "0.https-jsse-jss-nio-23443-exec-1 - [10/Jun/2024:12:35:25 EDT] [14] [6] [AuditEvent=ACCESS_SESSION_ESTABLISH][ClientIP=10.0.188.72][ServerIP=10.0.188.72][SubjectID=CN=Subsystem Certificate,OU=rhcs10-ECC-SubCA,O=Example-rhcs10-ECC-RootCA][CertSerialNum=208481924][IssuerDN=CN=CA Signing Certificate,OU=rhcs10-ECC-SubCA,O=Example-rhcs10-ECC-RootCA][Outcome=Failure][Info=serverAlertSent: CERTIFICATE_REVOKED] access session establish failure", "0.https-jsse-jss-nio-21443-exec-3 - [10/Jun/2024:12:35:25 EDT] [14] [6] [AuditEvent=CLIENT_ACCESS_SESSION_ESTABLISH][ClientHost=10.0.188.72][ServerHost=rhcs10.example.com][ServerPort=23443][SubjectID=SYSTEM][Outcome=Failure][Info=send:java.io.IOException: Socket has been closed, and cannot be reused.] 
access session failed to establish when Certificate System acts as client", "0.ConnectAsync - [10/Jun/2024:12:35:25 EDT] [14] [6] [AuditEvent=CLIENT_ACCESS_SESSION_TERMINATED][ClientHost=10.0.188.72][ServerHost=10.0.188.72][ServerPort=23443][SubjectID=CN=rhcs10.example.com,OU=rhcs10-ECC-KRA,O=Example-SubCA][CertSerialNum=42383494][IssuerDN=CN=CA Signing Certificate,OU=rhcs10-ECC-SubCA,O=Example-rhcs10-ECC-RootCA][Outcome=Success][Info=clientAlertReceived: CERTIFICATE_REVOKED] access session terminated when Certificate System acts as client", "0.https-jsse-jss-nio-31443-exec-9 - [11/Jun/2024:09:31:47 EDT] [14] [6] [AuditEvent=CLIENT_ACCESS_SESSION_TERMINATED][ClientHost=10.0.188.72][ServerHost=10.0.188.64][ServerPort=7636][SubjectID=CN=rhds11-5.example.com][CertSerialNum=119813240][IssuerDN=CN=CA Signing Certificate,OU=rhcs10-RSA-SubCA,O=Example-rhcs10-RSA-RootCA][Outcome=Success][Info=clientAlertSent: CLOSE_NOTIFY] access session terminated when Certificate System acts as client", "0.CRLIssuingPoint-MasterCRL - [11/May/2023:00:09:42 EDT] [14] [6] [AuditEvent=FULL_CRL_GENERATION][SubjectID=USDUnidentifiedUSD][Outcome=Failure][FailureReason=Signing algorithm not supported: SHA1withRSA: Unable to create signing context: (-8011) Unknown error] Full CRL generation", "0.http-nio-31080-exec-1 - [30/Nov/2023:18:50:51 EST] [14] [6] [AuditEvent=OCSP_GENERATION][SubjectID=USDNonRoleUserUSD][Outcome=Failure][FailureReason=OCSP service disabled] OCSP response generation", "0.https-jsse-jss-nio-21443-exec-3 - [25/Nov/2023:16:47:47 PST] [14] [6] [AuditEvent=CMC_SIGNED_REQUEST_SIG_VERIFY][SubjectID=CN=PKI Administrator,[email protected],OU=rhcs10-ECC-SubCA,O=Example-rhcs10-ECC-RootCA][Outcome=Success][ReqType=enrollment][CertSubject=CN=ecc test ecc-user1,UID=ecc-ecc-user1][SignerInfo=CN=PKI Administrator,[email protected],OU=rhcs10-ECC-SubCA,O=Example-rhcs10-ECC-RootCA] agent signed CMC request signature verification", "0.https-jsse-jss-nio-21443-exec-6 - [25/Nov/2023:17:02:13 PST] [14] [6] [AuditEvent=CMC_USER_SIGNED_REQUEST_SIG_VERIFY][SubjectID=CN=PKI Administrator,[email protected],OU=rhcs10-ECC-SubCA,O=Example-rhcs10-ECC-RootCA][Outcome=Success][ReqType=enrollment][CertSubject=CN=eccFooUser123,UID=eccFooUser123,OU=self-signed][SignerInfo=USDUnidentifiedUSD] User signed CMC request signature verification success", "0.https-jsse-jss-nio-21443-exec-8 - [21/Nov/2023:16:49:57 EST] [14] [6] [AuditEvent=CMC_REQUEST_RECEIVED][SubjectID=USDUnidentifiedUSD][Outcome=Success][CMCRequest=MIIDYgYJKoZIhvcNAQcCoIIDUzCCA08CAQMxDzANBglghkgBZQMEAgEFA...] 
CMC request received", "0.https-jsse-jss-nio-21443-exec-8 - [21/Nov/2023:16:49:57 EST] [14] [6] [AuditEvent=PROOF_OF_POSSESSION][SubjectID=eccFooUser123][Outcome=Success][Info=method=EnrollProfile: fillTaggedRequest: ] proof of possession", "0.https-jsse-jss-nio-21443-exec-3 - [21/Nov/2023:16:58:45 EST] [14] [6] [AuditEvent=PROFILE_CERT_REQUEST][SubjectID=caadmin][Outcome=Success][ReqID=87][ProfileID=caECFullCMCUserCert][CertSubject=CN=ecc test ecc-user1,UID=ecc-ecc-user1] certificate request made with certificate profiles", "[AuditEvent=CERT_STATUS_CHANGE_REQUEST_PROCESSED][SubjectID=CN=ecc test ecc-user2,UID=ecc-ecc-user2][Outcome=Success][ReqID=14][CertSerialNum=15390937][RequestType=revoke][RevokeReasonNum=Unspecified][Approval=complete] certificate status change request processed", "0.https-jsse-nio-31443-exec-5 - [09/May/2023:16:42:56 EDT] [14] [6] [AuditEvent=CERT_STATUS_CHANGE_REQUEST][SubjectID=caadmin][Outcome=Failure][ReqID=<null>][CertSerialNum=0x2c192ac][RequestType=on-hold] certificate revocation/unrevocation request made", "0.https-jsse-nio-31443-exec-24 - [28/Apr/2023:09:58:07 EDT] [14] [6] [AuditEvent=CERT_REQUEST_PROCESSED][SubjectID=caadmin][Outcome=Success][ReqID=67][CertSerialNum=86198753] certificate request processed", "0.https-jsse-nio-31443-exec-14 - [09/May/2023:17:29:35 EDT] [14] [6] [AuditEvent=CERT_STATUS_CHANGE_REQUEST_PROCESSED][SubjectID=rsa_SubCA_AgentV][Outcome=Success][ReqID=80][CertSerialNum=0x2c192ac][RequestType=<null>][RevokeReasonNum=6][Approval=complete] certificate status change request processed", "0.http-bio-20443-exec-14 - [29/Jan/2019:07:15:27 EST] [14] [6] [AuditEvent=CERT_STATUS_CHANGE_REQUEST_PROCESSED][SubjectID=<null>][Outcome=Failure][ReqID=<null>][CertSerialNum=20][RequestType=revoke][RevokeReasonNum=Certificate_Hold][Approval=rejected][Info=CMCOutputTemplate: SharedSecret.getSharedToken(BigInteger serial): shrTok not found in metaInfo] certificate status change request processed", "0.http-bio-20443-exec-20 - [29/Jan/2019:07:30:41 EST] [14] [6] [AuditEvent=CERT_STATUS_CHANGE_REQUEST_PROCESSED][SubjectID=UID=user1a,OU=People,DC=rhel76,DC=test][Outcome=Failure][ReqID=<null>][CertSerialNum=20][RequestType=revoke][RevokeReasonNum=Certificate_Hold][Approval=rejected][Info= certificate issuer DN and revocation request issuer DN do not match] certificate status change request processed", "0.https-jsse-jss-nio-21443-exec-12 - [27/Nov/2023:11:34:53 PST] [14] [6] [AuditEvent=CERT_STATUS_CHANGE_REQUEST_PROCESSED][SubjectID=<null>][Outcome=Failure][ReqID=<null>][CertSerialNum=1111111][RequestType=revoke][RevokeReasonNum=Unspecified][Approval=rejected][Info= The certificate is not found] certificate status change request processed", "0.https-jsse-nio-31443-exec-8 - [01/May/2023:23:37:50 EDT] [14] [6] [AuditEvent=CMC_RESPONSE_SENT][SubjectID=FooUser123][Outcome=Success][CMCResponse=MIIM+wYJkwWSE/] CMC response sent", "0.http-bio-20443-exec-9 - [29/Jan/2019:07:43:36 EST] [14] [6] [AuditEvent=CMC_RESPONSE_SENT][SubjectID=USDUnidentifiedUSD][Outcome=Success][CMCResponse=MIIExgYJKoZ...] CMC response sent", "0.https-jsse-nio-31443-exec-8 - [01/May/2023:23:37:50 EDT] [14] [6] [AuditEvent=CMC_RESPONSE_SENT][SubjectID=FooUser123][Outcome=Success][CMCResponse=MIIM+wYJKoZIh...] 
CMC response sent", "0.https-jsse-nio-31443-exec-24 - [03/May/2023:08:30:38 EDT] [14] [6] [AuditEvent=AUTHZ][SubjectID=rsa_SubCA_AdminV][Outcome=Failure][aclResource=certServer.log.content.signedAudit][Op=read][Info=Authorization Error] authorization failure", "0.https-jsse-nio-31443-exec-5 - [03/May/2023:08:31:11 EDT] [14] [6] [AuditEvent=AUTHZ][SubjectID=rsa_SubCA_AuditV][Outcome=Success][aclResource=certServer.log.content.signedAudit][Op=read][Info=AuditResource.findAuditFiles] authorization success", "####################### SIGNED AUDIT EVENTS ############################# Common fields: - Outcome: \"Success\" or \"Failure\" - SubjectID: The UID of the user responsible for the operation \"USDSystemUSD\" or \"SYSTEM\" if system-initiated operation (e.g. log signing). # ######################################################################### Required Audit Events # Event: ACCESS_SESSION_ESTABLISH with [Outcome=Failure] Description: This event is used when access session failed to establish. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - ClientIP: Client IP address. - ServerIP: Server IP address. - SubjectID: Client certificate subject DN. - Outcome: Failure - Info: Failure reason. # LOGGING_SIGNED_AUDIT_ACCESS_SESSION_ESTABLISH_FAILURE= <type=ACCESS_SESSION_ESTABLISH>:[AuditEvent=ACCESS_SESSION_ESTABLISH]{0} access session establish failure # Event: ACCESS_SESSION_ESTABLISH with [Outcome=Success] Description: This event is used when access session was established successfully. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - ClientIP: Client IP address. - ServerIP: Server IP address. - SubjectID: Client certificate subject DN. - Outcome: Success # LOGGING_SIGNED_AUDIT_ACCESS_SESSION_ESTABLISH_SUCCESS= <type=ACCESS_SESSION_ESTABLISH>:[AuditEvent=ACCESS_SESSION_ESTABLISH]{0} access session establish success # Event: ACCESS_SESSION_TERMINATED Description: This event is used when access session was terminated. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - ClientIP: Client IP address. - ServerIP: Server IP address. - SubjectID: Client certificate subject DN. - Info: The TLS Alert received from NSS - Outcome: Success - Info: The TLS Alert received from NSS # LOGGING_SIGNED_AUDIT_ACCESS_SESSION_TERMINATED= <type=ACCESS_SESSION_TERMINATED>:[AuditEvent=ACCESS_SESSION_TERMINATED]{0} access session terminated # Event: AUDIT_LOG_SIGNING Description: This event is used when a signature on the audit log is generated (same as \"flush\" time). Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: Predefined to be \"USDSystemUSD\" because this operation associates with no user. - Outcome: Success - sig: The base-64 encoded signature of the buffer just flushed. # LOGGING_SIGNED_AUDIT_AUDIT_LOG_SIGNING_3=[AuditEvent=AUDIT_LOG_SIGNING][SubjectID={0}][Outcome={1}] signature of audit buffer just flushed: sig: {2} # Event: AUDIT_LOG_STARTUP Description: This event is used at audit function startup. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: USDSystemUSD - Outcome: # LOGGING_SIGNED_AUDIT_AUDIT_LOG_STARTUP_2=<type=AUDIT_LOG_STARTUP>:[AuditEvent=AUDIT_LOG_STARTUP][SubjectID={0}][Outcome={1}] audit function startup # Event: AUTH with [Outcome=Failure] Description: This event is used when authentication fails. In case of SSL-client auth, only webserver env can pick up the SSL violation. 
CS authMgr can pick up certificate mismatch, so this event is used. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: Failure (obviously, if authentication failed, you won't have a valid SubjectID, so in this case, SubjectID should be USDUnidentifiedUSD) - AuthMgr: The authentication manager instance name that did this authentication. - AttemptedCred: The credential attempted and failed. # LOGGING_SIGNED_AUDIT_AUTH_FAIL=<type=AUTH>:[AuditEvent=AUTH]{0} authentication failure # Event: AUTH with [Outcome=Success] Description: This event is used when authentication succeeded. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: id of user who has been authenticated - Outcome: Success - AuthMgr: The authentication manager instance name that did this authentication. # LOGGING_SIGNED_AUDIT_AUTH_SUCCESS=<type=AUTH>:[AuditEvent=AUTH]{0} authentication success # Event: AUTHZ with [Outcome=Failure] Description: This event is used when authorization has failed. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: id of user who has failed to be authorized for an action - Outcome: Failure - aclResource: The ACL resource ID as defined in ACL resource list. - Op: One of the operations as defined with the ACL statement e.g. \"read\" for an ACL statement containing \"(read,write)\". - Info: # LOGGING_SIGNED_AUDIT_AUTHZ_FAIL=<type=AUTHZ>:[AuditEvent=AUTHZ]{0} authorization failure # Event: AUTHZ with [Outcome=Success] Description: This event is used when authorization is successful. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: id of user who has been authorized for an action - Outcome: Success - aclResource: The ACL resource ID as defined in ACL resource list. - Op: One of the operations as defined with the ACL statement e.g. \"read\" for an ACL statement containing \"(read,write)\". # LOGGING_SIGNED_AUDIT_AUTHZ_SUCCESS=<type=AUTHZ>:[AuditEvent=AUTHZ]{0} authorization success # Event: CERT_PROFILE_APPROVAL Description: This event is used when an agent approves/disapproves a certificate profile set by the administrator for automatic approval. Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: id of the CA agent who approved the certificate enrollment profile - Outcome: - ProfileID: One of the profiles defined by the administrator and to be approved by an agent. - Op: \"approve\" or \"disapprove\". # LOGGING_SIGNED_AUDIT_CERT_PROFILE_APPROVAL_4=<type=CERT_PROFILE_APPROVAL>:[AuditEvent=CERT_PROFILE_APPROVAL][SubjectID={0}][Outcome={1}][ProfileID={2}][Op={3}] certificate profile approval # Event: CERT_REQUEST_PROCESSED Description: This event is used when certificate request has just been through the approval process. Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: The UID of the agent who approves, rejects, or cancels the certificate request. - Outcome: - ReqID: The request ID. - InfoName: \"certificate\" (in case of approval), \"rejectReason\" (in case of reject), or \"cancelReason\" (in case of cancel) - InfoValue: The certificate (in case of success), a reject reason in text, or a cancel reason in text. - CertSerialNum: # LOGGING_SIGNED_AUDIT_CERT_REQUEST_PROCESSED=<type=CERT_REQUEST_PROCESSED>:[AuditEvent=CERT_REQUEST_PROCESSED]{0} certificate request processed # Event: CERT_SIGNING_INFO Description: This event indicates which key is used to sign certificates. 
Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: USDSystemUSD - Outcome: Success - SKI: Subject Key Identifier of the certificate signing certificate - AuthorityID: (applicable only to lightweight CA) # LOGGING_SIGNED_AUDIT_CERT_SIGNING_INFO=<type=CERT_SIGNING_INFO>:[AuditEvent=CERT_SIGNING_INFO]{0} certificate signing info # Event: CERT_STATUS_CHANGE_REQUEST Description: This event is used when a certificate status change request (e.g. revocation) is made (before approval process). Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: id of user who performed the action - Outcome: - ReqID: The request ID. - CertSerialNum: The serial number (in hex) of the certificate to be revoked. - RequestType: \"revoke\", \"on-hold\", \"off-hold\" # LOGGING_SIGNED_AUDIT_CERT_STATUS_CHANGE_REQUEST=<type=CERT_STATUS_CHANGE_REQUEST>:[AuditEvent=CERT_STATUS_CHANGE_REQUEST]{0} certificate revocation/unrevocation request made # Event: CERT_STATUS_CHANGE_REQUEST_PROCESSED Description: This event is used when certificate status is changed (revoked, expired, on-hold, off-hold). Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: The UID of the agent that processed the request. - Outcome: - ReqID: The request ID. - RequestType: \"revoke\", \"on-hold\", \"off-hold\" - Approval: \"complete\", \"rejected\", or \"canceled\" (note that \"complete\" means \"approved\") - CertSerialNum: The serial number (in hex). - RevokeReasonNum: One of the following numbers: reason number reason -------------------------------------- 0 Unspecified 1 Key compromised 2 CA key compromised (should not be used) 3 Affiliation changed 4 Certificate superseded 5 Cessation of operation 6 Certificate is on-hold - Info: # LOGGING_SIGNED_AUDIT_CERT_STATUS_CHANGE_REQUEST_PROCESSED=<type=CERT_STATUS_CHANGE_REQUEST_PROCESSED>:[AuditEvent=CERT_STATUS_CHANGE_REQUEST_PROCESSED]{0} certificate status change request processed # Event: CLIENT_ACCESS_SESSION_ESTABLISH with [Outcome=Failure] Description: This event is used when access session failed to establish when Certificate System acts as client. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - ClientHost: Client hostname. - ServerHost: Server hostname. - ServerPort: Server port. - SubjectID: SYSTEM - Outcome: Failure - Info: # LOGGING_SIGNED_AUDIT_CLIENT_ACCESS_SESSION_ESTABLISH_FAILURE= <type=CLIENT_ACCESS_SESSION_ESTABLISH>:[AuditEvent=CLIENT_ACCESS_SESSION_ESTABLISH]{0} access session failed to establish when Certificate System acts as client # Event: CLIENT_ACCESS_SESSION_ESTABLISH with [Outcome=Success] Description: This event is used when access session was established successfully when Certificate System acts as client. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - ClientHost: Client hostname. - ServerHost: Server hostname. - ServerPort: Server port. - SubjectID: SYSTEM - Outcome: Success # LOGGING_SIGNED_AUDIT_CLIENT_ACCESS_SESSION_ESTABLISH_SUCCESS= <type=CLIENT_ACCESS_SESSION_ESTABLISH>:[AuditEvent=CLIENT_ACCESS_SESSION_ESTABLISH]{0} access session establish successfully when Certificate System acts as client # Event: CLIENT_ACCESS_SESSION_TERMINATED Description: This event is used when access session was terminated when Certificate System acts as client. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - ClientHost: Client hostname. - ServerHost: Server hostname. - ServerPort: Server port. 
- SubjectID: SYSTEM - Outcome: Success - Info: The TLS Alert received from NSS # LOGGING_SIGNED_AUDIT_CLIENT_ACCESS_SESSION_TERMINATED= <type=CLIENT_ACCESS_SESSION_TERMINATED>:[AuditEvent=CLIENT_ACCESS_SESSION_TERMINATED]{0} access session terminated when Certificate System acts as client # Event: CMC_REQUEST_RECEIVED Description: This event is used when a CMC request is received. Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: The UID of user that triggered this event. If CMC requests is signed by an agent, SubjectID should be that of the agent. In case of an unsigned request, it would bear USDUnidentifiedUSD. - Outcome: - CMCRequest: Base64 encoding of the CMC request received # LOGGING_SIGNED_AUDIT_CMC_REQUEST_RECEIVED_3=<type=CMC_REQUEST_RECEIVED>:[AuditEvent=CMC_REQUEST_RECEIVED][SubjectID={0}][Outcome={1}][CMCRequest={2}] CMC request received # Event: CMC_RESPONSE_SENT Description: This event is used when a CMC response is sent. Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: The UID of user that triggered this event. - Outcome: - CMCResponse: Base64 encoding of the CMC response sent # LOGGING_SIGNED_AUDIT_CMC_RESPONSE_SENT_3=<type=CMC_RESPONSE_SENT>:[AuditEvent=CMC_RESPONSE_SENT][SubjectID={0}][Outcome={1}][CMCResponse={2}] CMC response sent # Event: CMC_SIGNED_REQUEST_SIG_VERIFY Description: This event is used when agent signed CMC certificate requests or revocation requests are submitted and signature is verified. Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: the user who signed the CMC request (success case) - Outcome: - ReqType: The request type (enrollment, or revocation). - CertSubject: The certificate subject name of the certificate request. - SignerInfo: A unique String representation for the signer. # LOGGING_SIGNED_AUDIT_CMC_SIGNED_REQUEST_SIG_VERIFY=<type=CMC_SIGNED_REQUEST_SIG_VERIFY>:[AuditEvent=CMC_SIGNED_REQUEST_SIG_VERIFY]{0} agent signed CMC request signature verification # Event: CMC_USER_SIGNED_REQUEST_SIG_VERIFY Description: This event is used when CMC (user-signed or self-signed) certificate requests or revocation requests are submitted and signature is verified. Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: the user who signed the CMC request (success case) - Outcome: - ReqType: The request type (enrollment, or revocation). - CertSubject: The certificate subject name of the certificate request. - CMCSignerInfo: A unique String representation for the CMC request signer. - info: # LOGGING_SIGNED_AUDIT_CMC_USER_SIGNED_REQUEST_SIG_VERIFY_FAILURE=<type=CMC_USER_SIGNED_REQUEST_SIG_VERIFY>:[AuditEvent=CMC_USER_SIGNED_REQUEST_SIG_VERIFY]{0} User signed CMC request signature verification failure LOGGING_SIGNED_AUDIT_CMC_USER_SIGNED_REQUEST_SIG_VERIFY_SUCCESS=<type=CMC_USER_SIGNED_REQUEST_SIG_VERIFY>:[AuditEvent=CMC_USER_SIGNED_REQUEST_SIG_VERIFY]{0} User signed CMC request signature verification success # Event: CONFIG_ACL Description: This event is used when configuring ACL information. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: id of administrator who performed the action - Outcome: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. 
# LOGGING_SIGNED_AUDIT_CONFIG_ACL_3=<type=CONFIG_ACL>:[AuditEvent=CONFIG_ACL][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] ACL configuration parameter(s) change # Event: CONFIG_AUTH Description: This event is used when configuring authentication. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: id of administrator who performed the action - Outcome: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. --- Password MUST NOT be logged --- # LOGGING_SIGNED_AUDIT_CONFIG_AUTH_3=<type=CONFIG_AUTH>:[AuditEvent=CONFIG_AUTH][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] authentication configuration parameter(s) change # Event: CONFIG_CERT_PROFILE Description: This event is used when configuring certificate profile (general settings and certificate profile). Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: id of administrator who performed the action - Outcome: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_CERT_PROFILE_3=<type=CONFIG_CERT_PROFILE>:[AuditEvent=CONFIG_CERT_PROFILE][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] certificate profile configuration parameter(s) change # Event: CONFIG_CRL_PROFILE Description: This event is used when configuring CRL profile (extensions, frequency, CRL format). Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: id of administrator who performed the action - Outcome: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_CRL_PROFILE_3=<type=CONFIG_CRL_PROFILE>:[AuditEvent=CONFIG_CRL_PROFILE][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] CRL profile configuration parameter(s) change # Event: CONFIG_DRM Description: This event is used when configuring KRA. This includes key recovery scheme, change of any secret component. Applicable subsystems: KRA Enabled by default: Yes Fields: - SubjectID: id of administrator who performed the action - Outcome: - ParamNameValPairs A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. --- secret component (password) MUST NOT be logged --- # LOGGING_SIGNED_AUDIT_CONFIG_DRM_3=<type=CONFIG_DRM>:[AuditEvent=CONFIG_DRM][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] DRM configuration parameter(s) change # Event: CONFIG_OCSP_PROFILE Description: This event is used when configuring OCSP profile (everything under Online Certificate Status Manager). Applicable subsystems: OCSP Enabled by default: Yes Fields: - SubjectID: id of administrator who performed the action - Outcome: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_OCSP_PROFILE_3=<type=CONFIG_OCSP_PROFILE>:[AuditEvent=CONFIG_OCSP_PROFILE][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] OCSP profile configuration parameter(s) change # Event: CONFIG_ROLE Description: This event is used when configuring role information. This includes anything under users/groups, add/remove/edit a role, etc. 
Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: id of administrator who performed the action - Outcome: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_ROLE=<type=CONFIG_ROLE>:[AuditEvent=CONFIG_ROLE]{0} role configuration parameter(s) change # Event: CONFIG_SERIAL_NUMBER Description: This event is used when configuring serial number ranges (when requesting a serial number range when cloning, for example). Applicable subsystems: CA, KRA Enabled by default: Yes Fields: - SubjectID: id of administrator who performed the action - Outcome: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_SERIAL_NUMBER_1=<type=CONFIG_SERIAL_NUMBER>:[AuditEvent=CONFIG_SERIAL_NUMBER][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] serial number range update # Event: CONFIG_SIGNED_AUDIT Description: This event is used when configuring signedAudit. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: id of administrator who performed the action - Outcome: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_SIGNED_AUDIT=<type=CONFIG_SIGNED_AUDIT>:[AuditEvent=CONFIG_SIGNED_AUDIT]{0} signed audit configuration parameter(s) change # Event: CONFIG_TRUSTED_PUBLIC_KEY Description: This event is used when: 1. \"Manage Certificate\" is used to edit the trustness of certificates and deletion of certificates 2. \"Certificate Setup Wizard\" is used to import CA certificates into the certificate database (Although CrossCertificatePairs are stored within internaldb, audit them as well) Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: ID of administrator who performed this configuration - Outcome: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_TRUSTED_PUBLIC_KEY=<type=CONFIG_TRUSTED_PUBLIC_KEY>:[AuditEvent=CONFIG_TRUSTED_PUBLIC_KEY]{0} certificate database configuration # Event: CRL_SIGNING_INFO Description: This event indicates which key is used to sign CRLs. Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: USDSystemUSD - Outcome: - SKI: Subject Key Identifier of the CRL signing certificate # LOGGING_SIGNED_AUDIT_CRL_SIGNING_INFO=<type=CRL_SIGNING_INFO>:[AuditEvent=CRL_SIGNING_INFO]{0} CRL signing info # Event: DELTA_CRL_GENERATION Description: This event is used when delta CRL generation is complete. Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: USDUnidentifiedUSD - Outcome: \"Success\" when delta CRL is generated successfully, \"Failure\" otherwise. - CRLnum: The CRL number that identifies the CRL - Info: - FailureReason: # LOGGING_SIGNED_AUDIT_DELTA_CRL_GENERATION=<type=DELTA_CRL_GENERATION>:[AuditEvent=DELTA_CRL_GENERATION]{0} Delta CRL generation # Event: FULL_CRL_GENERATION Description: This event is used when full CRL generation is complete. 
Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: USDSystemUSD - Outcome: \"Success\" when full CRL is generated successfully, \"Failure\" otherwise. - CRLnum: The CRL number that identifies the CRL - Info: - FailureReason: # LOGGING_SIGNED_AUDIT_FULL_CRL_GENERATION=<type=FULL_CRL_GENERATION>:[AuditEvent=FULL_CRL_GENERATION]{0} Full CRL generation # Event: PROFILE_CERT_REQUEST Description: This event is used when a profile certificate request is made (before approval process). Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: The UID of user that triggered this event. If CMC enrollment requests signed by an agent, SubjectID should be that of the agent. - Outcome: - CertSubject: The certificate subject name of the certificate request. - ReqID: The certificate request ID. - ProfileID: One of the certificate profiles defined by the administrator. # LOGGING_SIGNED_AUDIT_PROFILE_CERT_REQUEST_5=<type=PROFILE_CERT_REQUEST>:[AuditEvent=PROFILE_CERT_REQUEST][SubjectID={0}][Outcome={1}][ReqID={2}][ProfileID={3}][CertSubject={4}] certificate request made with certificate profiles # Event: PROOF_OF_POSSESSION Description: This event is used for proof of possession during certificate enrollment processing. Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: id that represents the authenticated user - Outcome: - Info: some information on when/how it occurred # LOGGING_SIGNED_AUDIT_PROOF_OF_POSSESSION_3=<type=PROOF_OF_POSSESSION>:[AuditEvent=PROOF_OF_POSSESSION][SubjectID={0}][Outcome={1}][Info={2}] proof of possession # Event: OCSP_ADD_CA_REQUEST_PROCESSED Description: This event is used when an add CA request to the OCSP Responder is processed. Applicable subsystems: OCSP Enabled by default: Yes Fields: - SubjectID: OCSP administrator user id - Outcome: \"Success\" when CA is added successfully, \"Failure\" otherwise. - CASubjectDN: The subject DN of the leaf CA cert in the chain. # LOGGING_SIGNED_AUDIT_OCSP_ADD_CA_REQUEST_PROCESSED=<type=OCSP_ADD_CA_REQUEST_PROCESSED>:[AuditEvent=OCSP_ADD_CA_REQUEST_PROCESSED]{0} Add CA for OCSP Responder # Event: OCSP_GENERATION Description: This event is used when an OCSP response generated is complete. Applicable subsystems: CA, OCSP Enabled by default: Yes Fields: - SubjectID: USDNonRoleUserUSD - Outcome: \"Success\" when OCSP response is generated successfully, \"Failure\" otherwise. - FailureReason: # LOGGING_SIGNED_AUDIT_OCSP_GENERATION=<type=OCSP_GENERATION>:[AuditEvent=OCSP_GENERATION]{0} OCSP response generation # Event: OCSP_REMOVE_CA_REQUEST_PROCESSED with [Outcome=Failure] Description: This event is used when a remove CA request to the OCSP Responder is processed and failed. Applicable subsystems: OCSP Enabled by default: Yes Fields: - SubjectID: OCSP administrator user id - Outcome: Failure - CASubjectDN: The subject DN of the leaf CA certificate in the chain. # LOGGING_SIGNED_AUDIT_OCSP_REMOVE_CA_REQUEST_PROCESSED_FAILURE=<type=OCSP_REMOVE_CA_REQUEST_PROCESSED>:[AuditEvent=OCSP_REMOVE_CA_REQUEST_PROCESSED]{0} Remove CA for OCSP Responder has failed # Event: OCSP_REMOVE_CA_REQUEST_PROCESSED with [Outcome=Success] Description: This event is used when a remove CA request to the OCSP Responder is processed successfully. Applicable subsystems: OCSP Enabled by default: Yes Fields: - SubjectID: OCSP administrator user id - Outcome: \"Success\" when CA is removed successfully, \"Failure\" otherwise. - CASubjectDN: The subject DN of the leaf CA certificate in the chain. 
# LOGGING_SIGNED_AUDIT_OCSP_REMOVE_CA_REQUEST_PROCESSED_SUCCESS=<type=OCSP_REMOVE_CA_REQUEST_PROCESSED>:[AuditEvent=OCSP_REMOVE_CA_REQUEST_PROCESSED]{0} Remove CA for OCSP Responder is successful # Event: OCSP_SIGNING_INFO Description: This event indicates which key is used to sign OCSP responses. Applicable subsystems: CA, OCSP Enabled by default: Yes Fields: - SubjectID: USDSystemUSD - Outcome: - SKI: Subject Key Identifier of the OCSP signing certificate - AuthorityID: (applicable only to lightweight CA) # LOGGING_SIGNED_AUDIT_OCSP_SIGNING_INFO=<type=OCSP_SIGNING_INFO>:[AuditEvent=OCSP_SIGNING_INFO]{0} OCSP signing info # Event: ROLE_ASSUME Description: This event is used when a user assumes a role. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: - Role: One of the valid roles: \"Administrators\", \"Certificate Manager Agents\", or \"Auditors\". Note that customized role names can be used once configured. # LOGGING_SIGNED_AUDIT_ROLE_ASSUME=<type=ROLE_ASSUME>:[AuditEvent=ROLE_ASSUME]{0} assume privileged role # Event: SECURITY_DOMAIN_UPDATE Description: This event is used when updating contents of security domain (add/remove a subsystem). Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: CA administrator user ID - Outcome: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_SECURITY_DOMAIN_UPDATE_1=<type=SECURITY_DOMAIN_UPDATE>:[AuditEvent=SECURITY_DOMAIN_UPDATE][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] security domain update # Event: SELFTESTS_EXECUTION Description: This event is used when self tests are run. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: USDSystemUSD - Outcome: # LOGGING_SIGNED_AUDIT_SELFTESTS_EXECUTION_2=<type=SELFTESTS_EXECUTION>:[AuditEvent=SELFTESTS_EXECUTION][SubjectID={0}][Outcome={1}] self tests execution (see selftests.log for details) ######################################################################### Available Audit Events - Enabled by default: Yes ######################################################################### # Event: SERVER_SIDE_KEYGEN_ENROLL_KEYGEN_REQUEST Description: This event is used when Server-Side Keygen enrollment keygen request is made. Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: - Outcome: - RequestID: - ClientID: # LOGGING_SIGNED_AUDIT_SERVER_SIDE_KEYGEN_ENROLL_KEYGEN_REQUEST=<type=SERVER_SIDE_KEYGEN_ENROLL_KEYGEN_REQUEST>:[AuditEvent=SERVER_SIDE_KEYGEN_ENROLL_KEYGEN_REQUEST]{0} Server-Side Keygen enrollment keygen request made # Event: SERVER_SIDE_KEYGEN_ENROLL_KEYGEN_REQUEST_PROCESSED Description: This event is used when a request to do Server-Side Keygen enrollment keygen is processed. Applicable subsystems: KRA Enabled by default: Yes Fields: - SubjectID: - Outcome: - RequestID: - ClientID: - FailureReason: # LOGGING_SIGNED_AUDIT_SERVER_SIDE_KEYGEN_ENROLL_KEYGEN_REQUEST_PROCESSED=<type=SERVER_SIDE_KEYGEN_ENROLL_KEYGEN_REQUEST_PROCESSED>:[AuditEvent=SERVER_SIDE_KEYGEN_ENROLL_KEYGEN_REQUEST_PROCESSED]{0} Server-Side Keygen enrollment keygen request processed # Event: SERVER_SIDE_KEYGEN_ENROLL_KEY_RETRIEVAL_REQUEST Description: This event is used when Server-Side Keygen enrollment key retrieval request is made. 
Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: - Outcome: - RequestID: - ClientID: # LOGGING_SIGNED_AUDIT_SERVER_SIDE_KEYGEN_ENROLL_KEY_RETRIEVAL_REQUEST=<type=SERVER_SIDE_KEYGEN_ENROLL_KEY_RETRIEVAL_REQUEST>:[AuditEvent=SERVER_SIDE_KEYGEN_ENROLL_KEYGEN_REQUEST]{0} Server-Side Keygen enrollment retrieval request made # Event: SERVER_SIDE_KEYGEN_ENROLL_KEY_RETRIEVAL_REQUEST_PROCESSED Description: This event is used when a request to do Server-Side Keygen enrollment retrieval is processed. Applicable subsystems: KRA Enabled by default: Yes Fields: - SubjectID: - Outcome: - RequestID: - ClientID: - FailureReason: # LOGGING_SIGNED_AUDIT_SERVER_SIDE_KEYGEN_ENROLL_KEY_RETRIEVAL_REQUEST_PROCESSED=<type=SERVER_SIDE_KEYGEN_ENROLL_KEY_RETRIEVAL_REQUEST_PROCESSED>:[AuditEvent=SERVER_SIDE_KEYGEN_ENROLL_RETRIEVAL_REQUEST_PROCESSED]{0} Server-Side Keygen enrollment retrieval request processed # Event: ASYMKEY_GENERATION_REQUEST Description: This event is used when asymmetric key generation request is made. Applicable subsystems: KRA Enabled by default: Yes Fields: - SubjectID: - Outcome: - GenerationRequestID: - ClientKeyID: # LOGGING_SIGNED_AUDIT_ASYMKEY_GENERATION_REQUEST=<type=ASYMKEY_GENERATION_REQUEST>:[AuditEvent=ASYMKEY_GENERATION_REQUEST]{0} Asymkey generation request made # Event: ASYMKEY_GENERATION_REQUEST_PROCESSED Description: This event is used when a request to generate asymmetric keys received by the KRA is processed. Applicable subsystems: KRA Enabled by default: Yes Fields: - SubjectID: - Outcome: - GenerationRequestID: - ClientKeyID: - KeyID: - FailureReason: # LOGGING_SIGNED_AUDIT_ASYMKEY_GEN_REQUEST_PROCESSED=<type=ASYMKEY_GENERATION_REQUEST_PROCESSED>:[AuditEvent=ASYMKEY_GENERATION_REQUEST_PROCESSED]{0} Asymkey generation request processed # Event: AUTHORITY_CONFIG Description: This event is used when configuring lightweight authorities. Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: - Outcome: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_AUTHORITY_CONFIG_3=<type=AUTHORITY_CONFIG>:[AuditEvent=AUTHORITY_CONFIG][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] lightweight authority configuration change # Event: CONFIG_ENCRYPTION Description: This event is used when configuring encryption (cert settings and SSL cipher preferences). Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_ENCRYPTION_3=<type=CONFIG_ENCRYPTION>:[AuditEvent=CONFIG_ENCRYPTION][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] encryption configuration parameter(s) change # Event: CONFIG_TOKEN_AUTHENTICATOR Description: This event is used when configuring token authenticators. Applicable subsystems: TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: - OP: - Authenticator: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. --- secret component (password) MUST NOT be logged --- - Info: Error info for failed cases. 
# LOGGING_SIGNED_AUDIT_CONFIG_TOKEN_AUTHENTICATOR_6=<type=CONFIG_TOKEN_AUTHENTICATOR>:[AuditEvent=CONFIG_TOKEN_AUTHENTICATOR][SubjectID={0}][Outcome={1}][OP={2}][Authenticator={3}][ParamNameValPairs={4}][Info={5}] token authenticator configuration parameter(s) change # Event: CONFIG_TOKEN_CONNECTOR Description: This event is used when configuring token connectors. Applicable subsystems: TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: - Service: can be any of the methods offered - Connector: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. --- secret component (password) MUST NOT be logged --- - Info: Error info for failed cases. # LOGGING_SIGNED_AUDIT_CONFIG_TOKEN_CONNECTOR_6=<type=CONFIG_TOKEN_CONNECTOR>:[AuditEvent=CONFIG_TOKEN_CONNECTOR][SubjectID={0}][Outcome={1}][Service={2}][Connector={3}][ParamNameValPairs={4}][Info={5}] token connector configuration parameter(s) change # Event: CONFIG_TOKEN_MAPPING_RESOLVER Description: This event is used when configuring token mapping resolver. Applicable subsystems: TPS Enabled by default: Yes Fields: - SubjectID: TPS administrator id - Outcome: - Service: - MappingResolverID: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. --- secret component (password) MUST NOT be logged --- - Info: Error info for failed cases. # LOGGING_SIGNED_AUDIT_CONFIG_TOKEN_MAPPING_RESOLVER_6=<type=CONFIG_TOKEN_MAPPING_RESOLVER>:[AuditEvent=CONFIG_TOKEN_MAPPING_RESOLVER][SubjectID={0}][Outcome={1}][Service={2}][MappingResolverID={3}][ParamNameValPairs={4}][Info={5}] token mapping resolver configuration parameter(s) change # Event: CONFIG_TOKEN_RECORD Description: This event is used when information in token record changed. Applicable subsystems: TPS Enabled by default: Yes Fields: - SubjectID: TPS administrator id - Outcome: - OP: operation to add or delete token - TokenID: smart card unique id - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. --- secret component (password) MUST NOT be logged --- - Info: in general is used for capturing error info for failed cases # LOGGING_SIGNED_AUDIT_CONFIG_TOKEN_RECORD_6=<type=CONFIG_TOKEN_RECORD>:[AuditEvent=CONFIG_TOKEN_RECORD][SubjectID={0}][Outcome={1}][OP={2}][TokenID={3}][ParamNameValPairs={4}][Info={5}] token record configuration parameter(s) change # Event: KEY_GEN_ASYMMETRIC Description: This event is used when asymmetric keys are generated such as when CA certificate requests are generated, e.g. CA certificate change over, renewal with new key. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: - PubKey: The base-64 encoded public key material. # LOGGING_SIGNED_AUDIT_KEY_GEN_ASYMMETRIC_3=<type=KEY_GEN_ASYMMETRIC>:[AuditEvent=KEY_GEN_ASYMMETRIC][SubjectID={0}][Outcome={1}][PubKey={2}] asymmetric key generation # Event: LOG_PATH_CHANGE Description: This event is used when log file name (including any path changes) for any of audit, system, transaction, or other customized log file change is attempted. The ACL should not allow this operation, but make sure it's written after the attempt. 
Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: administrator user id - Outcome: - LogType: \"System\", \"Transaction\", or \"SignedAudit\" - toLogFile: The name (including any path changes) that the user is attempting to change to. # LOGGING_SIGNED_AUDIT_LOG_PATH_CHANGE_4=<type=LOG_PATH_CHANGE>:[AuditEvent=LOG_PATH_CHANGE][SubjectID={0}][Outcome={1}][LogType={2}][toLogFile={3}] log path change attempt # Event: RANDOM_GENERATION Description: This event is used when a random number generation is complete. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: \"Success\" when a random number is generated successfully, \"Failure\" otherwise. - Info: - Caller: PKI code that calls the random number generator. - Size: Size of random number in bytes. - FailureReason: # LOGGING_SIGNED_AUDIT_RANDOM_GENERATION=<type=RANDOM_GENERATION>:[AuditEvent=RANDOM_GENERATION]{0} Random number generation # Event: SCHEDULE_CRL_GENERATION Description: This event is used when CRL generation is scheduled. Applicable subsystems: CA Enabled by default: Yes Fields: - SubjectID: - Outcome: \"Success\" when CRL generation is scheduled successfully, \"Failure\" otherwise. - FailureReason: # LOGGING_SIGNED_AUDIT_SCHEDULE_CRL_GENERATION=<type=SCHEDULE_CRL_GENERATION>:[AuditEvent=SCHEDULE_CRL_GENERATION]{0} schedule for CRL generation # Event: SECURITY_DATA_ARCHIVAL_REQUEST Description: This event is used when security data recovery request is made. Applicable subsystems: KRA Enabled by default: Yes Fields: - SubjectID: - Outcome: - ArchivalRequestID: The requestID provided by the CA through the connector. It is used to track the request through from CA to KRA. - RequestId: The KRA archival request ID. - ClientKeyID: The user supplied client ID associated with the security data to be archived. - FailureReason: # LOGGING_SIGNED_AUDIT_SECURITY_DATA_ARCHIVAL_REQUEST=<type=SECURITY_DATA_ARCHIVAL_REQUEST>:[AuditEvent=SECURITY_DATA_ARCHIVAL_REQUEST]{0} security data archival request made # Event: SECURITY_DATA_ARCHIVAL_REQUEST_PROCESSED Description: This event is used when user security data archive request is processed. This is when KRA receives and processed the request. Applicable subsystems: KRA Enabled by default: Yes Fields: - SubjectID: - Outcome: - ArchivalRequestID: The requestID provided by the CA through the connector. It is used to track the request through from CA to KRA. - RequestId: The KRA archival request ID. - ClientKeyID: The user supplied client ID associated with the security data to be archived. - KeyID: - PubKey: - FailureReason: # LOGGING_SIGNED_AUDIT_SECURITY_DATA_ARCHIVAL_REQUEST_PROCESSED=<type=SECURITY_DATA_ARCHIVAL_REQUEST_PROCESSED>:[AuditEvent=SECURITY_DATA_ARCHIVAL_REQUEST_PROCESSED]{0} security data archival request processed # Event: SECURITY_DATA_RECOVERY_REQUEST Description: This event is used when security data recovery request is made. Applicable subsystems: KRA Enabled by default: Yes Fields: - SubjectID: - Outcome: - RecoveryID: The recovery request ID. - DataID: The ID of the security data being requested to be recovered. - PubKey: # LOGGING_SIGNED_AUDIT_SECURITY_DATA_RECOVERY_REQUEST=<type=SECURITY_DATA_RECOVERY_REQUEST>:[AuditEvent=SECURITY_DATA_RECOVERY_REQUEST]{0} security data recovery request made # Event: SECURITY_DATA_RECOVERY_REQUEST_PROCESSED Description: This event is used when security data recovery request is processed. 
Applicable subsystems: KRA Enabled by default: Yes Fields: - SubjectID: - Outcome: - RecoveryID: The recovery request ID. - KeyID: The ID of the security data being requested to be recovered. - RecoveryAgents: The UIDs of the recovery agents approving this request. - FailureReason: # LOGGING_SIGNED_AUDIT_SECURITY_DATA_RECOVERY_REQUEST_PROCESSED=<type=SECURITY_DATA_RECOVERY_REQUEST_PROCESSED>:[AuditEvent=SECURITY_DATA_RECOVERY_REQUEST_PROCESSED]{0} security data recovery request processed # Event: SECURITY_DATA_RECOVERY_REQUEST_STATE_CHANGE Description: This event is used when KRA agents login as recovery agents to change the state of key recovery requests. Applicable subsystems: KRA Enabled by default: Yes Fields: - SubjectID: - Outcome: - RecoveryID: The recovery request ID. - Operation: The operation performed (approve, reject, cancel etc.). # LOGGING_SIGNED_AUDIT_SECURITY_DATA_RECOVERY_REQUEST_STATE_CHANGE=<type=SECURITY_DATA_RECOVERY_REQUEST_STATE_CHANGE>:[AuditEvent=SECURITY_DATA_RECOVERY_REQUEST_STATE_CHANGE]{0} security data recovery request state change # Event: SERVER_SIDE_KEYGEN_REQUEST Description: This event is used when server-side key generation request is made. This is for token keys. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: - EntityID: The representation of the subject that will be on the certificate when issued. - RequestID: # LOGGING_SIGNED_AUDIT_SERVER_SIDE_KEYGEN_REQUEST=<type=SERVER_SIDE_KEYGEN_REQUEST>:[AuditEvent=SERVER_SIDE_KEYGEN_REQUEST]{0} server-side key generation request # Event: SERVER_SIDE_KEYGEN_REQUEST_PROCESSED Description: This event is used when server-side key generation request has been processed. This is for token keys. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: - EntityID: The representation of the subject that will be on the certificate when issued. - RequestID: - PubKey: The base-64 encoded public key associated with the private key to be archived. # LOGGING_SIGNED_AUDIT_SERVER_SIDE_KEYGEN_REQUEST_PROCESSED=<type=SERVER_SIDE_KEYGEN_REQUEST_PROCESSED>:[AuditEvent=SERVER_SIDE_KEYGEN_REQUEST_PROCESSED]{0} server-side key generation request processed # Event: SYMKEY_GENERATION_REQUEST Description: This event is used when symmetric key generation request is made. Applicable subsystems: KRA Enabled by default: Yes Fields: - SubjectID: - Outcome: - GenerationRequestID: - ClientKeyID: The ID of the symmetric key to be generated and archived. # LOGGING_SIGNED_AUDIT_SYMKEY_GENERATION_REQUEST=<type=SYMKEY_GENERATION_REQUEST>:[AuditEvent=SYMKEY_GENERATION_REQUEST]{0} symkey generation request made # Event: SYMKEY_GENERATION_REQUEST_PROCESSED Description: This event is used when symmetric key generation request is processed. This is when KRA receives and processes the request. Applicable subsystems: KRA Enabled by default: Yes Fields: - SubjectID: - Outcome: - GenerationRequestID: - ClientKeyID: The user supplied client ID associated with the symmetric key to be generated and archived. - KeyID: - FailureReason: # LOGGING_SIGNED_AUDIT_SYMKEY_GEN_REQUEST_PROCESSED=<type=SYMKEY_GENERATION_REQUEST_PROCESSED>:[AuditEvent=SYMKEY_GENERATION_REQUEST_PROCESSED]{0} symkey generation request processed # Event: TOKEN_APPLET_UPGRADE with [Outcome=Failure] Description: This event is used when token apple upgrade failed. 
Applicable subsystems: TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: - IP: - CUID: - MSN: - KeyVersion: - oldAppletVersion: - newAppletVersion: - Info: # LOGGING_SIGNED_AUDIT_TOKEN_APPLET_UPGRADE_FAILURE=<type=TOKEN_APPLET_UPGRADE>:[AuditEvent=TOKEN_APPLET_UPGRADE]{0} token applet upgrade failure # Event: TOKEN_APPLET_UPGRADE with [Outcome=Success] Description: This event is used when token apple upgrade succeeded. Applicable subsystems: TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: - IP: - CUID: - MSN: - KeyVersion: - oldAppletVersion: - newAppletVersion: - Info: # LOGGING_SIGNED_AUDIT_TOKEN_APPLET_UPGRADE_SUCCESS=<type=TOKEN_APPLET_UPGRADE>:[AuditEvent=TOKEN_APPLET_UPGRADE]{0} token applet upgrade success # Event: TOKEN_KEY_CHANGEOVER with [Outcome=Failure] Description: This event is used when token key changeover failed. Applicable subsystems: TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: - IP: - CUID: - MSN: - tokenType: - AppletVersion: - oldKeyVersion: - newKeyVersion: - Info: Info in case of failure. # LOGGING_SIGNED_AUDIT_TOKEN_KEY_CHANGEOVER_FAILURE=<type=TOKEN_KEY_CHANGEOVER>:[AuditEvent=TOKEN_KEY_CHANGEOVER]{0} token key changeover failure # Event: TOKEN_KEY_CHANGEOVER with [Outcome=Success] Description: This event is used when token key changeover succeeded. Applicable subsystems: TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: - IP: - CUID: - MSN: - tokenType: - AppletVersion: - oldKeyVersion: - newKeyVersion: - Info: Usually is unused for success. # LOGGING_SIGNED_AUDIT_TOKEN_KEY_CHANGEOVER_SUCCESS=<type=TOKEN_KEY_CHANGEOVER>:[AuditEvent=TOKEN_KEY_CHANGEOVER]{0} token key changeover success # Event: TOKEN_KEY_CHANGEOVER_REQUIRED Description: This event is used when token key changeover is required. 
Applicable subsystems: TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: - IP: - CUID: - MSN: - tokenType: - AppletVersion: - oldKeyVersion: - newKeyVersion: - Info: # LOGGING_SIGNED_AUDIT_TOKEN_KEY_CHANGEOVER_REQUIRED_10=<type=TOKEN_KEY_CHANGEOVER_REQUIRED>:[AuditEvent=TOKEN_KEY_CHANGEOVER_REQUIRED][IP={0}][SubjectID={1}][CUID={2}][MSN={3}][Outcome={4}][tokenType={5}][AppletVersion={6}][oldKeyVersion={7}][newKeyVersion={8}][Info={9}] token key changeover required # Event: LOGGING_SIGNED_AUDIT_TOKEN_KEY_SANITY_CHECK_SUCCESS Description: used for the CS.cfg properties: enableBoundedGPKeyVersion, cuidMustMatchKDD, and validateCardKeyInfoAgainstTokenDB Applicable subsystems: TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: - IP: - CUID: - KDD: - TokenKeyVersion: - NewKeyVersion: - TokenDBKeyVersion: - Info: # LOGGING_SIGNED_AUDIT_TOKEN_KEY_SANITY_CHECK_SUCCESS_9=<type=TOKEN_KEY_SANITY_CHECK>:[AuditEvent=TOKEN_KEY_SANITY_CHECK][IP={0}][SubjectID={1}][CUID={2}][KDD={3}][Outcome={4}][TokenKeyVersion={5}][NewKeyVersion={6}][TokenDBKeyVersion={7}][Info={8}] token key sanity check success # Event: LOGGING_SIGNED_AUDIT_TOKEN_KEY_SANITY_CHECK_FAILURE Description: used for the CS.cfg properties: enableBoundedGPKeyVersion, cuidMustMatchKDD, and validateCardKeyInfoAgainstTokenDB Applicable subsystems: TPS Enabled by default: Yes Fields: - SubjectID: - Outcome: - IP: - CUID: - KDD: - TokenKeyVersion: - NewKeyVersion: - TokenDBKeyVersion: - Info: # LOGGING_SIGNED_AUDIT_TOKEN_KEY_SANITY_CHECK_FAILURE_9=<type=TOKEN_KEY_SANITY_CHECK>:[AuditEvent=TOKEN_KEY_SANITY_CHECK][IP={0}][SubjectID={1}][CUID={2}][KDD={3}][Outcome={4}][TokenKeyVersion={5}][NewKeyVersion={6}][TokenDBKeyVersion={7}][Info={8}] token key sanity check failure +# ######################################################################### Available Audit Events - Enabled by default: No ######################################################################### # Event: AUDIT_LOG_DELETE Description: This event is used AFTER audit log gets expired. The ACL should not allow this operation, but it is provided in case ACL gets compromised. Make sure it is written AFTER the log expiration happens. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: No Fields: - SubjectID: - Outcome: - LogFile: The complete name (including the path) of the signedAudit log that is attempted to be deleted. # LOGGING_SIGNED_AUDIT_LOG_DELETE_3=<type=AUDIT_LOG_DELETE>:[AuditEvent=AUDIT_LOG_DELETE][SubjectID={0}][Outcome={1}][LogFile={2}] signedAudit log deletion # Event: AUDIT_LOG_SHUTDOWN Description: This event is used at audit function shutdown. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: No Fields: - SubjectID: - Outcome: # LOGGING_SIGNED_AUDIT_AUDIT_LOG_SHUTDOWN_2=<type=AUDIT_LOG_SHUTDOWN>:[AuditEvent=AUDIT_LOG_SHUTDOWN][SubjectID={0}][Outcome={1}] audit function shutdown # Event: CIMC_CERT_VERIFICATION Description: This event is used for verifying CS system certificates. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: No Fields: - SubjectID: - Outcome: - CertNickName: The certificate nickname. # LOGGING_SIGNED_AUDIT_CIMC_CERT_VERIFICATION_3=<type=CIMC_CERT_VERIFICATION>:[AuditEvent=CIMC_CERT_VERIFICATION][SubjectID={0}][Outcome={1}][CertNickName={2}] CS certificate verification # Event: CMC_ID_POP_LINK_WITNESS Description: This event is used for identification and POP linking verification during CMC request processing. 
Applicable subsystems: CA Enabled by default: No Fields: - SubjectID: - Outcome: - Info: # LOGGING_SIGNED_AUDIT_CMC_ID_POP_LINK_WITNESS_3=<type=CMC_ID_POP_LINK_WITNESS>:[AuditEvent=CMC_ID_POP_LINK_WITNESS][SubjectID={0}][Outcome={1}][Info={2}] Identification Proof of Possession linking witness verification # Event: CMC_PROOF_OF_IDENTIFICATION Description: This event is used for proof of identification during CMC request processing. Applicable subsystems: CA Enabled by default: No Fields: - SubjectID: In case of success, \"SubjectID\" is the actual identified identification. In case of failure, \"SubjectID\" is the attempted identification. - Outcome: - Info: # LOGGING_SIGNED_AUDIT_CMC_PROOF_OF_IDENTIFICATION_3=<type=CMC_PROOF_OF_IDENTIFICATION>:[AuditEvent=CMC_PROOF_OF_IDENTIFICATION][SubjectID={0}][Outcome={1}][Info={2}] proof of identification in CMC request # Event: COMPUTE_RANDOM_DATA_REQUEST Description: This event is used when the request for TPS to TKS to get random challenge data is received. Applicable subsystems: TKS, TPS Enabled by default: No Fields: - Outcome: - AgentID: The trusted agent ID used to make the request. # LOGGING_SIGNED_AUDIT_COMPUTE_RANDOM_DATA_REQUEST_2=<type=COMPUTE_RANDOM_DATA_REQUEST>:[AuditEvent=COMPUTE_RANDOM_DATA_REQUEST][Outcome={0}][AgentID={1}] TKS Compute random data request # Event: COMPUTE_RANDOM_DATA_REQUEST_PROCESSED with [Outcome=Failure] Description: This event is used when the request for TPS to TKS to get random challenge data is processed unsuccessfully. Applicable subsystems: TKS, TPS Enabled by default: No Fields: - Outcome: Success or Failure. - Status: 0 for no error. - Error: The error message. - AgentID: The trusted agent ID used to make the request. # LOGGING_SIGNED_AUDIT_COMPUTE_RANDOM_DATA_REQUEST_PROCESSED_FAILURE=<type=COMPUTE_RANDOM_DATA_REQUEST_PROCESSED>:[AuditEvent=COMPUTE_RANDOM_DATA_REQUEST_PROCCESED]{0} TKS Compute random data request failed # Event: COMPUTE_RANDOM_DATA_REQUEST_PROCESSED with [Outcome=Success] Description: This event is used when the request for TPS to TKS to get random challenge data is processed successfully. Applicable subsystems: TKS, TPS Fields: - Outcome: Success or Failure. - Status: 0 for no error. - AgentID: The trusted agent ID used to make the request. # LOGGING_SIGNED_AUDIT_COMPUTE_RANDOM_DATA_REQUEST_PROCESSED_SUCCESS=<type=COMPUTE_RANDOM_DATA_REQUEST_PROCESSED>:[AuditEvent=COMPUTE_RANDOM_DATA_REQUEST_PROCESSED]{0} TKS Compute random data request processed successfully # Event: COMPUTE_SESSION_KEY_REQUEST Description: This event is used when the request for TPS to TKS to get a session key for secure channel is received. Applicable subsystems: TKS, TPS Enabled by default: No Fields: - Outcome: - AgentID: The trusted agent ID used to make the request. ## AC: KDF SPEC CHANGE - Need to log both the KDD and CUID, not just the ## CUID. Renamed to \"CUID_encoded\" and \"KDD_encoded\" to reflect fact that ## encoded parameters are being logged. - CUID_encoded: The special-encoded CUID of the token establishing the secure channel. - KDD_encoded: The special-encoded KDD of the token establishing the secure channel. 
# LOGGING_SIGNED_AUDIT_COMPUTE_SESSION_KEY_REQUEST_4=<type=COMPUTE_SESSION_KEY_REQUEST>:[AuditEvent=COMPUTE_SESSION_KEY_REQUEST][CUID_encoded={0}][KDD_encoded={1}][Outcome={2}][AgentID={3}] TKS Compute session key request # Event: COMPUTE_SESSION_KEY_REQUEST_PROCESSED with [Outcome=Failure] Description: This event is used when the request for TPS to TKS to get a session key for secure channel is processed unsuccessfully. Applicable subsystems: TKS, TPS Enabled by default: No Fields: - Outcome: Failure - status: Error code or 0 for no error. - AgentID: The trusted agent ID used to make the request. - IsCryptoValidate: tells if the card cryptogram is to be validated - IsServerSideKeygen: tells if the keys are to be generated on server - SelectedToken: The cryptographic token performing key operations. - KeyNickName: The numeric keyset, e.g. #01#01. - Error: The error message. # ## AC: KDF SPEC CHANGE - Need to log both the KDD and CUID, not just the CUID. Renamed to \"CUID_decoded\" and \"KDD_decoded\" to reflect fact that decoded parameters are now logged. ## Also added TKSKeyset, KeyInfo_KeyVersion, NistSP800_108KdfOnKeyVersion, NistSP800_108KdfUseCuidAsKdd - CUID_decoded: The ASCII-HEX representation of the CUID of the token establishing the secure channel. - KDD_decoded: The ASCII-HEX representation of the KDD of the token establishing the secure channel. - TKSKeyset: The name of the TKS keyset being used for this request. - KeyInfo_KeyVersion: The key version number requested in hex. - NistSP800_108KdfOnKeyVersion: The value of the corresponding setting in hex. - NistSP800_108KdfUseCuidAsKdd: The value of the corresponding setting in hex. # LOGGING_SIGNED_AUDIT_COMPUTE_SESSION_KEY_REQUEST_PROCESSED_FAILURE=<type=COMPUTE_SESSION_KEY_REQUEST_PROCESSED>:[AuditEvent=COMPUTE_SESSION_KEY_REQUEST_PROCESSED]{0} TKS Compute session key request failed # Event: COMPUTE_SESSION_KEY_REQUEST_PROCESSED with [Outcome=Success] Description: This event is used when the request for TPS to TKS to get a session key for secure channel is processed successfully. Applicable subsystems: TKS, TPS Enabled by default: No Fields: - AgentID: The trusted agent ID used to make the request. - Outcome: Success - status: 0 for no error. - IsCryptoValidate: tells if the card cryptogram is to be validated - IsServerSideKeygen: tells if the keys are to be generated on server - SelectedToken: The cryptographic token performing key operations. - KeyNickName: The number keyset, e.g. #01#01. # ## AC: KDF SPEC CHANGE - Need to log both the KDD and CUID, not just the ## CUID. Renamed to \"CUID_decoded\" and \"KDD_decoded\" to reflect fact ## that decoded parameters are now logged. ## Also added TKSKeyset, KeyInfo_KeyVersion, ## NistSP800_108KdfOnKeyVersion, NistSP800_108KdfUseCuidAsKdd - CUID_decoded: The ASCII-HEX representation of the CUID of the token establishing the secure channel. - KDD_decoded: The ASCII-HEX representation of the KDD of the token establishing the secure channel. - TKSKeyset: The name of the TKS keyset being used for this request. - KeyInfo_KeyVersion: The key version number requested in hex. - NistSP800_108KdfOnKeyVersion: The value of the corresponding setting in hex. - NistSP800_108KdfUseCuidAsKdd: The value of the corresponding setting in hex. 
# LOGGING_SIGNED_AUDIT_COMPUTE_SESSION_KEY_REQUEST_PROCESSED_SUCCESS=<type=COMPUTE_SESSION_KEY_REQUEST_PROCESSED>:[AuditEvent=COMPUTE_SESSION_KEY_REQUEST_PROCESSED]{0} TKS Compute session key request processed successfully # Event: CONFIG_CERT_POLICY Description: This event is used when configuring certificate policy constraints and extensions. Applicable subsystems: CA Enabled by default: No Fields: - SubjectID: - Outcome: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. # LOGGING_SIGNED_AUDIT_CONFIG_CERT_POLICY_3=<type=CONFIG_CERT_POLICY>:[AuditEvent=CONFIG_CERT_POLICY][SubjectID={0}][Outcome={1}][ParamNameValPairs={2}] certificate policy constraint or extension configuration parameter(s) change # Event: CONFIG_TOKEN_GENERAL Description: This event is used when doing general TPS configuration. Applicable subsystems: TPS Enabled by default: No Fields: - SubjectID: - Outcome: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. --- secret component (password) MUST NOT be logged --- - Info: Error info for failed cases. # LOGGING_SIGNED_AUDIT_CONFIG_TOKEN_GENERAL_5=<type=CONFIG_TOKEN_GENERAL>:[AuditEvent=CONFIG_TOKEN_GENERAL][SubjectID={0}][Outcome={1}][Service={2}][ParamNameValPairs={3}][Info={4}] TPS token configuration parameter(s) change # Event: CONFIG_TOKEN_PROFILE Description: This event is used when configuring token profile. Applicable subsystems: TPS Enabled by default: No Fields: - SubjectID: - Outcome: - Service: can be any of the methods offered - ProfileID: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. --- secret component (password) MUST NOT be logged --- - Info: Error info for failed cases. # LOGGING_SIGNED_AUDIT_CONFIG_TOKEN_PROFILE_6=<type=CONFIG_TOKEN_PROFILE>:[AuditEvent=CONFIG_TOKEN_PROFILE][SubjectID={0}][Outcome={1}][Service={2}][ProfileID={3}][ParamNameValPairs={4}][Info={5}] token profile configuration parameter(s) change # Event: CRL_RETRIEVAL Description: This event is used when CRLs are retrieved by the OCSP Responder. Applicable subsystems: OCSP Enabled by default: No Fields: - SubjectID: - Outcome: \"Success\" when CRL is retrieved successfully, \"Failure\" otherwise. - CRLnum: The CRL number that identifies the CRL. # LOGGING_SIGNED_AUDIT_CRL_RETRIEVAL_3=<type=CRL_RETRIEVAL>:[AuditEvent=CRL_RETRIEVAL][SubjectID={0}][Outcome={1}][CRLnum={2}] CRL retrieval # Event: CRL_VALIDATION Description: This event is used when CRL is retrieved and validation process occurs. Applicable subsystems: OCSP Enabled by default: No Fields: - SubjectID: - Outcome: # LOGGING_SIGNED_AUDIT_CRL_VALIDATION_2=<type=CRL_VALIDATION>:[AuditEvent=CRL_VALIDATION][SubjectID={0}][Outcome={1}] CRL validation # Event: DELTA_CRL_PUBLISHING Description: This event is used when delta CRL publishing is complete. Applicable subsystems: CA Enabled by default: No Fields: - SubjectID: - Outcome: \"Success\" when delta CRL is publishing successfully, \"Failure\" otherwise. 
- CRLnum: - FailureReason: # LOGGING_SIGNED_AUDIT_DELTA_CRL_PUBLISHING=<type=DELTA_CRL_PUBLISHING>:[AuditEvent=DELTA_CRL_PUBLISHING]{0} Delta CRL publishing # Event: DIVERSIFY_KEY_REQUEST Description: This event is used when the request for TPS to TKS to do key changeover is received. Applicable subsystems: TKS, TPS Enabled by default: No Fields: - Outcome: - AgentID: The trusted agent ID used to make the request. - oldMasterKeyName: The old master key name. - newMasterKeyName: The new master key name. # ## AC: KDF SPEC CHANGE - Need to log both the KDD and CUID, not just the CUID. Renamed to \"CUID_encoded\" and \"KDD_encoded\" to reflect fact that encoded parameters are being logged. - CUID_encoded: The special-encoded CUID of the token establishing the secure channel. - KDD_encoded: The special-encoded KDD of the token establishing the secure channel. # LOGGING_SIGNED_AUDIT_DIVERSIFY_KEY_REQUEST_6=<type=DIVERSIFY_KEY_REQUEST>:[AuditEvent=DIVERSIFY_KEY_REQUEST][CUID_encoded={0}][KDD_encoded={1}][Outcome={2}][AgentID={3}][oldMasterKeyName={4}][newMasterKeyName={5}] TKS Key Change Over request # Event: DIVERSIFY_KEY_REQUEST_PROCESSED with [Outcome=Failure] Description: This event is when the request for TPS to TKS to do key changeover is processed unsuccessfully. Applicable subsystems: TKS, TPS Enabled by default: No Fields: - AgentID: The trusted agent ID used to make the request. - Outcome: Failure - status: 0 for success, non-zero for various errors. - oldMasterKeyName: The old master key name. - newMasterKeyName: The new master key name. - Error: The error message. # ## AC: KDF SPEC CHANGE - Need to log both the KDD and CUID, not just the CUID. Renamed to \"CUID_decoded\" and \"KDD_decoded\" to reflect fact that decoded parameters are now logged. ## Also added TKSKeyset, OldKeyInfo_KeyVersion, NewKeyInfo_KeyVersion, NistSP800_108KdfOnKeyVersion, NistSP800_108KdfUseCuidAsKdd - CUID_decoded: The ASCII-HEX representation of the CUID of the token establishing the secure channel. - KDD_decoded: The ASCII-HEX representation of the KDD of the token establishing the secure channel. - TKSKeyset: The name of the TKS keyset being used for this request. - OldKeyInfo_KeyVersion: The old key version number in hex. - NewKeyInfo_KeyVersion: The new key version number in hex. - NistSP800_108KdfOnKeyVersion: The value of the corresponding setting in hex. - NistSP800_108KdfUseCuidAsKdd: The value of the corresponding setting in hex. # LOGGING_SIGNED_AUDIT_DIVERSIFY_KEY_REQUEST_PROCESSED_FAILURE=<type=DIVERSIFY_KEY_REQUEST_PROCESSED>:[AuditEvent=DIVERSIFY_KEY_REQUEST_PROCESSED]{0} TKS Key Change Over request failed # Event: DIVERSIFY_KEY_REQUEST_PROCESSED with [Outcome=Success] Description: This event is used when the request for TPS to TKS to do key changeover is processed successfully. Applicable subsystems: TKS, TPS Enabled by default: No Fields: - AgentID: The trusted agent ID used to make the request. - Outcome: Success - status: 0 for success, non-zero for various errors. - oldMasterKeyName: The old master key name. - newMasterKeyName: The new master key name. # ## AC: KDF SPEC CHANGE - Need to log both the KDD and CUID, not just the CUID. Renamed to \"CUID_decoded\" and \"KDD_decoded\" to reflect fact that decoded parameters are now logged. ## Also added TKSKeyset, OldKeyInfo_KeyVersion, NewKeyInfo_KeyVersion, NistSP800_108KdfOnKeyVersion, NistSP800_108KdfUseCuidAsKdd - CUID_decoded: The ASCII-HEX representation of the CUID of the token establishing the secure channel. 
- KDD_decoded: The ASCII-HEX representation of the KDD of the token establishing the secure channel. - TKSKeyset: The name of the TKS keyset being used for this request. - OldKeyInfo_KeyVersion: The old key version number in hex. - NewKeyInfo_KeyVersion: The new key version number in hex. - NistSP800_108KdfOnKeyVersion: The value of the corresponding setting in hex. - NistSP800_108KdfUseCuidAsKdd: The value of the corresponding setting in hex. # LOGGING_SIGNED_AUDIT_DIVERSIFY_KEY_REQUEST_PROCESSED_SUCCESS=<type=DIVERSIFY_KEY_REQUEST_PROCESSED>:[AuditEvent=DIVERSIFY_KEY_REQUEST_PROCESSED]{0} TKS Key Change Over request processed successfully # Event: ENCRYPT_DATA_REQUEST Description: This event is used when the request from TPS to TKS to encrypt data (or generate random data and encrypt) is received. Applicable subsystems: TKS, TPS Enabled by default: No Fields: - SubjectID: The CUID of the token requesting encrypt data. - AgentID: The trusted agent ID used to make the request. - status: 0 for success, non-zero for various errors. - isRandom: tells if the data is randomly generated on TKS # ## AC: KDF SPEC CHANGE - Need to log both the KDD and CUID, not just the CUID. Renamed to \"CUID_encoded\" and \"KDD_encoded\" to reflect fact that encoded parameters are being logged. - CUID_encoded: The special-encoded CUID of the token establishing the secure channel. - KDD_encoded: The special-encoded KDD of the token establishing the secure channel. # LOGGING_SIGNED_AUDIT_ENCRYPT_DATA_REQUEST_4=<type=ENCRYPT_DATA_REQUEST>:[AuditEvent=ENCRYPT_DATA_REQUEST][SubjectID={0}][status={1}][AgentID={2}][isRandom={3}] TKS encrypt data request LOGGING_SIGNED_AUDIT_ENCRYPT_DATA_REQUEST_5=<type=ENCRYPT_DATA_REQUEST>:[AuditEvent=ENCRYPT_DATA_REQUEST][CUID_encoded={0}][KDD_encoded={1}][status={2}][AgentID={3}][isRandom={4}] TKS encrypt data request # Event: ENCRYPT_DATA_REQUEST_PROCESSED with [Outcome=Failure] Description: This event is used when the request from TPS to TKS to encrypt data (or generate random data and encrypt) is processed unsuccessfully. Applicable subsystems: TKS, TPS Enabled by default: No Fields: - AgentID: The trusted agent ID used to make the request. - Outcome: Failure - status: 0 for success, non-zero for various errors. - isRandom: tells if the data is randomly generated on TKS - SelectedToken: The cryptographic token performing key operations. - KeyNickName: The numeric keyset, e.g. #01#01. - Error: The error message. # ## AC: KDF SPEC CHANGE - Need to log both the KDD and CUID, not just the CUID. Renamed to \"CUID_decoded\" and \"KDD_decoded\" to reflect fact that decoded parameters are now logged. ## Also added TKSKeyset, KeyInfo_KeyVersion, NistSP800_108KdfOnKeyVersion, NistSP800_108KdfUseCuidAsKdd - CUID_decoded: The ASCII-HEX representation of the CUID of the token establishing the secure channel. - KDD_decoded: The ASCII-HEX representation of the KDD of the token establishing the secure channel. - TKSKeyset: The name of the TKS keyset being used for this request. - KeyInfo_KeyVersion: The key version number requested in hex. - NistSP800_108KdfOnKeyVersion: The value of the corresponding setting in hex. - NistSP800_108KdfUseCuidAsKdd: The value of the corresponding setting in hex. 
# LOGGING_SIGNED_AUDIT_ENCRYPT_DATA_REQUEST_PROCESSED_FAILURE=<type=ENCRYPT_DATA_REQUEST_PROCESSED>:[AuditEvent=ENCRYPT_DATA_REQUEST_PROCESSED]{0} TKS encrypt data request failed # Event: ENCRYPT_DATA_REQUEST_PROCESSED with [Outcome=Success] Description: This event is used when the request from TPS to TKS to encrypt data (or generate random data and encrypt) is processed successfully. Applicable subsystems: TKS, TPS Enabled by default: No Fields: - AgentID: The trusted agent ID used to make the request. - Outcome: Success - status: 0 for success, non-zero for various errors. - isRandom: tells if the data is randomly generated on TKS - SelectedToken: The cryptographic token performing key operations. - KeyNickName: The numeric keyset, e.g. #01#01. # ## AC: KDF SPEC CHANGE - Need to log both the KDD and CUID, not just the CUID. Renamed to \"CUID_decoded\" and \"KDD_decoded\" to reflect fact that decoded parameters are now logged. ## Also added TKSKeyset, KeyInfo_KeyVersion, NistSP800_108KdfOnKeyVersion, NistSP800_108KdfUseCuidAsKdd - CUID_decoded: The ASCII-HEX representation of the CUID of the token establishing the secure channel. - KDD_decoded: The ASCII-HEX representation of the KDD of the token establishing the secure channel. - TKSKeyset: The name of the TKS keyset being used for this request. - KeyInfo_KeyVersion: The key version number requested in hex. - NistSP800_108KdfOnKeyVersion: The value of the corresponding setting in hex. - NistSP800_108KdfUseCuidAsKdd: The value of the corresponding setting in hex. # LOGGING_SIGNED_AUDIT_ENCRYPT_DATA_REQUEST_PROCESSED_SUCCESS=<type=ENCRYPT_DATA_REQUEST_PROCESSED>:[AuditEvent=ENCRYPT_DATA_REQUEST_PROCESSED]{0} TKS encrypt data request processed successfully # Event: FULL_CRL_PUBLISHING Description: This event is used when full CRL publishing is complete. Applicable subsystems: CA Enabled by default: No Fields: - SubjectID: - Outcome: \"Success\" when full CRL is publishing successfully, \"Failure\" otherwise. - CRLnum: - FailureReason: # LOGGING_SIGNED_AUDIT_FULL_CRL_PUBLISHING=<type=FULL_CRL_PUBLISHING>:[AuditEvent=FULL_CRL_PUBLISHING]{0} Full CRL publishing # Event: INTER_BOUNDARY Description: This event is used when inter-CS boundary data transfer is successful. This is used when data does not need to be captured. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: No Fields: - SubjectID: - Outcome: - ProtectionMethod: \"SSL\" or \"unknown\". - ReqType: The request type. - ReqID: The request ID. # LOGGING_SIGNED_AUDIT_INTER_BOUNDARY_SUCCESS_5=<type=INTER_BOUNDARY>:[AuditEvent=INTER_BOUNDARY][SubjectID={0}][Outcome={1}][ProtectionMethod={2}][ReqType={3}][ReqID={4}] inter-CS boundary communication (data exchange) success # Event: KEY_RECOVERY_AGENT_LOGIN Description: This event is used when KRA agents login as recovery agents to approve key recovery requests. Applicable subsystems: KRA Enabled by default: No Fields: - SubjectID: - Outcome: - RecoveryID: The recovery request ID. - RecoveryAgent: The recovery agent the KRA agent is logging in with. # LOGGING_SIGNED_AUDIT_KEY_RECOVERY_AGENT_LOGIN_4=<type=KEY_RECOVERY_AGENT_LOGIN>:[AuditEvent=KEY_RECOVERY_AGENT_LOGIN][SubjectID={0}][Outcome={1}][RecoveryID={2}][RecoveryAgent={3}] key recovery agent login # Event: KEY_RECOVERY_REQUEST Description: This event is used when key recovery request is made. Applicable subsystems: CA, OCSP, TKS, TPS, TPS Enabled by default: No Fields: - SubjectID: - Outcome: - RecoveryID: The recovery request ID. 
- PubKey: The base-64 encoded public key associated with the private key to be recovered. # LOGGING_SIGNED_AUDIT_KEY_RECOVERY_REQUEST_4=<type=KEY_RECOVERY_REQUEST>:[AuditEvent=KEY_RECOVERY_REQUEST][SubjectID={0}][Outcome={1}][RecoveryID={2}][PubKey={3}] key recovery request made # Event: KEY_STATUS_CHANGE Description: This event is used when modify key status is executed. Applicable subsystems: KRA Enabled by default: No Fields: - SubjectID: - Outcome: - KeyID: An existing key ID in the database. - OldStatus: The old status to change from. - NewStatus: The new status to change to. - Info: # LOGGING_SIGNED_AUDIT_KEY_STATUS_CHANGE=<type=KEY_STATUS_CHANGE>:[AuditEvent=KEY_STATUS_CHANGE]{0} Key Status Change # Event: LOG_EXPIRATION_CHANGE (disabled) Description: This event is used when log expiration time change is attempted. The ACL should not allow this operation, but make sure it's written after the attempt. Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: No Fields: - SubjectID: - Outcome: - LogType: \"System\", \"Transaction\", or \"SignedAudit\". - ExpirationTime: The amount of time (in seconds) that is attempted to be changed to. # #LOGGING_SIGNED_AUDIT_LOG_EXPIRATION_CHANGE_4=<type=LOG_EXPIRATION_CHANGE>:[AuditEvent=LOG_EXPIRATION_CHANGE][SubjectID={0}][Outcome={1}][LogType={2}][ExpirationTime={3}] log expiration time change attempt # Event: NON_PROFILE_CERT_REQUEST Description: This event is used when a non-profile certificate request is made (before approval process). Applicable subsystems: CA, KRA, OCSP, TKS, TPS Enabled by default: No Fields: - SubjectID: The UID of user that triggered this event. If CMC enrollment requests signed by an agent, SubjectID should be that of the agent. - Outcome: - CertSubject: The certificate subject name of the certificate request. - ReqID: The certificate request ID. - ServiceID: The identity of the servlet that submitted the original request. # LOGGING_SIGNED_AUDIT_NON_PROFILE_CERT_REQUEST_5=<type=NON_PROFILE_CERT_REQUEST>:[AuditEvent=NON_PROFILE_CERT_REQUEST][SubjectID={0}][Outcome={1}][ReqID={2}][ServiceID={3}][CertSubject={4}] certificate request made without certificate profiles # Event: OCSP_ADD_CA_REQUEST Description: This event is used when a CA is attempted to be added to the OCSP Responder. Applicable subsystems: OCSP Enabled by default: No Fields: - SubjectID: - Outcome: - CA: The base-64 encoded PKCS7 certificate (or chain). # LOGGING_SIGNED_AUDIT_OCSP_ADD_CA_REQUEST=<type=OCSP_ADD_CA_REQUEST>:[AuditEvent=OCSP_ADD_CA_REQUEST]{0} request to add a CA for OCSP Responder # Event: OCSP_REMOVE_CA_REQUEST Description: This event is used when a CA is attempted to be removed from the OCSP Responder. Applicable subsystems: OCSP Enabled by default: No Fields: - SubjectID: - Outcome: - CASubjectDN: The DN ID of the CA. # LOGGING_SIGNED_AUDIT_OCSP_REMOVE_CA_REQUEST=<type=OCSP_REMOVE_CA_REQUEST>:[AuditEvent=OCSP_REMOVE_CA_REQUEST]{0} request to remove a CA from OCSP Responder # Event: SECURITY_DATA_EXPORT_KEY Description: This event is used when user attempts to retrieve key after the recovery request has been approved. Applicable subsystems: KRA Enabled by default: No Fields: - SubjectID: - Outcome: - RecoveryID: The recovery request ID. - KeyID: The key being retrieved. - Info: The failure reason if the export fails. - PubKey: The public key for the private key being retrieved. 
# LOGGING_SIGNED_AUDIT_SECURITY_DATA_EXPORT_KEY=<type=SECURITY_DATA_EXPORT_KEY>:[AuditEvent=SECURITY_DATA_EXPORT_KEY]{0} security data retrieval request # Event: SECURITY_DATA_INFO Description: This event is used when user attempts to get metadata information about a key. Applicable subsystems: KRA Enabled by default: No Fields: - SubjectID: - Outcome: - KeyID: The key being retrieved. - ClientKeyId: - Info: The failure reason if the export fails. - PubKey: The public key for the private key being retrieved. # LOGGING_SIGNED_AUDIT_SECURITY_DATA_INFO=<type=SECURITY_DATA_INFO>:[AuditEvent=SECURITY_DATA_INFO]{0} security data info request # Event: TOKEN_AUTH with [Outcome=Failure] Description: This event is used when authentication failed. Applicable subsystems: TPS Enabled by default: No Fields: - SubjectID: - Outcome: Failure (obviously, if authentication failed, you won't have a valid SubjectID, so in this case, AttemptedID is recorded) - IP: - CUID: - MSN: - OP: - tokenType: - AppletVersion: - AuthMgr: The authentication manager instance name that did this authentication. # LOGGING_SIGNED_AUDIT_TOKEN_AUTH_FAILURE=<type=TOKEN_AUTH>:[AuditEvent=TOKEN_AUTH]{0} token authentication failure # Event: TOKEN_AUTH with [Outcome=Success] Description: This event is used when authentication succeeded. Applicable subsystems: TPS Enabled by default: No Fields: - SubjectID: - Outcome: Success - IP: - CUID: - MSN: - OP: - tokenType: - AppletVersion: - AuthMgr: The authentication manager instance name that did this authentication. # LOGGING_SIGNED_AUDIT_TOKEN_AUTH_SUCCESS=<type=TOKEN_AUTH>:[AuditEvent=TOKEN_AUTH]{0} token authentication success # Event: TOKEN_CERT_ENROLLMENT Description: This event is used for TPS when token certificate enrollment request is made. Applicable subsystems: TPS Enabled by default: No Fields: - SubjectID: - Outcome: - IP: - CUID: - tokenType: - KeyVersion: - Serial: - CA_ID: - Info: Info in case of failure. # LOGGING_SIGNED_AUDIT_TOKEN_CERT_ENROLLMENT_9=<type=TOKEN_CERT_ENROLLMENT>:[AuditEvent=TOKEN_CERT_ENROLLMENT][IP={0}][SubjectID={1}][CUID={2}][Outcome={3}][tokenType={4}][KeyVersion={5}][Serial={6}][CA_ID={7}][Info={8}] token certificate enrollment request made # Event: TOKEN_CERT_RENEWAL Description: This event is used for TPS when token certificate renewal request is made. Applicable subsystems: TPS Enabled by default: No Fields: - SubjectID: - Outcome: - IP: - CUID: - tokenType: - KeyVersion: - Serial: - CA_ID: - Info: Info in case of failure. # LOGGING_SIGNED_AUDIT_TOKEN_CERT_RENEWAL_9=<type=TOKEN_CERT_RENEWAL>:[AuditEvent=TOKEN_CERT_RENEWAL][IP={0}][SubjectID={1}][CUID={2}][Outcome={3}][tokenType={4}][KeyVersion={5}][Serial={6}][CA_ID={7}][Info={8}] token certificate renewal request made # Event: TOKEN_CERT_RETRIEVAL Description: This event is used for TPS when token certificate retrieval request is made; usually used during recovery, along with TOKEN_KEY_RECOVERY. Applicable subsystems: TPS Enabled by default: No Fields: - SubjectID: - Outcome: - IP: - CUID: - tokenType: - KeyVersion: - Serial: - CA_ID: - Info: # LOGGING_SIGNED_AUDIT_TOKEN_CERT_RETRIEVAL_9=<type=TOKEN_CERT_RETRIEVAL>:[AuditEvent=TOKEN_CERT_RETRIEVAL][IP={0}][SubjectID={1}][CUID={2}][Outcome={3}][tokenType={4}][KeyVersion={5}][Serial={6}][CA_ID={7}][Info={8}] token certificate retrieval request made # Event: TOKEN_CERT_STATUS_CHANGE_REQUEST Description: This event is used when a token certificate status change request (e.g. revocation) is made. 
Applicable subsystems: TPS Enabled by default: No Fields: - SubjectID: - Outcome: - IP: - CUID: The last token that the certificate was associated with. - tokenType: - CertSerialNum: The serial number (in decimal) of the certificate to be revoked. - RequestType: \"revoke\", \"on-hold\", \"off-hold\". - RevokeReasonNum: - CA_ID: - Info: # LOGGING_SIGNED_AUDIT_TOKEN_CERT_STATUS_CHANGE_REQUEST_10=<type=TOKEN_CERT_STATUS_CHANGE_REQUEST>:[AuditEvent=TOKEN_CERT_STATUS_CHANGE_REQUEST][IP={0}][SubjectID={1}][CUID={2}][Outcome={3}][tokenType={4}][CertSerialNum={5}][RequestType={6}][RevokeReasonNum={7}][CA_ID={8}][Info={9}] token certificate revocation/unrevocation request made # Event: TOKEN_FORMAT with [Outcome=Failure] Description: This event is used when token format operation failed. Applicable subsystems: TPS Enabled by default: No Fields: - SubjectID: - Outcome: - IP: - CUID: - MSN: - tokenType: - AppletVersion: - Info: # LOGGING_SIGNED_AUDIT_TOKEN_FORMAT_FAILURE=<type=TOKEN_FORMAT>:[AuditEvent=TOKEN_FORMAT]{0} token op format failure # Event: TOKEN_FORMAT with [Outcome=Success] Description: This event is used when token format operation succeeded. Applicable subsystems: TPS Enabled by default: No Fields: - SubjectID: - Outcome: - IP: - CUID: - MSN: - tokenType: - AppletVersion: - KeyVersion: # LOGGING_SIGNED_AUDIT_TOKEN_FORMAT_SUCCESS=<type=TOKEN_FORMAT>:[AuditEvent=TOKEN_FORMAT]{0} token op format success # Event: TOKEN_KEY_RECOVERY Description: This event is used for TPS when token certificate key recovery request is made. Applicable subsystems: TPS Enabled by default: No Fields: - SubjectID: - Outcome: - IP: - CUID: - tokenType: - KeyVersion: - Serial: - CA_ID: - KRA_ID: - Info: # LOGGING_SIGNED_AUDIT_TOKEN_KEY_RECOVERY_10=<type=TOKEN_KEY_RECOVERY>:[AuditEvent=TOKEN_KEY_RECOVERY][IP={0}][SubjectID={1}][CUID={2}][Outcome={3}][tokenType={4}][KeyVersion={5}][Serial={6}][CA_ID={7}][KRA_ID={8}][Info={9}] token certificate/key recovery request made # Event: TOKEN_OP_REQUEST Description: This event is used when token processor operation request is made. Applicable subsystems: TPS Enabled by default: No Fields: - IP: - CUID: - MSN: - Outcome: - OP: \"format\", \"enroll\", or \"pinReset\" - AppletVersion: # LOGGING_SIGNED_AUDIT_TOKEN_OP_REQUEST_6=<type=TOKEN_OP_REQUEST>:[AuditEvent=TOKEN_OP_REQUEST][IP={0}][CUID={1}][MSN={2}][Outcome={3}][OP={4}][AppletVersion={5}] token processor op request made # Event: TOKEN_PIN_RESET with [Outcome=Failure] Description: This event is used when token pin reset request failed. Applicable subsystems: TPS Enabled by default: No Fields: - IP: - SubjectID: - CUID: - Outcome: - tokenType: - AppletVersion: - Info: # LOGGING_SIGNED_AUDIT_TOKEN_PIN_RESET_FAILURE=<type=TOKEN_PIN_RESET>:[AuditEvent=TOKEN_PIN_RESET]{0} token op pin reset failure # Event: TOKEN_PIN_RESET with [Outcome=Success] Description: This event is used when token pin reset request succeeded. Applicable subsystems: TPS Enabled by default: No Fields: - IP: - SubjectID: - CUID: - Outcome: - tokenType: - AppletVersion: - KeyVersion: # LOGGING_SIGNED_AUDIT_TOKEN_PIN_RESET_SUCCESS=<type=TOKEN_PIN_RESET>:[AuditEvent=TOKEN_PIN_RESET]{0} token op pin reset success # Event: TOKEN_STATE_CHANGE Description: This event is used when token state changed. 
Applicable subsystems: TPS Enabled by default: No Fields: - SubjectID: - Outcome: - oldState: - oldReason: - newState: - newReason: - ParamNameValPairs: A name-value pair (where name and value are separated by the delimiter ;;) separated by + (if more than one name-value pair) of config params changed. --- secret component (password) MUST NOT be logged --- - Info: Error info for failed cases. # LOGGING_SIGNED_AUDIT_TOKEN_STATE_CHANGE_8=<type=TOKEN_STATE_CHANGE>:[AuditEvent=TOKEN_STATE_CHANGE][SubjectID={0}][Outcome={1}][oldState={2}][oldReason={3}][newState={4}][newReason={5}][ParamNameValPairs={6}][Info={7}] token state changed" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/audit_events
8.4.4. Deactivating KSM
8.4.4. Deactivating KSM Kernel same-page merging (KSM) has a performance overhead which may be too large for certain environments or host systems. KSM may also introduce side channels that could potentially be used to leak information across guests. If this is a concern, KSM can be disabled on a per-guest basis. KSM can be deactivated by stopping the ksmtuned and the ksm services; however, stopping the services does not persist across restarts. To deactivate KSM for the current boot, run the following in a terminal as root: To deactivate KSM persistently, disable the ksm and ksmtuned services with the systemctl commands: When KSM is disabled, any memory pages that were shared prior to deactivating KSM are still shared. To delete all of the KSM pages (PageKSM) in the system, use the following command: After this is performed, the khugepaged daemon can rebuild transparent hugepages on the KVM guest physical memory. Using # echo 0 >/sys/kernel/mm/ksm/run stops KSM, but does not unshare all the previously created KSM pages (this is the same as the # systemctl stop ksmtuned command).
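Taken together, the steps above amount to the following sequence (a consolidated sketch of the commands referenced in this section; run as root):
# systemctl stop ksmtuned
# systemctl stop ksm
# systemctl disable ksm
# systemctl disable ksmtuned
# echo 2 >/sys/kernel/mm/ksm/run
The first two commands stop KSM for the current boot, the systemctl disable commands make the change persistent across restarts, and the final command unshares any pages that KSM had already merged.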
[ "systemctl stop ksmtuned Stopping ksmtuned: [ OK ] systemctl stop ksm Stopping ksm: [ OK ]", "systemctl disable ksm systemctl disable ksmtuned", "echo 2 >/sys/kernel/mm/ksm/run" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-KSM-Deactivating_KSM
3.7. Setting up Cross Realm Authentication
3.7. Setting up Cross Realm Authentication Allowing clients (typically users) of one realm to use Kerberos to authenticate to services (typically server processes running on a particular server system) which belong to another realm requires cross-realm authentication . 3.7.1. Setting up Basic Trust Relationships In the simplest case, for a client of realm A.EXAMPLE.COM to access a service in the B.EXAMPLE.COM realm, both realms must share a key for a principal named krbtgt/[email protected] , and both keys must have the same key version number associated with them. To accomplish this, select a very strong password or passphrase, and create an entry for the principal in both realms using kadmin . # kadmin -r A.EXAMPLE.COM kadmin: add_principal krbtgt/[email protected] Enter password for principal "krbtgt/[email protected]": Re-enter password for principal "krbtgt/[email protected]": Principal "krbtgt/[email protected]" created. quit # kadmin -r B.EXAMPLE.COM kadmin: add_principal krbtgt/[email protected] Enter password for principal "krbtgt/[email protected]": Re-enter password for principal "krbtgt/[email protected]": Principal "krbtgt/[email protected]" created. quit Use the get_principal command to verify that both entries have matching key version numbers ( kvno values) and encryption types. Important A common, but incorrect, situation is for administrators to try to use the add_principal command's -randkey option to assign a random key instead of a password, dump the new entry from the database of the first realm, and import it into the second. This will not work unless the master keys for the realm databases are identical, as the keys contained in a database dump are themselves encrypted using the master key. Clients in the A.EXAMPLE.COM realm are now able to authenticate to services in the B.EXAMPLE.COM realm. Put another way, the B.EXAMPLE.COM realm now trusts the A.EXAMPLE.COM realm. This brings us to an important point: cross-realm trust is unidirectional by default. The KDC for the B.EXAMPLE.COM realm can trust clients from the A.EXAMPLE.COM realm to authenticate to services in the B.EXAMPLE.COM realm. However, this trust is not automatically reciprocated: clients in the B.EXAMPLE.COM realm are not automatically trusted to authenticate to services in the A.EXAMPLE.COM realm. To establish trust in the other direction, both realms would need to share keys for the krbtgt/[email protected] service - an entry that is the reverse of the one in the example above. 3.7.2. Setting up Complex Trust Relationships If direct trust relationships were the only method for providing trust between realms, networks which contain multiple realms would be very difficult to set up. Luckily, cross-realm trust is transitive. If clients from A.EXAMPLE.COM can authenticate to services in B.EXAMPLE.COM , and clients from B.EXAMPLE.COM can authenticate to services in C.EXAMPLE.COM , then clients in A.EXAMPLE.COM can also authenticate to services in C.EXAMPLE.COM , even if C.EXAMPLE.COM does not directly trust A.EXAMPLE.COM . This means that, on a network with multiple realms which all need to trust each other, making good choices about which trust relationships to set up can greatly reduce the amount of effort required. The client's system must be configured so that it can properly deduce the realm to which a particular service belongs, and it must be able to determine how to obtain credentials for services in that realm.
Taking first things first, the principal name for a service provided from a specific server system in a given realm typically looks like this: service is typically either the name of the protocol in use (other common values include LDAP, IMAP, CVS, and HTTP) or host . server.example.com is the fully-qualified domain name of the system which runs the service. EXAMPLE.COM is the name of the realm. To deduce the realm to which the service belongs, clients will most often consult DNS or the domain_realm section of /etc/krb5.conf to map either a hostname ( server.example.com ) or a DNS domain name ( .example.com ) to the name of a realm ( EXAMPLE.COM ). After determining the realm to which a service belongs, a client then has to determine the set of realms which it needs to contact, and in which order it must contact them, to obtain credentials for use in authenticating to the service. This can be done in one of two ways. The simplest is to use a shared hierarchy to name realms. The second uses explicit configuration in the krb5.conf file. 3.7.2.1. Configuring a Shared Hierarchy of Names The default method, which requires no explicit configuration, is to give the realms names within a shared hierarchy. For example, assume realms named A.EXAMPLE.COM , B.EXAMPLE.COM , and EXAMPLE.COM . When a client in the A.EXAMPLE.COM realm attempts to authenticate to a service in B.EXAMPLE.COM , it will, by default, first attempt to get credentials for the EXAMPLE.COM realm, and then to use those credentials to obtain credentials for use in the B.EXAMPLE.COM realm. The client in this scenario treats the realm name as one might treat a DNS name. It repeatedly strips off the components of its own realm's name to generate the names of realms which are "above" it in the hierarchy until it reaches a point which is also "above" the service's realm. At that point it begins prepending components of the service's realm name until it reaches the service's realm. Each realm which is involved in the process is another "hop". For example, using credentials in A.EXAMPLE.COM , authenticating to a service in B.EXAMPLE.COM has three hops: A.EXAMPLE.COM → EXAMPLE.COM → B.EXAMPLE.COM . A.EXAMPLE.COM and EXAMPLE.COM share a key for krbtgt/[email protected] EXAMPLE.COM and B.EXAMPLE.COM share a key for krbtgt/[email protected] Another example, using credentials in SITE1.SALES.EXAMPLE.COM , authenticating to a service in EVERYWHERE.EXAMPLE.COM involves a longer series of hops: SITE1.SALES.EXAMPLE.COM and SALES.EXAMPLE.COM share a key for krbtgt/[email protected] SALES.EXAMPLE.COM and EXAMPLE.COM share a key for krbtgt/[email protected] EXAMPLE.COM and EVERYWHERE.EXAMPLE.COM share a key for krbtgt/[email protected] There can even be hops between realms whose names share no common suffix, such as DEVEL.EXAMPLE.COM and PROD.EXAMPLE.ORG . DEVEL.EXAMPLE.COM and EXAMPLE.COM share a key for krbtgt/[email protected] EXAMPLE.COM and COM share a key for krbtgt/[email protected] COM and ORG share a key for krbtgt/ORG@COM ORG and EXAMPLE.ORG share a key for krbtgt/EXAMPLE.ORG@ORG EXAMPLE.ORG and PROD.EXAMPLE.ORG share a key for krbtgt/[email protected] 3.7.2.2. Configuring Paths in krb5.conf The more complicated, but also more flexible, method involves configuring the capaths section of /etc/krb5.conf , so that clients which have credentials for one realm are able to look up which realm is next in the chain that will eventually lead to them being able to authenticate to servers.
The format of the capaths section is relatively straightforward: each entry in the section is named after a realm in which a client might exist. Inside of that subsection, the set of intermediate realms from which the client must obtain credentials is listed as values of the key which corresponds to the realm in which a service might reside. If there are no intermediate realms, the value "." is used. For example: [capaths] A.EXAMPLE.COM = { B.EXAMPLE.COM = . C.EXAMPLE.COM = B.EXAMPLE.COM D.EXAMPLE.COM = B.EXAMPLE.COM D.EXAMPLE.COM = C.EXAMPLE.COM } Clients in the A.EXAMPLE.COM realm can obtain cross-realm credentials for B.EXAMPLE.COM directly from the A.EXAMPLE.COM KDC. If those clients wish to contact a service in the C.EXAMPLE.COM realm, they will first need to obtain necessary credentials from the B.EXAMPLE.COM realm (this requires that krbtgt/[email protected] exist), and then use those credentials to obtain credentials for use in the C.EXAMPLE.COM realm (using krbtgt/[email protected] ). If those clients wish to contact a service in the D.EXAMPLE.COM realm, they will first need to obtain necessary credentials from the B.EXAMPLE.COM realm, and then credentials from the C.EXAMPLE.COM realm, before finally obtaining credentials for use with the D.EXAMPLE.COM realm. Note Without a capath entry indicating otherwise, Kerberos assumes that cross-realm trust relationships form a hierarchy. Clients in the A.EXAMPLE.COM realm can obtain cross-realm credentials from B.EXAMPLE.COM realm directly. Without the "." indicating this, the client would instead attempt to use a hierarchical path, in this case:
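As a complement to the capaths configuration shown above, the mapping of host names to realms described at the beginning of Section 3.7.2 is normally configured in the domain_realm section of /etc/krb5.conf . A minimal sketch (the host, domain, and realm names are illustrative) might look like this:
[domain_realm]
 .example.com = EXAMPLE.COM
 server.example.com = EXAMPLE.COM
Each entry maps either a whole DNS domain (the form with a leading dot) or an individual host name to the Kerberos realm that its services belong to.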
[ "kadmin -r A.EXAMPLE.COM kadmin: add_principal krbtgt/[email protected] Enter password for principal \"krbtgt/[email protected]\": Re-enter password for principal \"krbtgt/[email protected]\": Principal \"krbtgt/[email protected]\" created. quit kadmin -r B.EXAMPLE.COM kadmin: add_principal krbtgt/[email protected] Enter password for principal \"krbtgt/[email protected]\": Re-enter password for principal \"krbtgt/[email protected]\": Principal \"krbtgt/[email protected]\" created. quit", "service/[email protected]", "SITE1.SALES.EXAMPLE.COM SALES.EXAMPLE.COM EXAMPLE.COM EVERYWHERE.EXAMPLE.COM", "DEVEL.EXAMPLE.COM EXAMPLE.COM COM ORG EXAMPLE.ORG PROD.EXAMPLE.ORG", "[capaths] A.EXAMPLE.COM = { B.EXAMPLE.COM = . C.EXAMPLE.COM = B.EXAMPLE.COM D.EXAMPLE.COM = B.EXAMPLE.COM D.EXAMPLE.COM = C.EXAMPLE.COM }", "A.EXAMPLE.COM EXAMPLE.COM B.EXAMPLE.COM" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_smart_cards/setting_up_cross_realm_authentication
2.2. PowerTOP
2.2. PowerTOP The introduction of the tickless kernel in Red Hat Enterprise Linux 6 (refer to Section 3.6, "Tickless Kernel" ) allows the CPU to enter the idle state more frequently, reducing power consumption and improving power management. The PowerTOP tool identifies specific components of the kernel and userspace applications that frequently wake up the CPU. PowerTOP was used in development to perform the audits described in Section 3.13, "Optimizations in User Space" that led to many applications being tuned in this release, reducing unnecessary CPU wakeups by a factor of ten. Red Hat Enterprise Linux 6 comes with version 2.x of PowerTOP . This version is a complete rewrite of the 1.x codebase. It features a clearer tab-based user interface and extensively uses the kernel "perf" infrastructure to give more accurate data. The power behavior of system devices is tracked and prominently displayed, so problems can be pinpointed quickly. More experimentally, the 2.x codebase includes a power estimation engine that can indicate how much power individual devices and processes are consuming. Refer to Figure 2.1, "PowerTOP in Operation" . To install PowerTOP , run the following command as root: To run PowerTOP , execute the following command as root : PowerTOP can provide an estimate of the total power usage of the system and show individual power usage for each process, device, kernel work, timer, and interrupt handler. Laptops should run on battery power during this task. To calibrate the power estimation engine, run the following command as root: Calibration takes time. The process performs various tests, and will cycle through brightness levels and switch devices on and off. Allow the process to finish and do not interact with the machine during the calibration. When it completes, PowerTOP starts as normal. Then keep PowerTOP running for approximately an hour to collect data. When enough data is collected, power estimation figures will be displayed in the first column. If you are executing powertop --calibrate on a laptop, it should still be running on battery power so that all available data is presented. While it runs, PowerTOP gathers statistics from the system. In the Overview tab, you can view a list of the components that are either sending wake-ups to the CPU most frequently or are consuming the most power (refer to Figure 2.1, "PowerTOP in Operation" ). The adjacent columns display the power estimation, how the resource is being used, the number of wakeups per second, the classification of the component (such as process, device, or timer), and a description of the component. The number of wakeups per second indicates how efficiently the services or the devices and drivers of the kernel are performing. Fewer wakeups mean that less power is consumed. Components are ordered by how much further their power usage can be optimized. Tuning driver components typically requires kernel changes, which is beyond the scope of this document. However, userland processes that send wakeups are more easily managed. First, determine whether this service or application needs to run at all on this system. If not, simply deactivate it. To turn off an old System V service permanently, run: For more details about a process, run the following commands as root: If the trace looks like it is repeating itself, then it probably is a busy loop. Fixing such bugs typically requires a code change in that component. As seen in Figure 2.1, "PowerTOP in Operation" , total power consumption and the remaining battery life are displayed, if applicable.
Below these is a short summary featuring total wakeups per second, GPU operations per second, and virtual filesystem operations per second. The rest of the screen lists processes, interrupts, devices, and other resources sorted according to their utilization. If the tool has been calibrated, the first column also shows a power consumption estimate for every listed item. Use the Tab and Shift + Tab keys to cycle through tabs. In the Idle stats tab, use of C-states is shown for all processors and cores. In the Frequency stats tab, use of P-states including the Turbo mode (if applicable) is shown for all processors and cores. The longer the CPU stays in the higher C- or P-states, the better ( C4 being higher than C3 ). This is a good indication of how well the CPU usage has been optimized. Residency should ideally be 90% or more in the highest C- or P-state while the system is idle. The Device Stats tab provides similar information to the Overview tab but only for devices. The Tunables tab contains suggestions for optimizing the system for lower power consumption. Use the up and down keys to move through suggestions and the enter key to toggle the suggestion on and off. Figure 2.1. PowerTOP in Operation You can also generate HTML reports by running PowerTOP with the --html option. Replace the htmlfile.html parameter with the desired name for the output file: By default, PowerTOP takes measurements in 20-second intervals; you can change the interval with the --time option: For more information about the PowerTOP project, refer to https://01.org/powertop/ . PowerTOP can also be used along with the turbostat utility, a reporting tool that displays information about processor topology, frequency, idle power-state statistics, temperature, and power usage on Intel 64 processors. For more information about turbostat , refer to the turbostat man page or the relevant section in the Performance Tuning Guide .
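The --html and --time options can be combined, and a System V service identified in the Overview tab can be switched off with chkconfig , as the commands above show. The following is a minimal sketch that puts these pieces together; the report file name, the 60-second interval, and the service name are placeholders rather than values from this guide:

```
# Generate an HTML report, sampling every 60 seconds instead of the default 20
powertop --html=powertop-report.html --time=60

# If the Overview tab shows an unneeded System V service waking the CPU,
# turn it off permanently (replace "servicename" with the offending service)
chkconfig servicename.service off
```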
[ "install powertop", "powertop", "powertop --calibrate", "chkconfig servicename.service off", "ps -awux | grep processname strace -p processid", "powertop --html= htmlfile.html", "powertop --html= htmlfile.html --time= seconds" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/power_management_guide/powertop
Chapter 1. Upgrading Red Hat Single Sign-On
Chapter 1. Upgrading Red Hat Single Sign-On Red Hat Single Sign-On (RH-SSO) 7.6 is based on the Keycloak project and provides security for your web applications through Web single sign-on capabilities based on popular standards such as SAML 2.0, OpenID Connect, and OAuth 2.0. The Red Hat Single Sign-On Server can act as a SAML or OpenID Connect-based identity provider, mediating with your enterprise user directory or third-party SSO provider for identity information and your applications using standards-based tokens. RH-SSO provides two operating modes: standalone server or managed domain. The standalone server operating mode represents running RH-SSO as a single server instance. The managed domain operating mode allows for the management of multiple RH-SSO instances from a single control point. The upgrade process differs depending on which operating mode has been implemented. Specific instructions for each mode are provided where applicable. The purpose of this guide is to document the steps that are required to successfully upgrade from Red Hat Single Sign-On 7.x to Red Hat Single Sign-On 7.6. 1.1. About upgrades Depending on your version of RH-SSO, you choose one of three types of upgrade. However, if you are starting from Keycloak, use this procedure . 1.1.1. Major upgrades A major upgrade or migration is required when RH-SSO is upgraded from one major release to another, for example, from Red Hat Single Sign-On 7.2 to Red Hat Single Sign-On 8.0. There may be breaking API changes between major releases that could require rewriting parts of applications or server extensions. 1.1.2. Minor updates Red Hat Single Sign-On periodically provides point releases, which are minor updates that include bug fixes, security fixes, and new features. If you plan to upgrade from one Red Hat Single Sign-On point release to another, for example, from Red Hat Single Sign-On 7.3 to Red Hat Single Sign-On 7.6, code changes should not be required for applications or custom server extensions as long as no private, unsupported, or tech preview APIs are used. 1.1.3. Micro updates Red Hat Single Sign-On 7.6 also periodically provides micro releases that contain bug and security fixes. Micro releases increment the last digit of the version, for example from 7.6.0 to 7.6.1. These releases do not require migration and should not impact the server configuration files. The patch management system for ZIP installations can also roll back the patch and server configuration. A micro release only contains the artifacts that have changed. For example, if Red Hat Single Sign-On 7.6.1 contains changes to the server and the JavaScript adapter, but not the EAP adapter, only the server and JavaScript adapter are released and require updating. 1.2. Migrating Keycloak to RH-SSO You can migrate to Red Hat Single Sign-On, the supported Red Hat product, from Keycloak, the community project. Prerequisites To learn about new features before the upgrade, review the changes . Verify that you have installed the correct version of Keycloak as a starting point. To migrate to Red Hat Single Sign-On 7.6, first install Keycloak 18.0.0. Procedure Perform the Minor Upgrades procedure. Although this procedure is labelled Minor Upgrade , the same steps apply for this migration. Perform the Adapter Upgrade procedure .
null
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/upgrading_guide/intro
probe::ipmib.InReceives
probe::ipmib.InReceives Name probe::ipmib.InReceives - Count an arriving packet Synopsis ipmib.InReceives Values skb pointer to the struct sk_buff being acted on op value to be added to the counter (default value of 1) Description The packet pointed to by skb is filtered by the function ipmib_filter_key . If the packet passes the filter, it is counted in the global InReceives counter (equivalent to SNMP's MIB IPSTATS_MIB_INRECEIVES).
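As a rough, illustrative sketch only (the ten-second window and the variable name are arbitrary choices, not part of the reference above), the probe can be exercised directly from the command line with the stap tool, provided systemtap and the matching kernel debuginfo packages are installed:

```
# Count packets seen by ipmib.InReceives for ten seconds, then print the total
stap -e '
global total
probe ipmib.InReceives { total += op }
probe timer.s(10) { printf("InReceives in 10s: %d\n", total); exit() }
'
```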
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ipmib-inreceives
A.10. turbostat
A.10. turbostat The turbostat tool provides detailed information about the amount of time that the system spends in different states. Turbostat is provided by the kernel-tools package. By default, turbostat prints a summary of counter results for the entire system, followed by counter results every 5 seconds, under the following headings: pkg The processor package number. core The processor core number. CPU The Linux CPU (logical processor) number. %c0 The percentage of the interval for which the CPU retired instructions. GHz The average clock speed while the CPU was in the c0 state. When this number is higher than the value in TSC, the CPU is in turbo mode. TSC The average clock speed over the course of the entire interval. %c1, %c3, and %c6 The percentage of the interval for which the processor was in the c1, c3, or c6 state, respectively. %pc3 or %pc6 The percentage of the interval for which the processor was in the pc3 or pc6 state, respectively. Specify a different period between counter results with the -i option, for example, run turbostat -i 10 to print results every 10 seconds instead. Note Upcoming Intel processors may add additional c-states. As of Red Hat Enterprise Linux 7.0, turbostat provides support for the c7, c8, c9, and c10 states.
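For example (the interval and the workload command below are illustrative and not taken from this reference):

```
# Print counter results every 10 seconds instead of the default 5 (run as root)
turbostat -i 10

# Collect counters only for the duration of a specific workload
# ("./my_workload" is a placeholder for any command)
turbostat ./my_workload
```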
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-turbostat
Chapter 25. Compiler and Tools
Chapter 25. Compiler and Tools Support of OpenMP 4.5 for libgomp in GCC This update provides support for the new version of OpenMP in GCC to allow programs in the Developer Toolset to properly link and run. (BZ#1357060) Better stack protection in GCC Prior to this update, GCC stack protection did not work for functions that only contained variable-length arrays and no other (or only very small) arrays. Consequently, a buffer overflow error could occur undetected. This bug has been fixed and the compiler is now able to instrument even such functions. (BZ# 1289022 ) gdbserver now supports seamless debugging of processes from containers Prior to this update, when GDB was executing inside a Super-Privileged Container (SPC) and attached to a process that was running in another container on Red Hat Enterprise Linux Atomic Host, GDB did not locate the binary images of the main executable or any shared libraries loaded by the process to be debugged. As a consequence, GDB may have displayed error messages relating to files not being present, or being present but mismatched. Also, GDB may have seemed to attach correctly, but subsequent commands may have failed or displayed corrupted information. In Red Hat Enterprise Linux 7.3, gdbserver has been extended for seamless support of debugging processes from containers. The Red Hat Enterprise Linux 7.3 version of gdbserver newly supports the qXfer:exec-file:read and vFile:setfs packets. However, the Red Hat Enterprise Linux 7.3 version of gdb cannot use these packets. The Red Hat Developer Toolset 4.1 (or higher) version of gdb is recommended for use with containers and with Red Hat Enterprise Linux 7.3 gdbserver . The Red Hat Developer Toolset version of gdbserver can be used as well. Red Hat Enterprise Linux 7.3 gdb can now suggest using gdbserver when run with the -p parameter (or the attach command) and when, at the same time, it detects that the process being attached is from a container. Red Hat Enterprise Linux 7.3 gdb now also suggests the explicit use of the file command to specify the location of the process executable in the container being debugged. The file command does not need to be entered when the Red Hat Developer Toolset version of gdb is being used instead. With this update, Red Hat Enterprise Linux 7.3 gdbserver provides seamless debugging of processes from containers together with Red Hat Developer Toolset 4.1 (or higher) gdb . Additionally, Red Hat Enterprise Linux 7.3 gdb guides the user through the debugging of processes from containers when Red Hat Developer Toolset gdb is not available. (BZ# 1186918 ) GDB no longer kills running processes with deleted executables Prior to this update, GDB attempting to attach to a running process with a deleted executable would accidentally kill the process. This bug has been fixed, and GDB no longer erroneously kills processes with deleted executables. (BZ#1326476) GDB now generates smaller core files and respects core-dump filtering The gcore command, which provides GDB with its own core-dumping functionality, has been updated to more closely simulate the function of the Linux kernel core-dumping code, thus generating smaller core-dump files. GDB now also respects the /proc/PID/coredump_filter file, which controls what memory segments are written to core-dump files. 
(BZ#1265351) Better error message for AArch64 For the AArch64 target, if a program declared a global variable as a type smaller than an integer, but then referred to it in another file as if it were an integer, the linker could generate a confusing error message. This update fixes the error message, clearly identifying the cause and suggesting a possible reason for the error to the user. (BZ# 1300543 ) Large and/or high-address programs now link and execute correctly on AArch64 Previously, incorrect code in the linker could result in incorrect branch stubs being generated. Consequently, programs that were very big, or in which the programmer had coded parts of the program to exist at a very high address, failed to link. The bug has been fixed and the correct kind of branch stub is now selected. (BZ# 1243559 ) The opreport and opannote utilities now properly analyze archive data. Previously, when using oparchive to store data, the associated samples were not included in the archive. In addition, the oprofile utilities selected data in the current working oprofile_data directory rather than in the archive. Consequently, the opreport and opannote utilities were unable to properly analyze data in an archive generated by oparchive . This update provides a fix for storing the profiling samples in the archive and selecting them for use with archives, and opreport and opannote now work as expected. (BZ# 1264443 ) Events with identical numerical unit masks are now handled by their names The 5th-generation Core i3, i5, and i7 Intel processors have some events that have multiple unit masks with the same numerical value. As a consequence, some events' default unit masks were not found and selected. This update changes the events to use a name rather than a numerical value for the default unit mask, thus fixing this bug. (BZ#1272136) New MACRO_INSTS_FUSED event identifier Previously, the MACRO_INSTS identifier was used for two different events in the 1st-generation Core i3, i5, and i7 Intel processors. As a consequence, it was impossible to clearly select either event by using MACRO_INSTS . This update renames one of the events to MACRO_INSTS_FUSED , thus fixing this bug. (BZ#1335145) Applications no longer crash upon multiple libpfm initializations Previously, when the libpfm initialization code was called multiple times while running as root (for example, in the PAPI fmultiplex1 test), the libpfm internal data structures became corrupted, causing an unexpected termination. This update ensures the counter of available events is properly reset, and applications using libpfm running as root no longer crash when libpfm is reinitialized. (BZ#1276702) Removal of purposeless warning message for physically non-existent nodes Previously, when the numa_node_to_cpus() function was called on a node which did not have an entry in the sysfs directory, the libnuma library always printed a warning message about an invalid sysfs. Consequently, libnuma printed the confusing warning message also for physically non-existent nodes (for example, for non-contiguous node numbers) and this warning could not be overridden when the function was called using the dlsym interface. With this update, the mentioned warning message is printed just for NUMA nodes that were found during an initial scan but then did not appear in sysfs. As a result, users of libnuma no longer receive the warning message for non-contiguous node numbers.
(BZ#1270734) Selection of OpenJDK version family now remembered across updates Prior to this update, when a user had multiple JDKs installed, yum update always updated to the newest JDK even if the user had previously selected some lower-prioritized JDK. This update introduces the --family switch for chkconfig , which makes sure that the selected JDK remains in the version family after system updates. (BZ# 1296413 ) RC4 is now disabled by default in OpenJDK 6 and OpenJDK 7 Earlier OpenJDK packages allowed the RC4 cryptographic algorithm to be used when making secure connections using Transport Layer Security (TLS). This algorithm is no longer secure, and it has been disabled in this release. To retain its use, it is necessary to revert the jdk.tls.disabledAlgorithms setting to its earlier value of SSLv3, DH keySize < 768 . This can be done permanently in the <java.home>/jre/lib/security/java.security file or by adding the following line: to a new text file and passing the location of that file to Java on the command line using the -Djava.security.properties=<path to file> argument (a brief command-line sketch of this appears at the end of this chapter). (BZ# 1302385 ) zsh no longer deadlocks on malloc() execution Previously, if the zsh process received a signal during the execution of a memory allocation function and the signal handler attempted to allocate or free memory, zsh entered a deadlock and became unresponsive. With this update, signal handlers are no longer enabled while handling the global state of zsh or while using the heap memory allocator. This ensures that the described deadlock no longer occurs. (BZ# 1267912 ) SCSI device types described using multiple words are now handled correctly Prior to this update, the rescan-scsi-bus.sh tool misinterpreted SCSI device types that were described using more than one word, for example, Medium Changer or Optical Device . Consequently, when the script was run on systems that had such device types attached, the script printed multiple misleading error messages. With this update, device types described with multiple words are handled correctly, and the proper device-type description is returned to the user without any errors. (BZ#1298739) Sphinx builds HTML documentation in FIPS mode properly Previously, the Python Sphinx generator failed to build documentation in the HTML format on systems with FIPS mode activated. With this update, the use of the md5() function has been fixed by setting the used_for_security parameter to false . As a result, Sphinx now builds HTML documentation as expected. (BZ# 966954 ) Perl interpreter no longer crashes after using the PerlIO locale pragma When a thread was spawned after using the PerlIO locale pragma, the Perl interpreter terminated unexpectedly with a segmentation fault. An upstream patch has been applied, which fixes PerlIO::encoding object duplication. As a result, threads are correctly created after setting a file handle encoding. (BZ# 1344749 ) Line endings are now preserved in files uploaded with the Net::FTP Perl module in text mode Previously, when uploading a file with the Net::FTP Perl module in text mode, ends of lines in the uploaded file were incorrectly transformed. This update corrects end-of-line normalization from local to Network Virtual Terminal (NVT) encoding when uploading data to an FTP server, and the described problem no longer occurs.
(BZ# 1263734 ) Perl interpreter no longer crashes when using glob() with a threaded program Previously, when calling the Perl glob() function after spawning a thread, the Perl interpreter terminated unexpectedly with a segmentation fault. An upstream patch has been applied to clone glob() interpreter-wide data, and using Perl glob() with a threaded program now works as expected. (BZ# 1223045 ) cgroup values can now be correctly displayed for threads under a parent process by using ps -o thcgr Previously, the ps command displayed only the control group ( cgroup ) of the parent process. Consequently, cgroup values of the threads under a parent process were identical to the cgroup value of the parent process. This update introduces a new option, thcgr , to maintain compatibility with current cgroup listing. When the thcgr option is used, the correct individual cgroup values are displayed for threads under the parent process. (BZ# 1284087 ) pmap no longer reports incorrect totals With the introduction of VmFlags in the kernel smaps interface, the pmap tool could no longer reliably process the content due to format differences of the VmFlags entry. As a consequence, pmap reported incorrect totals. The underlying source code has been patched, and pmap now works as expected. (BZ# 1262864 ) vmstat -d is now able to display devices with longer names When a disk statistics report is required, only the first 15 characters of the device name were previously read from the /proc/diskstats file. Consequently, devices with names longer than 15 characters were not shown in the output of the vmstat -d command. With this update, the formatting string has been changed to read up to 31 characters, and devices with longer names are now correctly displayed by vmstat -d . (BZ# 1169349 ) A new perl-Perl4-CoreLibs subpackage contains previously removed files The provides tag was incorrectly set for previously deprecated files that were no longer included in the perl package. To fix this bug, these files have been backported from an earlier version of Perl and are now provided by a newly created perl-Perl4-CoreLibs subpackage. (BZ# 1365991 ) GSS-Proxy caches file descriptors less frequently Previously, the mechglue layer of GSS-Proxy cached file descriptors for the lifetime of the process. As a consequence, daemons that often change the UID or GID, such as autofs , could behave unexpectedly. A patch has been applied to close and reopen the connection to GSS-Proxy when an ID changes. As a result, GSS-Proxy caches file descriptors less frequently and daemons that change the UID or GID now work as expected. (BZ#1340259) Fix to the PAPI_L1_TCM event computation Previously, the PAPI preset for L1 total cache misses (PAPI_L1_TCM) was computed incorrectly on 4th-generation Core i3, i5, and i7 Intel processors. This update fixes the computation of the PAPI_L1_TCM event and programs using PAPI_L1_TCM on these processors now get more accurate measurements. (BZ#1277931) More accurate PAPI_L1_DC* events on IBM Power7 and IBM Power8 platforms Previously, the PAPI event presets for cache events incorrectly computed derived values for various IBM Power7 and Power8 processors. Consequently, the PAPI_L1_DCR , PAPI_L1_DCW , and PAPI_L1_DCA event values were incorrect. The preset computations have been fixed and the mentioned events are now more accurate.
(BZ#1263666) Improved Postfix expression parser Previously, the Postfix expression parser used to calculate derived metrics from expressions in the papi_events.csv file did not perform proper error checking and incorrectly parsed some expressions. Consequently, the parser could potentially write outside the buffers being used to compute the value of a derived metric and cause stack smashing errors for some expressions. A fix has been provided for the parser to prevent it from overwriting memory with incorrect expressions. Now, the parser properly and robustly parses Postfix expressions in papi_events.csv and reports errors on improper expressions rather than overwriting random regions of memory. (BZ#1357587) Undefined variable in the udp() function of the python-dns toolkit is now set Previously, the python-dns toolkit used an undefined response_time variable in the finally section of the udp() function. As a consequence, an incorrect exception was displayed to the user. This bug has been fixed and the correct exception is returned. (BZ# 1312770 ) zsh parses unescaped exclamation marks correctly now Previously, zsh parser state was insufficiently initialized. Consequently, zsh failed to parse unescaped exclamation marks in a text string. With this update, zsh properly initializes the parser state. As a result, zsh now parses unescaped exclamation marks correctly. (BZ# 1338689 ) zsh no longer hangs when receiving a signal while processing a job exit Previously, signal handlers were enabled while processing a job exit in zsh . Consequently, if a signal was received while using the memory allocator and its handler attempted to allocate or free memory, the zsh process ended up in a deadlock and became unresponsive. With this update, signal handlers are no longer enabled while processing a job exit. Instead, signals are queued for delayed execution of the signal handlers. As a result, the deadlock no longer occurs and zsh no longer hangs. (BZ# 1291782 ) zsh handles the out of memory scenario gracefully now The zsh shell allocates memory while printing the out of memory fatal error message. Previously, if the printing routine failed to allocate memory, it triggered an infinite recursion. Consequently, the zsh process terminated unexpectedly due to a stack overflow. With this update, the infinite recursion no longer appears in this scenario. As a result, after printing the fatal error message, zsh now terminates gracefully in case it runs out of memory. (BZ# 1302229 ) Syntax check in ksh compatibility mode now works as expected in zsh Previously, while checking the syntax of a shell script in ksh compatibility mode, zsh incorrectly initialized the $HOME internal variable. Consequently, the zsh process terminated unexpectedly after it attempted to dereference a NULL pointer. With this update, the $HOME internal variable is properly initialized. As a result, the syntax check in ksh compatibility mode now works as expected in zsh . (BZ# 1267251 ) Parsing command substitutions no longer corrupts command history Previously, commands having the $() command substitution construct were recorded incorrectly in the command history. This bug has been fixed and parsing command substitutions no longer corrupts command history. (BZ# 1321303 ) haproxy configuration files can now use host names longer than 32 characters correctly Previously, when haproxy was configured to use peer host names, a bug caused host names longer than 32 characters to be truncated.
As a consequence, the haproxy configuration files became invalid. This bug has now been fixed, and host names specified as peers can now safely exceed 32 characters. (BZ#1300392) RPM verification failures no longer occur after installing psacct When installing the psacct packages, the mode of the /var/account/pacct file was previously not set consistently with logrotate rules for psacct . As a consequence, the mode of /var/account/pacct stayed different from these rules after the installation and caused RPM verification failures. With this update, the mode of /var/account/pacct is set to 0600 during installation of psacct to align with logrotate ghost file rules. As a result, RPM verification failures no longer occur. (BZ# 1249665 ) The system is no longer rebooted unexpectedly due to SIGINT passed by sadc Due to a race condition, the sadc command sometimes passed the SIGINT signal to the init process. As a consequence, the system could be unexpectedly rebooted. This update adds a verification that the SIGINT signal is not sent to the init process. As a result, the system is no longer rebooted unexpectedly. (BZ# 1328490 ) pidstat no longer outputs values above 100% for certain fields Previously, the pidstat command could, under rare circumstances, run out of preallocated space for PIDs on systems with many short-lived processes. As a consequence, the pidstat output contained nonsensical values larger than 100%, in the %CPU , %user , and %sys fields. With this update, pidstat automatically reallocates space for PIDs, and outputs correct values for all fields. (BZ#1224882) /usr/bin/nfsiostat provided by sysstat has been deprecated in favor of /sbin/nfsiostat provided by nfs-utils Previously, two packages provided executables of the same name: the sysstat packages provided /usr/bin/nfsiostat and the nfs-utils packages provided /sbin/nfsiostat . As a consequence, it was not clear which binary was executed unless the full path was specified. The nfsiostat utility provided by sysstat has been deprecated in favor of the one provided by nfs-utils . In a transition period, the nfsiostat binary from the sysstat packages is renamed to nfsiostat-sysstat . (BZ# 846699 ) iostat can now print device names longer than 72 characters Previously, device names longer than 72 characters were truncated in the iostat command output because the device name field was too short. The allocated space for device names has been increased to 128 characters, and iostat can now print longer device names in the output. (BZ#1267972) Copying sparse files with trailing extents using cp no longer causes data corruption When creating sparse files, the fallocate utility could allocate extents beyond EOF using FALLOC_FL_KEEP_SIZE . As a consequence, when there was a gap (hole) between the extents, and EOF was within that gap, the final hole was not reproduced, which caused silent data corruption in the copied file due to its size being too small. With this update, the cp command ensures that extents beyond the apparent file size are not processed, as such processing and allocating is not currently supported. As a result, silent data corruption in certain type of sparse files no longer occurs. (BZ# 1284906 ) NFS shares mounted by autofs no longer cause timeouts when listing local mounts using df A bug in df could previously cause NFS shares mounted by autofs to be detected as local mounts. Attempts to list only local mounts using the -l option then timed out, because df was attempting to list these incorrectly detected shares. 
This bug has been fixed, and listing local mounts now works as expected. (BZ# 1309247 ) ksh now correctly displays login messages When logging in to an interactive login shell, the contents of the /etc/profile script are executed in order to set up an initial environment. Messages which should have been displayed to the user upon logging in to the Korn shell (ksh) were suppressed due to an internal test to determine whether the shell is a login shell that relied upon the value of the PS1 environment variable having already been set before /etc/profile was executed. However, this environment variable is set in the Korn shell only after /etc/profile is executed, which led to messages never being displayed to ksh users. This update provides an alternative test that does not rely on the PS1 variable being set before /etc/profile execution, with the result that messages are properly displayed to users of the Korn shell upon login. (BZ# 1321648 ) New POSIX semaphore destruction semantics Previously, the implementation of POSIX semaphores in glibc did not follow the current POSIX requirements for semaphores to be self-synchronizing. As a consequence, the sem_post() and sem_wait() functions could terminate unexpectedly or return the EINVAL error code because they accessed the semaphore after it had been destroyed. This update provides an implementation of the new POSIX semaphore destruction semantics which keeps track of waiters, avoiding premature destruction of the semaphore. The semaphores implemented by glibc are now self-synchronizing, thus fixing this bug. (BZ# 1027348 ) Disks are now cleanly unmounted after SELinux automatic re-label Previously, after SELinux relabel, the rhel-autorelabel script started system reboot by running the systemctl --force reboot command. Consequently, certain steps required to cleanly unmount the rootfs image and deactivate the underlying Device Mapper (DM) device were skipped. To fix this bug, the rhel-autorelabel script has been modified to invoke the dracut-initramfs-restore script before the reboot. As a result, disks are now cleanly unmounted in the described scenario. (BZ# 1281821 ) sosreport now correctly collects output of sources with non-ASCII characters Prior to this update, the sosreport was not fully generated when the sosreport utility attempted to collect the output of a file or command whose name included non-ASCII characters. With this update, such files and commands are properly collected and reported in the utility. (BZ#1296813) Configuring kdump to an NFS target destination is now possible in the Kernel Dump Configuration GUI Previously, the input box for NFS target destination in the Kernel Dump Configuration GUI did not indicate that an export path needs to be entered. Consequently, users were not able to configure the kdump feature to an NFS target destination when using this GUI. With this update, the input box label has been changed to indicate that an export path is required, and users are able to configure kdump in the described situation. (BZ# 1208191 ) Correct warning message when configuring kdump to an NFS target with NFS shares unmounted Prior to this update, users were warned with confusing error messages when trying to configure kdump to an NFS target destination if NFS shares were not mounted. The system-config-kdump utility, operated through the Kernel Dump Configuration GUI, did not indicate that the NFS export needs to be mounted before applying the kdump configuration.
Instead, multiple confusing error messages were returned. With this update, the warning message has been changed to indicate that the NFS export is currently not mounted and that it should already be mounted at the time of kdump configuration. This warning message is less confusing and provides the user with proper information on how to successfully complete the kdump configuration. (BZ#1121590) lparstat no longer fails due to long lines in /proc/interrupts Prior to this update, if the SPU line in the /proc/interrupts file was longer than 512 characters, using the lparstat command failed. With this update, lparstat properly parses interrupt lines, and thus returns correct results in the described circumstances. (BZ# 1366512 ) lparstat default output mode now reports properly Previously, when using the default output mode of the lparstat utility, lparstat incorrectly reported the value of certain parameters, for example physc , as 0.00 . This problem has been fixed, and the affected values are now displayed properly. (BZ#1347083) The Socket::getnameinfo module now works correctly with tainted values Previously, the Perl Socket::getnameinfo module failed to process tainted values. This update applies a patch and as a result, the module now works correctly with tainted values. (BZ# 1200167 ) The python-sphinx module no longer fails to build documentation Previously, the man-page writer module of the python-sphinx package missed the meta and inline node visitors. As a consequence, building documentation could fail. A patch has been provided to add the missing node visitors and as a result, documentation now builds successfully. (BZ# 1291573 ) Programs no longer run out of memory when repeatedly listing available polkit actions Previously, the polkit client library did not correctly free memory when listing available actions, which could cause programs to run out of memory and terminate. With this update, the library frees the memory correctly, and programs no longer crash in this scenario. (BZ# 1310738 ) unzip now supports non-Latin and non-Unicode encodings Previously, unzip did not support non-Latin and non-Unicode encodings, so files with incorrect names could be created. With this update, unzip supports these encodings using the -O and -I options. For more information, run the unzip -h command. (BZ#1276744) zlib now decompresses RFC1951 compliant files correctly Previously, due to a bug in zlib , RFC1951 compliant files were not correctly decompressed. With this update, the bug has been fixed, and zlib decompresses RFC1951 compliant files correctly. (BZ#1127330) The glibc times() function now supports NULL for the buffer Previously, the times() function in glibc did not allow users to set a NULL value for the buffer. As a consequence, the function could cause the application using it to terminate unexpectedly. This update applies a patch and as a result, you can set a NULL value for the buffer and the kernel system call returns the expected results. (BZ#1308728) iconv no longer adds a redundant shift sequence Previously, a bug in the character conversion routines used by iconv for the IBM930, IBM933, IBM935, IBM937, and IBM939 character sets could result in a redundant shift sequence being included in the output of the tool. The generated non-conforming output could result in an inability to read the output data. The character conversion routines have been fixed and no longer return a redundant shift sequence.
(BZ#1293916) Core C library (glibc) enhanced to increase malloc() scalability A defect in the implementation of the malloc() function could result in unnecessary serialization of memory allocation requests across threads. This update fixes the bug and substantially increases the concurrent throughput of allocation requests for applications that frequently create and destroy threads. (BZ# 1276753 ) Dynamic linker no longer fails when an audit module provides alternate DSO Previously, when an audit module provided an alternate DSO (dynamic shared object) path, the ld.so dynamic linker terminated unexpectedly with a segmentation fault. This update fixes the bug and the dynamic linker now keeps track of the original DSO path for future reference and no longer crashes in the described scenario. (BZ# 1211100 ) selinux-policy now allows hypervkvpd to getattr on all filesystem types Previously, an SELinux denial occurred during the execution of the restorecon command after an IP injection on the virtual machine with the Data Exchange option enabled. The selinux-policy packages have been updated, and an IP injection now finishes correctly both in SELinux permissive and enforcing mode. (BZ# 1349356 )
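The RC4 note above (BZ# 1302385) mentions re-enabling the algorithm by passing an override file to the JVM with -Djava.security.properties . The following is a minimal, hypothetical sketch of that procedure; the file path and the application JAR name are placeholders, and the property line is the one quoted in the note:

```
# Create an override file containing only the earlier setting
cat > /tmp/rc4-override.properties <<'EOF'
jdk.tls.disabledAlgorithms=SSLv3, DH keySize < 768
EOF

# Point the JVM at the override file for a single application run
java -Djava.security.properties=/tmp/rc4-override.properties -jar myapp.jar
```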
[ "jdk.tls.disabledAlgorithms=SSLv3, DH keySize < 768" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/bug_fixes_compiler_and_tools
13.2. Certificate Management in Firefox
13.2. Certificate Management in Firefox To manage certificates in Firefox, open the Certificate Manager . In Mozilla Firefox, open the Firefox menu and click Preferences . Figure 13.2. Firefox Preferences Open the Advanced section and choose the Certificates tab. Figure 13.3. Certificates Tab in Firefox Click View Certificates to open the Certificate Manager . To import a CA certificate: Download and save the CA certificate to your computer. In the Certificate Manager , choose the Authorities tab and click Import . Figure 13.4. Importing the CA Certificate in Firefox Select the downloaded CA certificate. To set the certificate trust relationships: In the Certificate Manager , under the Authorities tab, select the appropriate certificate and click Edit Trust . Edit the certificate trust settings. Figure 13.5. Editing the Certificate Trust Settings in Firefox To use a personal certificate for authentication: In the Certificate Manager , under the Your Certificates tab, click Import . Figure 13.6. Importing a Personal Certificate for Authentication in Firefox Select the required certificate from your computer.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/using_the_certificates_on_the_token_for_ssl_
1.2. Variable Name: EAP_HOME
1.2. Variable Name: EAP_HOME EAP_HOME refers to the root directory of the Red Hat JBoss Enterprise Application Platform installation on which JBoss Data Virtualization has been deployed.
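For example, assuming a hypothetical installation directory (the path below is illustrative only, not a value mandated by this guide), documented paths written with EAP_HOME resolve as follows:

```
# Hypothetical installation location of Red Hat JBoss EAP
EAP_HOME=/opt/jboss-eap

# A documented path such as EAP_HOME/bin then refers to:
ls "$EAP_HOME/bin"
```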
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_2_modeshape_tools/variable_name_eap_home
Chapter 60. JmxTransTemplate schema reference
Chapter 60. JmxTransTemplate schema reference Used in: JmxTransSpec Properties (name, type, and description): deployment ( DeploymentTemplate ) - Template for JmxTrans Deployment . pod ( PodTemplate ) - Template for JmxTrans Pods . container ( ContainerTemplate ) - Template for JmxTrans container. serviceAccount ( ResourceTemplate ) - Template for the JmxTrans service account.
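As an illustrative sketch only (assuming the JmxTransSpec property that holds this template is named template , and with invented label and environment values; the surrounding Kafka resource is abbreviated), a JmxTransTemplate could be written as:

```
# Fragment of a Kafka custom resource spec; only template-related fields shown
jmxTrans:
  # ... outputDefinitions and kafkaQueries omitted ...
  template:
    pod:
      metadata:
        labels:
          app: jmxtrans-metrics        # illustrative label
    container:
      env:
        - name: JAVA_OPTS              # illustrative environment variable
          value: "-Xms128m"
```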
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-JmxTransTemplate-reference
Chapter 5. About the Migration Toolkit for Containers
Chapter 5. About the Migration Toolkit for Containers The Migration Toolkit for Containers (MTC) enables you to migrate stateful application workloads from OpenShift Container Platform 3 to 4.15 at the granularity of a namespace. Important Before you begin your migration, be sure to review the differences between OpenShift Container Platform 3 and 4 . MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. The MTC console is installed on the target cluster by default. You can configure the Migration Toolkit for Containers Operator to install the console on an OpenShift Container Platform 3 source cluster or on a remote cluster . MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. The service catalog is deprecated in OpenShift Container Platform 4. You can migrate workload resources provisioned with the service catalog from OpenShift Container Platform 3 to 4 but you cannot perform service catalog actions such as provision , deprovision , or update on these workloads after migration. The MTC console displays a message if the service catalog resources cannot be migrated. 5.1. Terminology Table 5.1. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 5.2. 
MTC workflow You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.15 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API. MTC migrates the following resources: A namespace specified in a migration plan. Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources. For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs. Similarly, the MTC migrates persistent volumes that are linked to the persistent volume claims of the namespace. Note Cluster-scoped resources might have to be migrated manually, depending on the resource. Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level. Migrating an application with the MTC web console involves the following steps: Install the Migration Toolkit for Containers Operator on all clusters. You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry. Configure the replication repository, an intermediate object storage that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters. Add the source cluster to the MTC web console. Add the replication repository to the MTC web console. Create a migration plan, with one of the following data migration options: Copy : MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster. Note If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster. Move : MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters. Note Although the replication repository does not appear in this diagram, it is required for migration. Run the migration plan, with one of the following options: Stage copies data to the target cluster without stopping the application. A stage migration can be run multiple times so that most of the data is copied to the target before migration. Running one or more stage migrations reduces the duration of the cutover migration. Cutover stops the application on the source cluster and moves the resources to the target cluster. Optional: You can clear the Halt transactions on the source cluster during migration checkbox. 5.3. 
About data copy methods The Migration Toolkit for Containers (MTC) supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 5.3.1. File system copy method MTC copies data files from the source cluster to the replication repository, and from there to the target cluster. The file system copy method uses Restic for indirect migration or Rsync for direct volume migration. Table 5.2. File system copy method summary Benefits Limitations Clusters can have different storage classes. Supported for all S3 storage providers. Optional data verification with checksum. Supports direct volume migration, which significantly increases performance. Slower than the snapshot copy method. Optional data verification significantly reduces performance. Note The Restic and Rsync PV migration assumes that the PVs supported are only volumeMode=filesystem . Using volumeMode=Block for file system migration is not supported. 5.3.2. Snapshot copy method MTC copies a snapshot of the source cluster data to the replication repository of a cloud provider. The data is restored on the target cluster. The snapshot copy method can be used with Amazon Web Services, Google Cloud Provider, and Microsoft Azure. Table 5.3. Snapshot copy method summary Benefits Limitations Faster than the file system copy method. Cloud provider must support snapshots. Clusters must be on the same cloud provider. Clusters must be in the same location or region. Clusters must have the same storage class. Storage class must be compatible with snapshots. Does not support direct volume migration. 5.4. Direct volume migration and direct image migration You can use direct image migration (DIM) and direct volume migration (DVM) to migrate images and data directly from the source cluster to the target cluster. If you run DVM with nodes that are in different availability zones, the migration might fail because the migrated pods cannot access the persistent volume claim. DIM and DVM have significant performance benefits because the intermediate steps of backing up files from the source cluster to the replication repository and restoring files from the replication repository to the target cluster are skipped. The data is transferred with Rsync . DIM and DVM have additional prerequisites.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/migrating_from_version_3_to_4/about-mtc-3-4
D.3. Manually Setting Up Encrypted Communication for VDSM
D.3. Manually Setting Up Encrypted Communication for VDSM You can manually set up encrypted communication for VDSM with the Manager and with other VDSM instances. Only hosts in clusters with cluster level 3.6, 4.0, and 4.1 require manual configuration. Hosts in clusters with level 4.2 are automatically reconfigured for strong encryption during host reinstallation. Note RHVH 3.6, 4.0, and 4.1 hosts do not support strong encryption. RHVH 4.2 and RHEL hosts do support it. If you have 3.6, 4.0, or 4.1 clusters with RHVH 4.2 hosts, you can use strong encryption. Procedure Click Compute Hosts and select the host. Click Management Maintenance to open the Maintenance Host(s) confirmation window. Click OK to initiate maintenance mode. On the host, create /etc/vdsm/vdsm.conf.d/99-custom-ciphers.conf with the following setting: See OpenSSL Cipher Strings for more information. Restart VDSM: Click Compute Hosts and select the host. Click Management Activate to reactivate the host.
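On the host itself, the configuration and restart steps above can be carried out in a short shell session run as root while the host is in maintenance mode; this sketch simply combines the file contents and commands already shown in this procedure:

```
# Create the drop-in configuration with the cipher setting from this procedure
cat > /etc/vdsm/vdsm.conf.d/99-custom-ciphers.conf <<'EOF'
[vars]
ssl_ciphers = HIGH
EOF

# Restart VDSM so the new cipher string takes effect
systemctl restart vdsm
```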
[ "[vars] ssl_ciphers = HIGH", "systemctl restart vdsm" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/manually_setting_up_encrypted_communication_for_vdsm
Chapter 7. Compiler and Tools
Chapter 7. Compiler and Tools The Net::SMTP Perl module now supports SSL This update adds support for implicit and explicit TLS and SSL encryption to the Net::SMTP Perl module. As a result, it is now possible to communicate with SMTP servers through a secured channel. (BZ# 1557574 ) The Net::LDAP Perl module no longer defaults to TLS 1.0 Previously, when the Net::LDAP Perl module was used for upgrading an unsecured LDAP connection to a TLS-protected one, the module used the TLS protocol version 1.0, which is currently considered insecure. With this update, the default TLS version has been removed from Net::LDAP , and both implicit (LDAPS schema) and explicit (LDAP schema) TLS protocols rely on the default TLS version selected in the IO::Socket::SSL Perl module. As a result, it is no longer necessary to override the TLS version in the Net::LDAP clients by passing the sslversion argument to the start_tls() method to preserve security. (BZ# 1520364 ) timemaster now supports bonding devices The timemaster program can be used to synchronize the system clock to all available time sources in case there are multiple PTP domains available on the network, or fallback to NTP is needed. This update adds the possibility to specify bonding devices in the active-backup mode in the timemaster configuration file. timemaster now checks if the active interface supports software or hardware timestamping and starts ptp4l on the bonding interface. (BZ# 1549015 ) pcp rebased to version 4.1.0 The pcp packages have been upgraded to upstream version 4.1.0 of Performance Co-Pilot, which provides a number of bug fixes and enhancements over the previous version: Added a size-based interim compression to the pmlogger_check(1) script to reduce data volume sizes on systems configured via the pcp-zeroconf package. Daily compressed archive metadata files. Changed metric labels to first class PCP metric metadata. Metric help text and labels are now stored in PCP archives. Added more Linux kernel metrics: virtual machines, TTYs, aggregate interrupt and softirq counters, af_unix/udp/tcp connection (inet/ipv6), VFS locking, login sessions, AIO, capacity per block device, and others. Performance Metrics Application Programming Interface (PMAPI) and the Performance Metrics Domain Agent (PMDA) API have been refactored, including promotion and deprecation of individual functions. Added new virtual data optimizer (VDO) metrics to pmdadm(1) . Improved integration with the Zabbix agentd service with further low-level-discovery support in the pcp2zabbix(1) function. Added a new PMDA pmdabcc(1) for exporting BCC and eBPF trace instrumentation. Added a new PMDA pmdaprometheus(1) to consume metrics from Prometheus end-points. (BZ# 1565370 ) The ps utility now displays the Login ID associated with processes The new format option luid of the ps utility now enables you to display the Login ID associated with processes. To display the login ID attributes of running processes, use the following command: (BZ# 1518986 ) gcc-libraries rebased to version 8.2.1 The gcc-libraries packages have been updated to upstream version 8.2.1. This update adds the following changes: The libgfortran.so.5 and libgfortran.so.4 Fortran libraries have been added to enable running applications built with Red Hat Developer Toolset versions 7 and later. The libquadmath library has been added as a dependency of the libgfortran.so.5 library. The Cilk+ library has been removed.
(BZ#1600265) systemtap rebased to version 3.3 The systemtap packages have been upgraded to upstream version 3.3, which provides a number of bug fixes and enhancements: Limited support for extended Berkeley Packet Filter (eBPF) tracing on the Intel64 and AMD64 architectures has been added. Use the --runtime=bpf option to use eBPF as a backend. Due to numerous limitations of eBPF and its SystemTap interface, only simple scripts work. For more information, see the Knowledge article https://access.redhat.com/articles/3550581 and the stapbpf(8) manual page. The --sysroot option has been optimized for cross-compiled environments. A new --example option allows you to search the example scripts distributed with SystemTap without providing the whole path of the file. The SystemTap runtime and tapsets are compatible with kernel versions up to 4.17. Usage of SystemTap on systems with a real-time kernel or machines with a high number of CPUs has been improved. Handling of code used for Spectre and Meltdown attack mitigation has been improved. (BZ# 1565773 ) GDB can disassemble instructions for the z14 processor of the IBM Z architecture The GDB debugger has been extended to disassemble instructions of the z14 processor of the IBM Z architecture, including guarded storage instructions. Previously, GDB displayed only the numerical values of such instructions in the .long 0xNNNN form. With this update, GDB can correctly display mnemonic names of assembly instructions in code targeting this processor. (BZ#1553104) New packages: java-11-openjdk The java-11-openjdk packages provide OpenJDK 11 support through the yum utility. OpenJDK 11 is the Long-Term Support (LTS) version of Java supported by Red Hat after OpenJDK 8 . It provides multiple new features including Modularization, Application Class Data Sharing, Heap Allocation on Alternative Memory Devices, Local-Variable Syntax for Lambda Parameters, and TLS 1.3 support. The java-11-openjdk packages do not include unversioned provides because OpenJDK 11 is not fully compatible with OpenJDK 8 . (BZ#1570856) Support for new locales in glibc This update adds support for two new locales: Urdu (ur_IN) and Wolaytta (wal_ET). Additional support has also been added for newer currency symbols like the Euro, such as in el_GR@euro . Users can now specify these locales using the relevant environment variables to take advantage of the new localization support. (BZ# 1448107 ) New OFD Locking constants for 64-bit-offset programs Open File Descriptor (OFD) locks are superior to per-process locks for some applications. With this update, 64-bit-offset programs (those that have #define _FILE_OFFSET_BITS 64 ) are able to use the F_OFD_* constants in system calls, although they still need to detect if the kernel supports those operations. Note that programs which use 32-bit file offsets do not have access to these constants, as the RHEL 7 ABI does not support translating them. (BZ# 1461231 ) elfutils rebased to version 0.172 The elfutils packages have been upgraded to upstream version 0.172. This update adds support for the DWARF5 debug information format, split-dwarf, and GNU DebugFission: The eu-readelf tool can display split unit DIEs when the --debug-dump=info+ option is used. The eu-readelf tool can inspect separate .dwo DWARF skeleton files with debug information when the --dwarf-skeleton option is used. The libdw library now tries to resolve the alt file containing linked debug information even when it has not yet been set with the dwarf_set_alt() function.
The libdw library has been extended with the functions dwarf_die_addr_die() , dwarf_get_units() , dwarf_getabbrevattr_data() and dwarf_cu_info() . (BZ# 1565775 )
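As an illustration of the new ps format option described above (the extra output columns chosen here are standard ps fields and are not part of the release note):

```
# List every process with its PID, login ID, and command name
ps -eo pid,luid,comm
```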
[ "ps -o luid" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/new_features_compiler_and_tools